Prosecutors and AI: Navigating Justice in the Age of Algorithms


Ralph Losey. Published August 30, 2024.

AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes. However, this potential comes with a profound responsibility: ensuring that AI is employed in ways that uphold basic human principles of justice. This article will focus on how AI can assist prosecutors in fulfilling their duty to represent the people fairly and equitably. It will highlight the practical benefits of AI in criminal law, providing specific examples of its application. The underlying theme emphasizes the necessity of human oversight to prevent the misuse of AI and to ensure that justice remains a human ideal, not an artificial construct.

The integration of AI into criminal prosecutions must be aligned with the ethical and legal obligations of prosecutors as outlined, for instance, by the American Bar Association’s Criminal Justice Standards for the Prosecution Function (ABA, 4th ed. 2017) (hereinafter “ABA Standards”). The ABA Standards emphasize the prosecutor’s duty to seek justice, maintain integrity, and act with transparency and fairness in all aspects of the prosecution function. This article will not cover the related topic of AI-based evidence. See Gless, Lederer, Weigend, AI-Based Evidence in Criminal Trials? (William & Mary Law School, Winter 2024). Nor will it cover criminal defense issues, which may be the subject of a follow-up article.

The Promise of AI in Criminal Prosecutions

“The primary duty of the prosecutor is to seek justice within the bounds of the law, not merely to convict.” ABA Standard 3-1.2(b). When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools. AI-powered tools can enhance evidence analysis, case management, and decision-making, all while maintaining the integrity and fairness expected of the prosecution function. Prosecutors with AI can better manage the vast amounts of data in modern investigations, identify patterns that might escape human detection, and make more informed decisions. AI is no magic genie, but, when used properly, it can be a very powerful tool.

In March 2018, the National Institute of Justice sponsored a workshop of prosecutors from around the country that identified data and technology challenges as a high-priority need for prosecutors. According to the RAND Corporation’s report on the workshop, Prosecutor Priorities, Challenges, and Solutions (“Rand Report“), the key findings of the prestigious group were: (1) difficulties recruiting, training, managing, and retaining staff; (2) demanding and time-consuming tasks for identifying, tracking, storing, and disclosing officer misconduct and discipline issues; and (3) inadequate or inconsistent collection of data and other information shared among agencies . . . as well as by emerging digital and forensic technologies. The full Rand Report PDF may be downloaded here. The opening summary states:

Prosecutors are expected to deliver fair and legitimate justice in their decision making while balancing aspects of budgets and resources, working with increasingly larger volumes of digital and electronic evidence that have developed from technological advancements (such as social media platforms), partnering with communities and other entities, and being held accountable for their actions and differing litigation strategies. . . .

Moreover, the increasing volume of potentially relevant digital information, video footage, and other information from technological devices and tools can significantly add to the amount of time needed to sufficiently examine and investigate the evidence in order to make decisions about whether to drop or pursue a case. This can be especially challenging because the staffing and other resources in prosecutors’ offices have not necessarily kept pace with these increasing demands.

Although the amount of digital information that prosecutors must sometimes sift through can be managed, in part, through innovative technological tools, such as data mining and data reduction solutions (Al Fahdi, Clarke, and Furnell, 2013; Quick and Choo, 2014), there are often steep learning curves or high costs that make it unrealistic for an office to implement these technologies.

Rand Report, pages 1-3.

Also see the excellent Duke Law-sponsored one-hour panel discussion video, The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System, for a comprehensive discussion of the issues as of November 2021, just before the development and release of new generative AI models a year later.

e-Discovery, Evidence Analysis and Case Management

As the Rand Report confirms, the sheer volume of evidence in complex criminal investigations is a significant challenge for prosecutors. Also see: Tinder Date Murder Case Highlights the Increasing Complexity of eDiscovery in Criminal Investigations: eDiscovery Trends (e-Discovery Daily, 6/15/18). AI can analyze vast datasets—such as emails, text messages, and internet activity logs—to identify patterns indicative of criminal activity, but the software can be expensive and requires trained technology experts. AI algorithms can recognize specific types of evidence, such as images, sentiments, or key concepts relevant in many cases. They can help prosecutors identify patterns and connections within the evidence that might not be immediately apparent to human investigators. This capability can significantly reduce the time needed to search and study evidence, enabling prosecutors to build stronger cases more efficiently.
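To make the pattern-recognition step above concrete, here is a deliberately simple sketch: scoring documents against case-relevant terms to triage a large collection for priority human review. This is only an illustration; real e-discovery platforms use far more sophisticated machine-learned classifiers, and the terms and documents below are entirely hypothetical.

```python
import re

# Hypothetical case-relevant terms a reviewing team might supply.
RELEVANT_TERMS = {"wire", "transfer", "offshore", "invoice", "shell"}

def relevance_score(document: str) -> float:
    """Fraction of the case-relevant terms that appear in the document."""
    words = set(re.findall(r"[a-z]+", document.lower()))
    return len(words & RELEVANT_TERMS) / len(RELEVANT_TERMS)

def triage(documents: dict, threshold: float = 0.4) -> list:
    """Return document ids flagged for priority human review."""
    return [doc_id for doc_id, text in documents.items()
            if relevance_score(text) >= threshold]

docs = {
    "email_001": "Please route the wire transfer through the offshore account.",
    "email_002": "Lunch at noon on Friday?",
    "email_003": "Attach the invoice for the shell company records.",
}
print(triage(docs))  # ['email_001', 'email_003']
```

Even in a toy like this, the human element remains: the review team chooses the terms and the threshold, and flagged documents go to human reviewers, not straight into a filing.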

But, as the Rand Report also makes clear, prosecutors need adequate funding and trained personnel to purchase and use these new tools. Fortunately, generative AI is substantially less expensive than the older models of AI and easier to use. Still, issues of fairness and guardrails against discrimination in its use remain significant problems. There are also very significant privacy issues inherent in predictive policing. David Ly, Predictive Policing: Balancing Innovation and Ethics (The Fast Mode, 8/15/24); Arjun Bhatnagar, The Threat of Predictive Policing to Data Privacy and Personal Liberty (Dark Reading, 12/27/22).

AI evidence search and classification tools such as predictive coding, which are well established in civil litigation, should soon be more widely used in criminal law. The high costs involved are now plummeting and should soon be affordable to most prosecutors. These tools can drastically reduce the time needed to search and analyze large volumes of complex data. Still, budgets to hire trained personnel to operate the new tools must be expanded. AI can complement, but not entirely replace, human review in what I call a hybrid multimodal process. Ralph Losey, Chat GPT Helps Explains My Active Machine Learning Method of Evidence Retrieval (e-Discovery Team, 1/28/23). Human experts on the prosecutor’s team should always be involved in the evidence review to ensure that no critical information is missed.

Transparency and accountability are also crucial in using AI in discovery. Defense attorneys should be provided with a detailed explanation of how these tools were used. This is essential to maintaining the fairness and integrity of the discovery process, ensuring that both sides have equal access to evidence and can challenge the AI’s conclusions if necessary.

AI also plays a crucial role in case management. AI-powered tools can help prosecutors organize and prioritize cases based on the severity of the charges, the availability of evidence, and the likelihood of a successful prosecution. These tools can assist in tracking deadlines, managing court calendars, and ensuring that all necessary court filings are completed on time. By streamlining these administrative tasks, AI allows prosecutors and their assistants to concentrate on the substantive aspects of their work—pursuing justice. It also helps them deal with the omnipresent staff shortage issues.

Bias Detection and Mitigation

Bias in prosecutorial decision-making—whether conscious or unconscious—remains a critical concern. ABA Standards state:

The prosecutor should not manifest or exercise, by words or conduct, bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation, gender identity, or socioeconomic status. A prosecutor should not use other improper considerations, such as partisan or political or personal considerations, in exercising prosecutorial discretion. A prosecutor should strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice when credibly informed that it exists within the scope of the prosecutor’s authority.

ABA Standards 3-1.6(a).

AI can play a crucial role in detecting and mitigating such biases, helping prosecutors adhere to the mandate that they “strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice” within their scope of authority.

AI systems also offer the potential to detect and mitigate unconscious human bias in prosecutorial decision-making. AI can analyze past prosecutorial decisions to identify patterns of bias that may not be immediately apparent to human observers. By flagging these patterns, AI can help prosecutors become aware of biases within their office and take corrective action.
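The pattern-flagging described above is, at bottom, a statistical audit of past decisions. The sketch below, with entirely hypothetical data, computes charging rates by group and flags any group whose rate deviates markedly from the office-wide rate. A real bias audit would require far more careful statistical controls (case severity, evidence strength, and so on); this only illustrates the mechanics.

```python
from collections import defaultdict

def charging_rates(decisions):
    """decisions: list of (group, charged) tuples drawn from past case files."""
    counts = defaultdict(lambda: [0, 0])  # group -> [charged, total]
    for group, charged in decisions:
        counts[group][0] += int(charged)
        counts[group][1] += 1
    return {g: charged / total for g, (charged, total) in counts.items()}

def flag_disparities(decisions, tolerance=0.15):
    """Flag groups whose charging rate deviates from the overall rate."""
    rates = charging_rates(decisions)
    overall = sum(int(c) for _, c in decisions) / len(decisions)
    return sorted(g for g, r in rates.items() if abs(r - overall) > tolerance)

# Hypothetical historical decisions: (group label, whether charges were filed).
past = ([("A", True)] * 9 + [("A", False)] * 1 +
        [("B", True)] * 5 + [("B", False)] * 5 +
        [("C", True)] * 7 + [("C", False)] * 3)
print(charging_rates(past))   # {'A': 0.9, 'B': 0.5, 'C': 0.7}
print(flag_disparities(past)) # ['A', 'B'] -- both deviate from the 0.7 overall rate
```

A flag from a tool like this is the start of an inquiry, not a conclusion: the office must still examine whether legitimate case factors explain the disparity.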

Prosecutors should use care in the selection and use of AI systems. If they are trained on biased data, they can perpetuate and even amplify existing disparities in the criminal justice system. For instance, an AI algorithm used to predict recidivism, if trained on data reflecting historical biases—such as the over-policing of minority communities—may disproportionately disadvantage these communities. AI systems used in criminal prosecutions should be designed to avoid this bias.

The software purchased by a prosecutor’s office should be chosen carefully, ideally with outside expert advice, and rigorously tested for bias and other errors before deployment. Alikhademi, K., Drobina, E., Prioleau, D., et al., A Review of Predictive Policing from the Perspective of Fairness, Artif. Intell. Law 30, 1–17 (2022) (“[T]he pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.”) There should also be outside community involvement. Artificial Intelligence in Predictive Policing Issue Brief (NAACP, 2/15/24) (NAACP’s recommendations: independent oversight; transparency and accountability; community engagement; ban use of biased data; new laws and regulations).

Prosecutors should not fall into the trap of relying on statistical analysis alone. AI is a limited tool that, like humans, makes errors of its own. Its use should be tempered by prosecutor experience, independence, intuition and human values. When we use AI in any context or field, it should be a hybrid relationship where humans remain in charge. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24) (experts recommend two basic ways to use AI, both hybrid, where the unique powers of human intuition are added to those of AI). AI can also help prosecutors make objective decisions on charging and sentencing by providing statistically generated recommendations, again with the same cautionary advice on overreliance.

Sentencing Recommendations and Predictive Analytics

The use of AI in predictive analytics for sentencing is among the most controversial applications in criminal law. AI systems can be trained to analyze data from past cases and make predictions about the likelihood of a defendant reoffending or suggest appropriate sentences for a given crime. These recommendations can then inform the decisions of judges and prosecutors.

Predictive analytics has the potential to bring greater consistency and objectivity to sentencing. By basing recommendations on data rather than individual biases or instincts, AI can help reduce disparities and ensure similar cases are treated consistently. This contributes to a more equitable criminal justice system.

While AI can bring greater consistency to sentencing, prosecutors must ensure that AI-generated recommendations comply with their “heightened duty of candor” and the overarching obligation to ensure that justice is administered equitably.

In light of the prosecutor’s public responsibilities, broad authority and discretion, the prosecutor has a heightened duty of candor to the courts and in fulfilling other professional obligations.

ABA Standard 3-1.4(a)

The use of AI in sentencing raises important ethical questions. Should AI make predictions about a person’s future behavior based on their past? What if the data used to train the AI is biased or incomplete? How can we ensure that AI-generated recommendations are not seen as infallible but are subject to critical scrutiny by human decision-makers?

These concerns highlight the need for caution. While AI can provide valuable insights and recommendations, it is ultimately the responsibility of human prosecutors and judges to make the final decisions. AI should be a tool to assist in the pursuit of justice, not a replacement for human judgment.

Predictive Policing

Predictive policing uses algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes. Tim Lau, Predictive Policing Explained (Brennan Center for Justice, 11/17/21). This is an area where old AI (before the advent of generative AI) has been embraced by many police departments worldwide, including in the E.U. countries, but also in China and other repressive regimes. Many prosecutors in the U.S. endorse it, but it is quite controversial and hopefully will be improved by new models of generative AI. The DA’s office wants to use predictive analytics software to direct city resources to ‘places that drive crime.’ Will it work? (The Lens, 11/15/23). In theory, by analyzing data on past crimes—such as the time, location, and nature of the offenses—AI algorithms can predict where and when future crimes are likely to occur. The majority of reports say this already works. But what of the minority reports? They contest the accuracy of predictions made with the old AI models; some say they are terrible at it. Sankin and Mattu, Predictive Policing Software Terrible At Predicting Crimes (Wired, 10/2/23). There is widespread concern about growing misuse, especially in countries that have politicized prosecutorial systems.

Still, in theory this kind of statistical analysis should be able to help honest law enforcement agencies allocate resources more effectively, enabling police to prevent crime before it happens. See generally, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives (Police Chief Magazine, 4/3/24).

All prosecutors, indeed all citizens, want to be smart when it comes to crime. We all want “more police officers on the street, deployed more effectively. They will not just react to crime, but prevent it.” Kamala Harris and Joan Hamilton, Smart on Crime: A Career Prosecutor’s Plan to Make Us Safer (Chronicle Books, 2010).

The Los Angeles Police Department (LAPD) was one of the first to use predictive policing software, known as PredPol (now Geolitica). It identified areas of the city at high risk for certain types of crime, such as burglaries or auto thefts. The software analyzed data on past crimes and generated “heat maps” indicating where crimes were most likely to occur in the future. These maps guided patrols and other law enforcement activities. PredPol proved to be very controversial. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them (The Markup, 12/2/21). Its use was discontinued by the LAPD in 2020, but other companies claim to have corrected the biases and errors in the programs. See Levinson-Waldman and Dwyer, LAPD Documents Show What One Social Media Surveillance Firm Promises Police (Brennan Center for Justice, 11/17/21).
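At its mechanical core, the “heat map” idea is just counting past incidents per geographic cell. The sketch below does exactly that on invented coordinates. Commercial systems such as the one described add decay weighting, crime-type models, and much more, but this shows why biased input data matters so much: the output can only reflect where incidents were recorded, which is not the same as where crime occurred.

```python
from collections import Counter

def heat_map(incidents, cell_size=1.0):
    """Count recorded incidents per grid cell; incidents are (x, y) points."""
    return Counter((int(x // cell_size), int(y // cell_size))
                   for x, y in incidents)

def hottest(incidents, n=1, cell_size=1.0):
    """Return the n cells with the most recorded incidents."""
    return heat_map(incidents, cell_size).most_common(n)

# Hypothetical incident coordinates (think blocks on a city grid).
past_incidents = [(0.2, 0.4), (0.7, 0.1), (0.5, 0.9), (3.1, 2.8), (0.3, 0.6)]
print(hottest(past_incidents))  # [((0, 0), 4)]
```

If cell (0, 0) is over-policed, it generates more recorded incidents, which makes it “hotter,” which attracts more patrols: the feedback loop the critics describe falls directly out of the arithmetic.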

The NYPD adopted another type of predictive policing software, called Patternizr. According to the Wikipedia article on predictive policing:

The goal of the Patternizr was to help aid police officers in identifying commonalities in crimes committed by the same offenders or same group of offenders. With the help of the Patternizr, officers are able to save time and be more efficient as the program generates the possible “pattern” of different crimes. The officer then has to manually search through the possible patterns to see if the generated crimes are related to the current suspect. If the crimes do match, the officer will launch a deeper investigation into the pattern crimes.

See Molly Griffard, A Bias-Free Predictive Policing Tool?: An Evaluation of the Nypd’s Patternizr (Fordham Urban Law Journal, December 2019). 

While predictive policing has been credited with reducing crime in some areas, it has also been criticized for potentially reinforcing existing biases. If the data used to train the AI reflects a history of over-policing in certain minority communities, the algorithm may predict those communities are at higher risk for future crimes, leading to even more policing in those areas. This, in turn, can perpetuate a cycle of discrimination and injustice. See e.g. Taryn Bates, Technology and Culture: How Predictive Policing Harmfully Profiles Marginalized People Groups (Vol. 6 No. 1 (2024): California Sociology Forum).

To address these concerns, predictive policing algorithms must be designed with fairness in mind and subject to rigorous oversight. David Stephens, Forecasting Justice: The promise of AI-enhanced law enforcement (Police1, 1/27/24). I endorse the conclusions of Chief Deputy David Stephens made in his Forecasting Justice article:

Projecting into the next decade, AI will be an integral part of law enforcement — from crime prediction and real-time decision aids to postincident analysis. These technologies could lead to smarter patrolling, fewer unnecessary confrontations and overall enhanced community safety. However, this vision can only materialize with rigorous oversight, consistent retraining and an undiluted focus on civil liberties and ethics. Law enforcement’s AI-driven future must be shaped by a symbiotic relationship where technology amplifies human judgment rather than replacing it. The future promises transformative advances, but it’s imperative that the compass of integrity guide this journey.

The latest versions of predictive policing technology will certainly use new generative AI-enhanced analysis. Law enforcement should be very careful in the purchase and implementation of these new technologies. They should seek the input of outside experts and carefully examine vendor representations. That should include greater vendor transparency, such as disclosure of the data used to train these systems to confirm that it is representative and unbiased. Proper methods of implementation of the AI tools should also be carefully considered. In my view, and that of others, this means adopting a hybrid approach that “amplifies human judgment rather than replacing it.”
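One way to read the “amplifies rather than replaces” principle in software terms: the model’s output is never a decision, only a flag that routes a matter to a named human reviewer. Here is a minimal sketch of that gate; all names, scores, and thresholds are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    ai_score: float                 # model output, e.g., a predicted risk score
    reviewer: Optional[str] = None
    decision: Optional[str] = None

def route(rec: Recommendation, threshold: float = 0.7) -> str:
    """The AI score only sets review priority; it never decides the case."""
    return "priority_review" if rec.ai_score >= threshold else "standard_review"

def decide(rec: Recommendation, reviewer: str, decision: str) -> Recommendation:
    """Refuse to finalize any outcome without a named human reviewer."""
    if not reviewer:
        raise ValueError("a human reviewer is required")
    rec.reviewer, rec.decision = reviewer, decision
    return rec

rec = Recommendation("case-42", ai_score=0.85)
print(route(rec))  # priority_review -- affects queue order only
print(decide(rec, "ADA Smith", "decline").decision)  # decline -- a human's call
```

The design choice is that the system cannot record an outcome without a human name attached, which also creates the audit trail that accountability requires.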

Sentiment Analysis in Jury Selection

Another trending application of AI in criminal law is the use of sentiment analysis in jury selection. Sentiment analysis is a type of AI that can analyze text or speech to determine the underlying emotions or attitudes of the speaker. In jury selection, sentiment analysis can analyze potential jurors’ public records, especially social media posts, as well as their responses during voir dire—the process of questioning jurors to assess their suitability for a case. It can also monitor for unfair questioning of potential jurors by prosecutors and defense lawyers. See Jo Ellen Nott, Natural Language Processing Software Can Identify Biased Jury Selection, Has Potential to Be Used in Real Time During Voir Dire (Criminal Legal News, December 2023). Also see AI and the Future of Jury Trials (CLM, 10/18/23).

For example, an AI-powered sentiment analysis tool could analyze the language used by potential jurors to identify signs of bias or prejudice that might not be immediately apparent to human observers. This information could then be used by prosecutors and defense attorneys to make more informed decisions about which jurors to strike or retain.
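To see what such a tool is doing under the hood, here is a toy lexicon-based sketch. Real sentiment-analysis tools use trained language models rather than hand-built word lists, and the lexicons and juror responses below are invented purely for illustration.

```python
import re

# Tiny hypothetical lexicons; production tools learn these signals from data.
NEGATIVE = {"never", "hate", "refuse", "untrustworthy"}
POSITIVE = {"fair", "open", "listen", "impartial"}

def sentiment(response: str) -> float:
    """Score in [-1, 1]; negative values suggest hostile or biased language."""
    words = re.findall(r"[a-z]+", response.lower())
    hits = [1 if w in POSITIVE else -1
            for w in words if w in POSITIVE | NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

def flag_for_followup(responses: dict, cutoff: float = 0.0) -> list:
    """Jurors whose answers score below the cutoff get follow-up questioning."""
    return sorted(j for j, r in responses.items() if sentiment(r) < cutoff)

voir_dire = {
    "juror_1": "I try to be fair and to listen with an open mind.",
    "juror_2": "I would never trust testimony from people like that.",
}
print(flag_for_followup(voir_dire))  # ['juror_2']
```

Note what the flag triggers: more questioning by humans, not an automatic strike. That is the oversight principle discussed below applied at the level of a single function.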

While sentiment analysis has the potential to improve jury selection fairness, it also raises ethical questions. Should AI influence juror selection, given the potential for errors or biases in the analysis? How do we ensure AI-generated insights are used to promote justice, rather than manipulate the selection process?

These questions underscore the need for careful consideration and oversight in using AI in jury selection. AI should assist human decision-makers, not substitute their judgment.

AI in Plea Bargaining and Sentencing

AI can also play a transformative role in plea bargaining and sentencing decisions. Plea bargaining is a critical component of the criminal justice system, with most cases being resolved through negotiated pleas rather than going to trial. AI can assist prosecutors in evaluating the strength of their case, the likelihood of securing a conviction, and the appropriate terms for a plea agreement. See: Justice Innovation Lab, Critiquing The ABA Plea Bargaining Principles Report (Medium, 2/1/24); Justice Innovation Lab, Artificial Intelligence In Criminal Court Won’t Be Precogs (Medium, 10/31/23) (article concludes with “Guidelines For Algorithms and Artificial Intelligence In The Criminal Justice System“).

For example, AI algorithms can analyze historical data from similar cases to provide prosecutors with insights into the typical outcomes of plea negotiations, considering factors such as the nature of the crime, the defendant’s criminal history, and the available evidence. This can help prosecutors make more informed decisions on plea deal offers.

Moreover, AI can assist in making sentencing recommendations that are more consistent and equitable. Sentencing disparities have long been a concern in the criminal justice system, with studies showing that factors such as race, gender, and socioeconomic status can influence sentencing outcomes. AI has the potential to reduce these disparities by providing sentencing recommendations based on objective criteria rather than subjective judgment. Keith Brannon, AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted (Tulane Univ., 1/23/24); Kieran Newcomb, The Place of Artificial Intelligence in Sentencing Decisions (Univ. NH, Spring 2024).

For instance, an AI system could analyze data from thousands of past cases to identify typical sentences imposed for specific crimes, accounting for relevant factors like the severity of the offense and the defendant’s criminal record. This information could then be used to inform sentencing decisions, ensuring that similar cases are treated consistently and fairly.
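Mechanically, the “thousands of past cases” step just described is a group-and-summarize operation. The sketch below computes the median past sentence for cases matching an offense-and-record profile. The data is entirely hypothetical, and, as the surrounding discussion stresses, such a number should inform a human decision, never dictate it.

```python
from statistics import median
from typing import Optional

# Hypothetical past cases: (offense, prior_felonies, sentence_months).
past_cases = [
    ("burglary", 0, 12), ("burglary", 0, 14), ("burglary", 0, 10),
    ("burglary", 2, 30), ("burglary", 2, 36),
    ("fraud", 0, 8),
]

def typical_sentence(offense: str, prior_felonies: int) -> Optional[float]:
    """Median past sentence among cases matching the profile, if any exist."""
    matches = [months for off, priors, months in past_cases
               if off == offense and priors == prior_felonies]
    return median(matches) if matches else None

print(typical_sentence("burglary", 0))  # 12 -- median of 10, 12, 14
print(typical_sentence("burglary", 2))  # 33.0 -- midpoint of 30 and 36
```

Even this trivial version exposes the core policy question: which case features belong in the matching profile, and which (race, neighborhood, income) must be kept out.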

However, using AI in plea bargaining and sentencing also raises significant ethical considerations. The primary concern is the risk of AI perpetuating or exacerbating existing biases in the criminal justice system. If the data used to train AI systems reflects historical biases—such as harsher sentences for minority defendants—AI’s recommendations may inadvertently reinforce those biases.

To address this concern, AI systems used in plea bargaining and sentencing must be designed with fairness and transparency in mind. This includes ensuring that the data used to train these systems is representative and free from bias and providing clear explanations of how the AI’s recommendations were generated. Moreover, human prosecutors and judges must retain the final authority in making plea and sentencing decisions, using AI as a tool to inform their judgment rather than a substitute for it. It is important that AI systems be chosen and used very carefully in part because “the prosecutor should avoid an appearance of impropriety in performing the prosecution function.” ABA Standard 3-1.2(c).

Ethical Implications of AI in Criminal Prosecutions

While the potential benefits of AI in criminal law are significant, it is equally important to consider the ethical implications of integrating AI into the criminal justice system. AI, by its very nature, raises questions about accountability, transparency, and the potential for misuse—questions that must be carefully addressed to ensure AI is used in ways that advance, not hinder, the cause of justice.

As we integrate AI into criminal prosecutions, it is essential that we do so with a commitment to the principles articulated in the ABA’s Criminal Justice Standards. By aligning AI’s capabilities with these ethical guidelines, we can harness technology to advance justice while upholding the prosecutor’s duty to act with integrity, fairness, and transparency.

Transparency and Accountability

One of the most pressing ethical concerns is the issue of transparency, which we have mentioned previously. AI algorithms are often referred to as “black boxes” because their decision-making processes can be difficult to understand, even for those who design and operate them. This lack of transparency can be particularly problematic in criminal prosecutions, where the stakes are incredibly high, and the consequences of a wrong decision can be severe. A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (The Conversation, 7/26/23) (discusses UK systems).

For example, if an AI system is used to predict the likelihood of a defendant reoffending, it is crucial that the defendant, their attorney, and the judge understand how that prediction was made. Without transparency, challenging the AI’s conclusions becomes difficult, raising concerns about due process and the right to a fair trial.

To address this issue, AI systems used in criminal prosecutions must be designed to be as transparent as possible. This includes providing clear explanations of how AI’s decisions were made and ensuring that the underlying data and algorithms are accessible for review and scrutiny. Federal legislation that would require this, the Justice in Forensic Algorithms Act, has been pending for years. New bill would let defendants inspect algorithms used against them in court (The Verge, 2/15/24) (requires disclosure of source code). Moreover, the legal community must advocate for the development of AI systems that prioritize explainability and interpretability, ensuring that the technology is effective, accountable, and understandable.
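To make “clear explanations” concrete, a transparent model can report exactly how each input moved the score. The hypothetical linear sketch below does so; the weights and features are invented, and real risk-assessment systems are far more complex, which is precisely why the disclosure debate matters.

```python
# Hypothetical linear risk score with per-feature contribution reporting.
WEIGHTS = {"prior_convictions": 0.30, "age_under_25": 0.20, "employed": -0.25}
BIAS = 0.10

def score_with_explanation(features: dict):
    """Return the score plus each feature's exact contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return BIAS + sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"prior_convictions": 2, "age_under_25": 1, "employed": 1})
print(round(total, 2))  # 0.65
print(why)  # {'prior_convictions': 0.6, 'age_under_25': 0.2, 'employed': -0.25}
```

With a model this simple, a defense attorney can verify every term of the score by hand. A “black box” system offers no equivalent check, which is the due-process concern in a nutshell.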

Fairness and Bias

Another ethical concern is, as mentioned, the potential for AI to be used in ways that exacerbate existing inequalities in the criminal justice system. For example, there is a risk that AI could justify more aggressive policing or harsher sentencing in communities already disproportionately targeted by law enforcement. This is why AI systems must be designed with fairness in mind and their use subject to rigorous oversight. Look beyond vendor marketing claims to verify with hard facts and independent judgments.

Ensuring fairness requires that AI systems are trained on representative and unbiased data. It also necessitates regular audits of AI systems to detect and mitigate any biases that may arise. Additionally, AI should not be the sole determinant in any criminal justice decision-making process; human oversight is essential to balance AI’s recommendations with broader considerations of justice and equity. For instance, the NYPD represents that its widespread use of AI-driven facial recognition technology in criminal investigations “does not establish probable cause to arrest or obtain a search warrant, but serves as a lead for additional investigative steps.” NYPD Questions and Answers – Facial Recognition, and see the NYPD official patrol guide dated 3/12/20.

Human Judgment and Ethical Responsibility

The deployment of AI in criminal prosecutions also raises important questions about the role of human judgment in the justice system. While AI can provide valuable insights and recommendations, it is ultimately human prosecutors, judges, and juries who must make the final decisions. This is because justice is not just about applying rules and algorithms—it is about understanding the complexities of human behavior, weighing competing interests, and making moral judgments.

AI, no matter how advanced, cannot replicate the full range of human judgment, and it should not be expected to do so. Instead, AI should be seen as a tool to assist human decision-makers, providing them with additional information and insights that can help them make more informed decisions. At the same time, we must be vigilant in ensuring that AI does not become a crutch or a substitute for careful human deliberation, judgment and equity.

Conclusion

The integration of AI into criminal prosecutions holds the promise of advancing the cause of justice in profound and meaningful ways. To do so we must always take care that applications of AI follow the traditional principles stated in the Criminal Justice Standards for the Prosecution Function and other guides of professional conduct. By aligning AI’s capabilities with ethical guidelines, we can harness technology in a manner that advances the prosecutor’s duty to act with integrity, fairness, and transparency.

With these cautions in mind, we should boldly embrace the opportunities that AI offers. Let us use AI as a tool to enhance, not replace, human judgment. And let us work together—lawyers, technologists, and policymakers—to ensure that the use of AI in criminal prosecutions advances the cause of justice for all.

Ralph Losey Copyright 2024 — All Rights Reserved


Survey Shows Legal Research is the Most Common Use of Generative AI by Lawyers: a short, ‘almost funny’ report on a Bloomberg Law survey.


Ralph Losey. Published on August 23, 2024.

Bloomberg surveyed lawyers earlier this year to find out how they were using generative AI. To no one’s surprise, and despite the bogeyman of AI hallucinating fake cases, Legal Research came out on top. This is shown in Bloomberg’s chart below. In this post, ChatGPT is prompted to explain the survey based on the chart alone, and to do so with subtle humor. Its analysis was verified by me as accurate, but as to the comedy, well, you be the judge.

I’ve demonstrated ChatGPT’s chart-reading skills before in the Panel of Experts. What is new here is my attempt, via prompts, to make the AI explanation funny. My training leans heavily on the humor of sarcasm, which AI ‘almost’ gets. Of course, I tweaked and verified everything; exhausting work, really. Read on for AI humor and also to see how lawyers are using generative AI.

The survey is set out in Stephanie Pacheco’s excellent article, ANALYSIS: Legal Workers Use AI for Research, Despite Red Flags (Bloomberg, 4/4/24). The chart shown above summarizes the study, and was shared publicly, but you will need to subscribe to see the full report. Still, this detailed graph tells us a lot. So, thank you, Mr. Mayor.

The Survey Unpacked with AI Sarcasm

The survey confirmed what most of us already suspected: legal research is the top use, with drafting a close second. I doubt the usage has changed much since Bloomberg made the survey in early 2024. One of my custom GPTs wrote the rest of this blog, including the snarky comments below in parentheses. I am trying to teach it sarcasm as a gateway to humor. Most people, myself included, do not think ChatGPT is funny yet, though version 4.0 is better than 3.5, which was, for me at least, mainly funny because of its stupid mistakes. But then again, two UCLA psychologists working in Italy (University of Macerata) recently published a research paper purporting to show that baby ChatGPT 3.5 is funnier than most humans! Gorenz D., Schwarz N., How funny is ChatGPT? A comparison of human- and A.I.-produced jokes (PLoS ONE, 7/3/24). Go figure.

That is my last scholarly cite in this article, I promise. After this, almost all of the writing is by ChatGPT 4o – omni. Remember, these are supposed to be funny comments, so please try not to take offense. Or if you must, blame the AI, not me. That’s the popular thing to do.

Panel of Experts Explains the Bloomberg Chart

  • Legal Research: 57% (Because why think when AI can think for you?)
  • Drafting/Templating Communication: 46% (Who hasn’t dreamed of AI handling their thankless email drudgery?)
  • Drafting/Templating Legal Contracts: 26% (Finally, a solution to avoid actual lawyering!)
  • Summarizing Legal Narratives: 24% (Because brevity is the soul of wit, or so AI believes.)
  • Reviewing Legal Documents: 22% (Eye-strain relief brought to you by your AI overlords.)
  • Negotiating/Redlining Contracts: 18% (Robots, saving us from negotiation-induced migraines.)
  • e-Discovery: 18% (Digging through digital dumpsters, so you don’t have to.)
  • Due Diligence: 13% (Making sure the i’s are dotted, with digital precision.)
  • Reviewing Discovery: 11% (See e-Discovery, but with less enthusiasm.)
  • Case Filings: 6% (Because some mundane tasks remain sacred.)
  • Estate Planning: 6% (Planning for the inevitable with digital grace, and using AI to do it, as shown in this image.)
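For readers who like their survey data machine-readable, the percentages above (as reported in this article from Bloomberg’s chart; the exact figures in the full report may differ) can be tabulated and re-plotted in a few lines of Python. A minimal sketch:

```python
# Bloomberg 2024 survey: share of legal professionals using generative AI
# for each task, with percentages as read from the chart in this article.
survey = {
    "Legal Research": 57,
    "Drafting/Templating Communication": 46,
    "Drafting/Templating Legal Contracts": 26,
    "Summarizing Legal Narratives": 24,
    "Reviewing Legal Documents": 22,
    "Negotiating/Redlining Contracts": 18,
    "e-Discovery": 18,
    "Due Diligence": 13,
    "Reviewing Discovery": 11,
    "Case Filings": 6,
    "Estate Planning": 6,
}

# A quick text bar chart, highest usage first (Python's sort is stable,
# so tied categories keep the chart's original order).
for task, pct in sorted(survey.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{task:<38} {'#' * (pct // 2):<28} {pct}%")
```

No AI was harmed, or even consulted, in the sorting of this dictionary.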

Legal Research as a Fan Favorite

Leading the pack, as usual, is legal research, championed by a whopping 57% of respondents. It seems even lawyers appreciate not having to wade through dusty tomes and endless databases when a sleek algorithm can do it in half the time, with double the caffeine efficiency. This AI-driven shortcut to legal enlightenment allows our dear attorneys to focus on what truly matters—crafting strategies and billing clients. After all, the new cases cited are true and correct, right? AI would not actually make stuff up, would it? Losey, Exclusive Report: New Appellate Court Opinions on Artificial Intelligence and Hallucinations (e-Discovery Team, 10/04/23) (another tongue-in-cheek report). But seriously folks, see: OMNI Version – ChatGPT4o – Retest of the Panel of AI Experts – Part Three (more than you ever wanted to know about AI hallucinations).

Drafting, Templating, and the Art of Avoidance

Why spend hours drafting when you can have an AI mimic your voice and possibly improve upon your dry legalese? Nearly half of our surveyed champions automate their correspondences, ensuring that every missive is as polished as their courtroom shoes. And let’s not overlook the 26% who let AI tackle contract drafting, because nothing says ‘trust’ quite like a machine handling your agreements.

Summarization and Document Review: AI’s Literary Digest

Nothing thrills a lawyer more than a good summary, unless it’s one they didn’t have to write themselves. AI’s knack for condensing verbose legal opinions into bite-sized pieces is akin to literary gold for the time-strapped practitioner. Meanwhile, document review remains a realm ripe for AI conquest, proving that even the most mind-numbing tasks can be outsourced to our uncomplaining digital underlings.

The Robotic Learned Hand in e-Discovery and Due Diligence

In the trenches of e-Discovery and due diligence, AI stands as a beacon of hope against the drudgery of data sifting. With 18% of respondents embracing AI in e-Discovery, it’s clear that searching for that needle in the haystack just got a high-tech magnet. As for due diligence, the 13% uptake might suggest some lingering human nostalgia, or perhaps just a healthy fear of letting robots handle everything. Hey, gotta bill for something while we still can.

Forecasting AI’s Legal Takeover

This year’s Bloomberg survey paints a clear picture: AI in legal work is not just a passing fad, but the start of a tidal wave, one which just might drown us all. Either that or make for some incredible surfing!

Conclusion

As generative AI reshapes the legal landscape, we lawyers find ourselves spectators (and, occasionally, operators) in a play where the scripts are increasingly written by algorithms. Embracing these tools isn’t just about staying relevant—it’s about leading the charge in a world where efficiency is king, and where our next legal assistant might just be a server farm.

Let me close with a rhyming riddle, because why not. Maybe riddles will make next year’s list of top uses, right behind Ralph’s Panel of AI Experts for Lawyers, which, no doubt, will be sycophancy-free too.

Riddle Me This:

I’m not a lawyer, yet I reside in the courts.
I’m neither judge nor jury, but I hold a report.
I sift through the facts, with no coffee in sight,
Drafting and summarizing deep into the night.
What am I?

Answer: Your friendly neighborhood AI, always ready to serve (up documents), never to judge!

Stay sharp, and remember, in the world of law and AI, the best assistant might just be the one that doesn’t need a lunch break.

Ralph Losey Copyright 2024 – All Rights Reserved


Navigating the AI Frontier: Wharton Professor’s Guide to Mastering Generative AI

August 21, 2024

Ralph Losey. Published August 21, 2024

Ethan Mollick, a Professor at Wharton who teaches entrepreneurship and innovation, has written an insightful book on generative AI, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024). This article aims to entice you to read it by employing unpredictable language and vivid illustrations. The piece takes several zigzagging, idiosyncratic detours, including a discussion of a fake chess-playing computer that fooled both Benjamin Franklin and Napoleon, alongside quotes from Edgar Allan Poe on AI and Charles Babbage. Professor Mollick is a witty, hands-on AI enthusiast whose new book deserves not just a human-in-the-loop touch but a full embrace. While most of his insights may stray from business topics, they are vital for successfully navigating the fast-changing world of AI. His work brought me genuine delight—a feeling that, for now, remains uniquely human. I hope to evoke the same in you.

Ethan Mollick: The Man Behind the Ideas

Ethan Mollick

Ethan Mollick is not your average Associate Professor at The Wharton School. He is an expert in entrepreneurship and innovation who teaches elite Wharton management students to become more effective leaders and innovators. But he is much more than his LinkedIn profile, where, by the way, I read his posts every day. He is a pioneer in using games and simulations to teach. This turns out to be a good background for picking up AI quickly, which is what he has done. Recall that I mentioned Ethan Mollick previously in the article on AI and education, Back To School: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’. Both he and Sal Khan were selected by OpenAI to be early users of ChatGPT to see what it could do for education. Good choice, as Professor Mollick’s book, Co-Intelligence: Living and Working with AI, shows. So too does his stream of good articles on his Substack, One Useful Thing. If you are lucky, maybe you can even get into one of his classes at Wharton. Hopefully they will go online someday.

My research on Ethan Mollick uncovered that the game-playing professor has recently made time to create a custom GPT called Complexifier: making easy, hard. Yup, it does the opposite of what most generative AI apps are designed to do. I asked this strange little alien GPT why anyone would want to make something easy hard. Here’s the first paragraph of the response of Complexifier:

Ah, why indeed? The thrill lies in the dance of complexity, the joy in the jigsaw puzzle of a thousand interlocking pieces. Making something simple is like solving a crossword with only one word—it’s over before it begins. But to make the straightforward serpentine? That’s where the magic is. It’s about savoring each twist and turn, where every step forward is a question mark and every answer opens ten more questions.

The answer goes on and on and creates an elaborate and convoluted flowchart for the performance of a simple task. Like Ethan Mollick, I spend much of my thinking, teaching, and writing trying to make the hard easy, to somehow explain the insanely convoluted. But the opposite is also attractive, in an evil attorney-academic kind of way. The Complexifier GPT hooked me into a kind of Naomi Klein trip into a Doppelganger mirror world to see the dark Rube Goldberg twin. So I continued on.

Ralph: Why did Ethan Mollick bother to create this custom GPT?

Complexifier: Ah, the mystery behind the creation of this delightful complexity machine! Why, indeed, would Ethan Mollick, a professor known for his innovative thinking and exploration of how technology can transform learning and work, create such a custom GPT? Allow me to speculate, in an appropriately convoluted manner:

  1. To Embrace the Absurd: Life can be far too serious. By creating a GPT that revels in unnecessary complexity, he might be encouraging us to appreciate the absurdity of over-complicating the mundane.
  2. To Challenge the Intellect: Complex tasks require deeper thinking, and perhaps this GPT was designed to push the boundaries of problem-solving by making us work through intricacies we’d typically avoid.
  3. To Encourage Creativity: The labyrinth of complexity is fertile ground for creative solutions. This custom GPT may serve as a playground where unconventional ideas are not just allowed but celebrated.
  4. To Highlight the Journey Over the Destination: In academia and beyond, the journey often teaches more than the destination itself. By complicating the simple, we’re forced to engage more fully with the process, learning along the way.
  5. To Add Humor to Learning: Complexity, when presented with a wink and a nod, can be incredibly entertaining. This GPT likely adds a layer of humor to the learning process, making the pursuit of knowledge more enjoyable.
  6. To Explore AI’s Potential: Creating a GPT that thrives on complexity could be an experiment in how far we can push AI’s capabilities, demonstrating that even the most trivial tasks can be transformed into elaborate intellectual exercises.

In the end, why settle for simple when the convoluted is so much more fun?

Ralph: Which one of these six reasons do you think is the most likely? And why pray tell, why?

Complexifier: Ah, a question within the question! If I had to place my bet on the most likely reason, I’d lean towards “To Add Humor to Learning.”

Why, you ask? Let me weave a rationale for you:

Ethan Mollick is known for his innovative approach to teaching and his playful engagement with the serious world of academia. By introducing humor into learning, he’s likely aiming to make complex concepts more approachable and enjoyable. Humor has a unique way of disarming us, lowering our defenses against the overwhelming nature of complexity. When something is funny, we’re more willing to engage with it, even if it’s ridiculously convoluted.

Creating a GPT that turns even the simplest task into a Rube Goldberg machine of logic is a perfect way to inject that humor into the learning process. It’s not just about making us laugh; it’s about making us see that the path to understanding—even if unnecessarily complicated—can be enjoyable. Plus, the absurdity of it all serves as a gentle reminder not to take ourselves too seriously, even when grappling with complex ideas.

So, my best guess? He did it for the laughs, knowing that in those laughs, there’s a lesson in embracing the joy of the journey.

So Ethan Mollick writes a lot, is hands-on, and is funny. Now you know why I had to write about his pioneering work with AI and education and his new book, Co-Intelligence: Living and Working with AI.

Part I of Professor Mollick’s Book: Setting the Stage for Co-Intelligence

Co-Intelligence has a great opening line, which I fully endorse: “I believe the cost of getting to know AI–really getting to know AI–is at least three sleepless nights.” Then you will discover that ChatGPT, and other top generative models, “don’t act like you expect a computer to act. Instead, they act more like a person.” They act like something new, an alien person of unknown abilities. Professor Mollick’s excitement in using the new tool right away in his classes at Wharton is contagious. This new type of general-purpose technology, like the steam engine and the Internet, changes everything, including teaching.

After the introduction, he looks back into the history of AI. He notes how we have long been fascinated with “machines that can think,” or at least pretend they can. One example Ethan Mollick gives is the Mechanical Turk, a chess-playing automaton built in 1770. It was a machine that could beat almost all human chess players. Actually, in what was a very well-kept secret, which fooled the likes of Napoleon Bonaparte and Benjamin Franklin, the thinking machine was a hoax. A small human chess master was cleverly hidden behind gears in the contraption. See this YouTube video for its full history.

When Edgar Allan Poe saw the Mechanical Turk in 1835 he speculated that it was a fake, but only because the Turk would sometimes lose. Poe thought that if it were a true thinking machine, then it would always win. Although not in Professor Mollick’s book, I dug deeper into his reference to Poe and AI and found the original text. Edgar Allan Poe, Maelzel’s Chess-Player (1836). There we read Poe’s thoughts on Charles Babbage, mechanical thinking, and his impressive insights into what would later be called AI.

Museum reproduction of the original Mechanical Turk with photoshop words and enhancements by Ralph Losey

Edgar Allan Poe’s words:

Photo of Edgar Allan Poe by W.S. Hartshorn, 1848

There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage, and if we choose to call the former a pure machine we must be prepared to admit that it is, beyond all comparison, the most wonderful of the inventions of mankind. . . .

It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori. The only question then is of the manner in which human agency is brought to bear. . . .

The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case — it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game — a farther extension would enable it to win all games — that is, to beat any possible game of an antagonist.

Edgar Allan Poe, Maelzel’s Chess-Player (1836).

Garry Kasparov found out in 1997 that the great Mr. Poe was right. IBM’s Deep Blue showed that a machine really could be built to “win all games” of chess. Interestingly, many speculate that Edgar Allan Poe’s encounter with the Mechanical Turk led to his writing the first detective story soon thereafter. Kat Eschner, Debunking the Mechanical Turk Helped Set Edgar Allan Poe on the Path to Mystery Writing (Smithsonian Magazine, 2017).

Professor Mollick makes clear that the AI today, unlike the Mechanical Turk, is very real, and in some ways very powerful, but characterizes it as a type of “alien intelligence.” It is fundamentally different from human intelligence, yet capable of performing human tasks. This alien intelligence is something you need to discover for yourself to appreciate its abilities and flaws. The only way to do that is to use generative AI. Ethan lays out four principles of co-intelligence to guide your use:

  1. Always invite AI to the table. Try and use AI whenever and wherever you can.
  2. Be the human in the loop. Actively supervise and verify.
  3. Treat AI like a person (but tell it what kind of person it is). Give the AI context and use its sub-persona abilities.
  4. Assume this is the worst AI you will ever use. Do not get discouraged when AI sometimes stumbles; it is getting better very fast.

The first half of the book spells out these four principles, which are all pretty basic. Ethan does a good job of laying this out, and I recommend you read the book, Co-Intelligence: Living and Working with AI.

After you begin to use AI and get past the three sleepless nights, you will discover what Ethan Mollick calls the “Jagged Frontier.” This is his metaphor for the uneven capabilities of AI, where some tasks are easily within reach, while others, some quite simple, are beyond its grasp. See: From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24). Ethan Mollick discusses this at length in his article, Centaurs and Cyborgs on the Jagged Frontier. The second-to-last paragraph of the article states:

People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.

The metaphors of the jagged frontier, centaurs, and cyborgs help us to understand the unpredictable nature of AI’s abilities. Also see Ethan Mollick and several others on the impact of AI on knowledge work: Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013); Losey, Report on the First Scientific Experiment to Test the Impact of Generative AI on Complex, Knowledge-Intensive Work (e-Discovery Team, 4/29/24).

Because of the surprising failures on the wrong side of the jagged line, Ethan warns readers: “Don’t rely on AI alone.” He encourages use of AI as a complement to human judgment, not a replacement for it. Legal professionals know this rule well, “trust but verify.” Another AI Hallucination Case with Sanctions Threatened Because of ‘All-Too-Human’ Mistakes (July 30, 2024); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May 6, 2024); “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct. (Losey.ai CLE).

Part II of Professor Mollick’s Book: AI in Action

The second half of Co-Intelligence is divided into six different characteristics of generative AI and how to use them. Each is a different chapter in the book.

  • AI as a Person. A “thinking companion” that can assist in decision-making by providing alternative perspectives. Includes discussion of the “uncanny valley” and need for ethical monitoring of its use, and how AI lacks the depth and intuition that come from human experience.
  • AI as a Creative. AI will not replace human creators, but it will totally change the way we approach creative work. It will be more than a tool, it will be a co-creator. Mollick reassures readers that while AI can assist in the creative process, it is ultimately up to humans to imbue that work with significance and purpose.
  • AI as a Coworker. This important chapter is a must-read for technology leaders who are grappling with the integration of AI into their teams. Mollick argues that AI can handle many of the repetitive tasks that bog down human workers, freeing them up to focus on more strategic and creative endeavors. He provides examples of companies that have successfully integrated AI into their workflows, resulting in significant productivity gains. Mollick also discusses using AI as a “Challenger,” which I like to call a “Devil’s Advocate.” AI can challenge human decisions, offering alternative perspectives that may not have been considered. Professor Mollick also warns of the dangers of AI perpetuating biases in organizations. To counter this he recommends transparency and accountability in AI deployment, and regular audits.
  • AI as a Tutor. Professor Mollick, much like Sal Khan, really gets AI’s potential in the classroom, which is already revolutionizing Wharton and will soon reach all graduate-level instruction. See e.g., BACK TO SCHOOL: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’; and the video interview of Mollick by Khan. Mollick and Khan are both blown away by the potential of AI to provide personalized learning experiences–tutoring–that adapt to the needs of individual students. Mollick goes deep in explaining the many ways this will change traditional instruction, drawing on the successful experiments in his Wharton classrooms. Again, it will not replace teachers, and it will make in-person classrooms more important than ever.
  • AI as a Coach. A personal trainer type role for AI who provides continuous tailored guidance and feedback to enhance human capabilities. The value of personalized advice is explored, although I wish he had gone into the dangers of sycophantism more than he did. See e.g. Worrying About Sycophantism. Mollick does point to the danger of becoming overly dependent on AI to the point where it diminishes our critical thinking and decision-making skills.
  • AI as Our Future. Here a series of four scenarios is given that explore how AI might shape the world in the coming decades. In the first, AI is now at its peak, “As Good As It Gets,” which he and I deem very unlikely. In the second there is “Slow Growth” of AI going forward; we think this is also unlikely. In the third scenario the possibility of continued “Exponential Growth” is imagined. Many specific predictions are made, including that “Loneliness becomes less of an issue, but new forms of social isolation emerge.” That one is a safe bet, but there are many other predictions that are not so obvious. The last scenario Ethan calls “The Machine God,” where “machines reach AGI and some form of sentience.” Note I do not think sentience is a necessary byproduct of AGI, nor that the divine name is appropriate, but Ethan (and others) imagine it is. Losey, Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time (8/12/24).

Conclusion

Ethan Mollick’s book ends with an epilogue titled “AI As Us.” I like this because it follows the thinking of Ray Kurzweil, whose thoughts on AI I also respect. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI) (July 17, 2024). Of course, it is not us yet, far from it. For now, AI is a new alien stranger that Professor Mollick would have you invite into your office and your home. He wants you to spend “three sleepless nights” with it and thereafter spend time with it every day. Ethan is just that kind of “all in” guy.

Most legal professionals, myself included, are not quite so gung-ho, especially when it comes to using AI in our work. Still, many valuable insights can be gained from his book, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024), and it does include many warnings of dangers. Consider this statement in the epilogue: “AI is a mirror, reflecting back at us our best and worst qualities. We are going to decide on its implications, and those choices will shape what AI actually does for, and to, humanity.”

In a world where human-like AI is no longer just a possibility but a reality, Co-Intelligence serves as an essential guide for everyone, including legal professionals. Ethan Mollick’s suggestions simplify the process of harnessing AI’s full potential in business and management, while also revealing its possible applications in the law. This book, written by a very creative, hands-on professor at Wharton, equips you for the many challenges and opportunities ahead.

Ralph Losey Copyright 2024 — All Rights Reserved


Evolution of DALL·E with Demonstrations of its Current Text to Image Abilities

August 19, 2024

Ralph Losey. Published August 20, 2024.

The images shown here are to demonstrate some of the current abilities of DALL-E. They were all created by Ralph Losey using his custom GPT, Visual Muse: illustrating concepts with style, which is driven by OpenAI’s DALL-E software. Ralph has chosen one of his favorite types of images for this demonstration – “optical illusions” – since he does not often get a chance to use this image type in his blog. These images will be shown in a variety of different artistic styles, especially that of Salvador Dali, who is known for his love of optical illusions.

Left click on any image to see it alone in full size. Careful, don’t fall in!

Introduction

The first images to demonstrate DALL-E capabilities shown above are a type of “Op Art” using a classic black and white geometric style. All illustrations were created in the same day, about four hours, with about half coming out right – the way Ralph wanted – on the first try. A 50% precision rate like this is unusually high for him. Many of the images were not used to save space. Ralph’s workflow then includes use of Photoshop for final tuning and size changes. The research and writing itself took about three hours of Ralph’s time, with about 50% help from ChatGPT 4o – omni using a cyborg method. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24).

The development of DALL·E, a generative AI model by OpenAI, from its first release in January 2021 to today represents a significant achievement in the field of AI-driven image generation. The broad outlines of that development will be discussed in this article, but all images shown are from the latest version, of August 2024. For more examples of what DALL-E is capable of, see the hundreds of Ralph’s illustrations on the e-Discovery Team blog. A few were created using OpenAI’s main competitor in image generation, Midjourney.
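For readers who want to experiment programmatically rather than through ChatGPT or a custom GPT, here is a minimal sketch of a DALL·E 3 image request using OpenAI’s Python SDK. The parameter names track the published Images API; the prompt text is purely illustrative, and the live network call is left commented out so the sketch runs without an API key.

```python
# Sketch: generating an image with DALL·E 3 via OpenAI's Images API.
# The prompt below is illustrative only; the actual call is commented
# out so this runs without an API key or network access.
params = {
    "model": "dall-e-3",
    "prompt": ("An Op Art optical illusion in the style of Salvador Dali: "
               "a black-and-white geometric spiral that seems to pull the "
               "viewer in"),
    "size": "1024x1024",    # DALL·E 3 also accepts 1792x1024 and 1024x1792
    "quality": "standard",  # or "hd" for finer detail
    "n": 1,                 # DALL·E 3 generates one image per request
}

# from openai import OpenAI
# client = OpenAI()  # reads OPENAI_API_KEY from the environment
# image_url = client.images.generate(**params).data[0].url
```

Tools like Visual Muse wrap this same underlying model, adding style instructions on top of the user’s prompt.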

Ralph’s blog images usually illustrate the topics discussed in the accompanying text. For Losey they represent a new form of expression where words, images, and hyperlinks form a multimodal whole, created by a hybrid combination of man and machine. In his blog the human – Ralph – does almost all the work on the text, including most of the research, and the AI does most of the work on the illustrations. Those familiar with Ralph’s work in e-Discovery know this mirrors his work with multimodal hybrid search techniques, i.e., Predictive Coding. They are described in detail in the free TAR Course linked at the top of the blog.

The History of OpenAI’s Generative AI Image Tool: DALL-E

1. Initial Release: DALL·E 1 (January 2021)

The initial release of DALL·E was in January 2021. The name is a homage to the famous twentieth-century artist Salvador Dalí (and Pixar’s WALL-E). OpenAI’s release of DALL-E was a breakthrough moment for generative models that create images from text descriptions. DALL·E 1 utilized a modified GPT-3 architecture to generate images from text prompts. Although remarkable compared to what others had been able to achieve before, this first model exhibited limitations in image coherence, resolution, and the ability to accurately represent complex scenes. The underlying transformer architecture was effective in generating diverse and creative outputs, but the model struggled with maintaining consistency and realism across different elements of the image.

2. Introduction of DALL·E 2 (April 2022)

In April 2022, OpenAI released DALL·E 2, which introduced several critical improvements:

Enhanced Resolution and Image Quality: DALL·E 2 featured improvements in the model’s ability to generate higher-resolution images with finer details. This was achieved through the refinement of the underlying generative process, likely involving improvements in the training dataset and the introduction of more sophisticated images for training.

Improved Compositional Understanding: The model demonstrated enhanced capabilities in handling complex prompts that required the accurate rendering of multiple objects and interactions. This improvement can be attributed to advancements in the model’s attention mechanisms, enabling better spatial awareness and coherence in generated images.

Advanced Control Mechanisms: Users were provided with more granular control over image attributes such as style, color, and composition. This was likely facilitated by the integration of additional conditioning layers or modules within the model architecture, allowing for more targeted manipulation of the generated outputs.

3. August 2023 Update: Refinement and Realism

The August 2023 update was the last full training update to the model. It brought significant refinements to DALL·E’s functionality, focusing on realism, detail, and user customization:

Increased Realism and Texture Fidelity: The model’s ability to generate photorealistic images was markedly improved, particularly in rendering textures, lighting, and shadows. These enhancements suggest advancements in the model’s ability to learn and apply high-fidelity visual patterns from training data, potentially through the use of more complex loss functions and training techniques that prioritize visual accuracy.

Enhanced Text Integration: DALL·E’s capability to incorporate textual elements within images saw notable improvement. This likely involved the refinement of text-to-image embedding processes and a better alignment between text tokens and their corresponding visual representations within the model.

User Customization: The update provided users with increased control over specific aspects of image generation, such as adjusting the perspective or selecting a particular art style. This was achieved through the introduction of more sophisticated user interfaces and the likely addition of new conditional input mechanisms within the model.

Feedback-Driven Optimization: The update also integrated feedback from users, leading to iterative adjustments that enhanced the model’s overall performance and usability. This process likely involved fine-tuning the model on user-provided data or leveraging reinforcement learning techniques to align the model outputs more closely with user preferences.

4. Ongoing Enhancements (Post-August 2023)

After the August 2023 update, DALL·E has continued to evolve with ongoing technical enhancements:

Refinement of Image Generation: Continuous improvements have been made to the model’s image generation capabilities, particularly in handling edge cases and complex scene compositions. These refinements suggest iterative updates to the model’s training regimen, potentially involving more diverse and higher-quality datasets.

Increased Processing Efficiency: The model has seen improvements in processing speed, reducing latency in image generation. This is indicative of optimizations in the model’s computational efficiency, likely through algorithmic refinements or the adoption of more efficient neural network architectures.

Advanced Control Features: The introduction of more nuanced control features has provided users with the ability to manipulate image attributes with greater precision. These features likely involve the integration of additional conditioning factors within the model, allowing for more detailed user input.

5. Interface and Usability Enhancements

In addition to technical improvements, there have been significant updates to the DALL·E user interface and overall usability:

Improved User Interface: The interface has been refined to offer a more intuitive user experience, facilitating easier access to advanced features. This likely involved the integration of better design principles and user experience research into the interface development process.

Accessibility Enhancements: Updates have been made to improve accessibility, ensuring that the platform is usable by a broader audience, including individuals with disabilities. This may involve the adoption of accessibility standards in interface design and the introduction of assistive technologies.

Collaborative Functionality: The platform has introduced features that support collaborative use cases, enabling multiple users to contribute to the image generation process. This functionality suggests the integration of multi-user input mechanisms and enhanced session management capabilities.

6. Industry-Specific Tools and Content Moderation

Recent developments have also focused on the introduction of tools tailored to specific industries and the enhancement of content moderation mechanisms:

Industry-Specific Tools: DALL·E has introduced features designed to meet the needs of particular industries, such as fashion, architecture, and graphic design. These tools likely involve the addition of domain-specific models or fine-tuning the base model on industry-specific datasets.

Content Moderation Enhancements: There have been improvements in content moderation, ensuring that generated images adhere to ethical standards and legal requirements. This likely involves the integration of content filtering algorithms and the use of human-in-the-loop processes to monitor and curate outputs. In Ralph’s opinion, OpenAI sometimes goes overboard in policing potential copyright violations and enforcing its other guardrails. Unlike DALL·E’s main competitor, Midjourney, OpenAI does not offer instant appeals and, where warranted, reversals. That can be annoying.
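The filtering-plus-human-review pipeline described above can be pictured schematically. This toy sketch is purely illustrative; the blocked-term list, thresholds, and function name are invented for the example and have nothing to do with OpenAI's actual moderation system:

```python
def moderate_prompt(prompt: str, blocked_terms: set[str]) -> str:
    """Toy moderation pipeline: reject clear violations automatically,
    route borderline prompts to a human moderator (the 'human-in-the-loop'
    step), and allow everything else."""
    words = set(prompt.lower().split())
    hits = words & blocked_terms
    if len(hits) >= 2:
        return "blocked"       # automated rejection
    if len(hits) == 1:
        return "human_review"  # escalated to a human moderator
    return "allowed"

blocked = {"counterfeit", "trademarked"}
print(moderate_prompt("a sunny beach with palm trees", blocked))  # allowed
print(moderate_prompt("a trademarked logo", blocked))             # human_review
```

The interesting design point is the middle tier: rather than a binary allow/block decision, ambiguous cases are escalated to humans, which is what "human-in-the-loop" curation means in practice.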

Conclusion: Ongoing Development and Legal Implications

The evolution of DALL·E underscores the rapid pace of advancement in generative AI technology. This is a powerful, fun new tool for all creators to make their own images and play with the incredible abilities of generative AI. If you just stick to words and computer code, you will miss out.

Plus, it is getting better every month. The kinds of things you can do with it now are mind-bending. You may think it is all an optical illusion, but it is not. It is a great time to be alive. For me it is a relaxing hobby. That’s one reason I made, frequently update, and freely share the Visual Muse custom GPT. It is available at the OpenAI Custom GPT Store, along with thousands of other free GPTs to try out. If you like visual images and want to move from the beginner level to the intermediate and advanced levels of DALL·E use, this may be a good tool for you. Plus, it can help teach you about artists and styles of art.

Each OpenAI update to DALL·E not only improves the creative capabilities of the model but also raises important legal and ethical questions about AI-generated content. As DALL·E continues to develop, legal professionals must remain vigilant in understanding these advancements to effectively navigate the associated legal challenges, including intellectual property rights, content moderation, and the ethical use of AI. To do that, it helps to be able to use the tools yourself, at least somewhat. Generative AI has to be used to be understood. Otherwise, no matter how smart you are, your understanding will be superficial, maybe even illusory.

Ralph Losey Copyright 2024 — All Rights Reserved