Report on the First Scientific Experiment to Test the Impact of Generative AI on Complex, Knowledge-Intensive Work

April 29, 2024

A first-of-its-kind experiment testing the use of AI found a 40% increase in quality and a 12% increase in productivity. The tests involved 18 different realistic tasks assigned to 244 consultants at the Boston Consulting Group. The Harvard Business School has published a preliminary report of the mammoth study. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013) (hereinafter “Working Paper”). The Working Paper is analyzed here with an eye on its significance for the legal profession.

My last article, From Centaurs To Cyborgs: Our evolving relationship with generative AI, explained that you should expect the unexpected when using generative AI. It also promised that use of sound hybrid prompt engineering methods, such as the Centaur and Cyborg methods, would bring more delight than fright. The Working Paper provides solid evidence of that claim. It reports on a scientific study conducted by AI experts, work experts and experimental scientists. They tested 244 consultants from the Boston Consulting Group (“BCG”). The Working Paper, although still in draft form, shares the key data from the experiment. Appendix E of the Working Paper discusses the conceptual model of the Centaur and Cyborg methods of AI usage, which I wrote about in From Centaurs To Cyborgs.

Harvard, Wharton, Warwick, MIT and BCG Experiment

This was an impressive scientific experiment involving a very large research group. The co-authors of the Working Paper are: Harvard’s Fabrizio Dell’Acqua, Edward McFowland III, and Karim Lakhani; Warwick Business School’s Hila Lifshitz-Assaf; Wharton’s Ethan Mollick; and MIT’s Katherine Kellogg. Further, Saran Rajendran, Lisa Krayer, and François Candelon ran the experiment on the BCG side. The generative AI used and tested here was ChatGPT4 (April 2023 version with no special training). For more background and detail on the Working Paper see the video lecture by Professor Ethan Mollick to Stanford students, Navigating the Jagged Technological Frontier (details of the experiment set-up starting at 18:15).

The 244 high-level BCG consultants were a diverse group who volunteered from offices around the world. They dedicated substantial time performing the 18 assigned tasks under the close supervision of the Working Paper author-scientists. Try getting that many lawyers in a global law firm to do the same.

The experiment included several important control groups and other rigorous experimental controls. The primary control was the unfortunate group of randomly selected BCG consultants who were not given ChatGPT4. They had to perform a series of assigned tasks in their usual manner, with computers of course, but without a generative AI tool. The control group comparisons provide strong evidence that use of AI tools on appropriate consulting tasks significantly improves both quality and productivity.

That qualification of “appropriate tasks” is important and involves another control group of tasks. The scientists designed, and included in the experiment, work tasks that they knew could not be done well with the help of AI, that is, not without extensive guidance, which was not provided. They knew that although these tasks were problematic for ChatGPT4, they could be done, and done well, without the use of AI. Working Paper at pg. 13. Pretty devious type of test for the poor guinea pig consultants. The authors called the assigned tasks that they knew to be beyond ChatGPT4’s then-current abilities work “beyond the jagged technological frontier.” In the authors’ words:

Our results demonstrate that AI capabilities cover an expanding, but uneven, set of knowledge work we call a “jagged technological frontier.” Within this growing frontier, AI can complement or even displace human work; outside of the frontier, AI output is inaccurate, less useful, and degrades human performance. However, because the capabilities of AI are rapidly evolving and poorly understood, it can be hard for professionals to grasp exactly what the boundary of this frontier might be at a given. (sic)

Working Paper at pg. 1.

The improvement in quality for tasks appropriate for GPT4 – work tasks inside the frontier – was remarkable, overall 40%, although somewhat inconsistent between sub-groups as will be explained. Productivity also went up, although to a lesser degree. There was no increase in quality or productivity for workers trying to use GPT4 for tasks beyond the AI’s ability, those outside the frontier. In fact, when GPT4 was used for those outside tasks, the answers of the AI-assisted consultants were 19 percentage points less likely to be correct. That is an important take-away lesson for legal professionals. Know what LLMs can do reliably, and what they cannot.

The scientists who designed these experiments themselves had difficulty coming up with work tasks that they knew would be outside ChatGPT4’s abilities:

In our study, since AI proved surprisingly capable, it was difficult to design a task in this experiment outside the AI’s frontier where humans with high human capital doing their job would consistently outperform AI.

Working Paper at pg. 19. It was hard, but the business experts finally came up with a consulting task that would make little ChatGPT4 look like a dunce.

The authors were vague in this draft report about the specific tasks “outside the frontier” used in the tests, and I hope this is clarified in the final version, since it is very important. But it looks like they designed an experiment where consultants with ChatGPT4 would use it to analyze data in a spreadsheet and omit important details found only in interviews with “company insiders.” The AI, and consultants relying on the AI, were likely to miss important details in the interviews and so make errors in recommendations. To quote the Working Paper at page 13:

To be able to solve the task correctly, participants would have to look at the quantitative data using subtle but clear insights from the interviews. While the spreadsheet data alone was designed to seem to be comprehensive, a careful review of the interview notes revealed crucial details. When considered in totality, this information led to a contrasting conclusion to what would have been provided by AI when prompted with the exercise instructions, the given data, and the accompanying interviews.

In other words, it looks like the Working Paper authors designed tasks where they knew ChatGPT4 would likely make errors and gloss over important details in interview summaries. They knew that the human-only expert control group would likely notice the importance of these details in the interviews and so make better recommendations in their final reports. Working Paper, Section 3.2 – Quality Disruptor – Outside the frontier at pages 13-15.

This is comparable to an attorney relying solely on ChatGPT4 to study a transcript of a deposition that they did not take or attend, asking GPT4 to summarize it. If the attorney only reads the summary, and the summary misses key details, which is known to happen, especially in long transcripts and where insider facts and language are involved, then the attorney can miss key facts and make incorrect conclusions. This is a case of over-delegation to an AI, past the jagged frontier. Attorneys should read the transcript, or have attended the deposition and so recall key insider facts, and thereby be in a position to evaluate the accuracy and completeness of the AI summary. Trust but verify.

The 19 percentage point decline in performance for work outside the frontier is a big warning flag to be careful, to go slow at first and know what generative AI can and cannot do well. See: Losey, From Centaurs To Cyborgs (4/24/24). Humans must remain in the loop for many of the tasks of complex knowledge work.

Still, the positive findings of increased quality and productivity for appropriate tasks, those within the jagged frontier, are very encouraging to workers in the consulting fields, including attorneys. This large experiment on volunteer BCG guinea pigs provides the first controlled experimental evidence of the impact of ChatGPT4 on various kinds of consulting work. It confirms the many ad hoc reports that generative AI allows you to improve both the quality and productivity of your work, faster and better. You just have to know what you are doing, know the jagged line, and intelligently use both Centaur and Cyborg type methods.

Appendix E of the Working Paper discusses these methods. To quote from Appendix E – Centaur and Cyborg Practices:

By studying the knowledge work of 244 professional consultants as they used AI to complete a realworld, analytic task, we found that new human-AI collaboration practices and reconfigurations are emerging as humans attempt to navigate the jagged frontier. Here, we detail a typology of practices we observed, which we conceptualize as Centaur and Cyborg practices.

Centaur behavior. … Users with this strategy switch between AI and human tasks, allocating responsibilities based on the strengths and capabilities of each entity. They discern which tasks are best suited for human intervention and which can be efficiently managed by AI. From a frontier perspective, they are highly attuned to the jaggedness of the frontier and not conducting full sub-tasks with genAI but rather dividing the tasks into sub-tasks where the core of the task is done by them or genAI. Still, they use genAI to improve the output of many sub-tasks, even those led by them.

Cyborg behavior. … Users do not just have a clear division of labor here between genAI and themselves; they intertwine their efforts with AI at the very frontier of capabilities. This manifests at the subtask level, when for an external observer it might even be hard to demarcate whether the output was produced by the human or the AI as they worked tightly on each of the activities related to the sub task.

As discussed at length in my many articles on generative AI, close supervision and verification are required for most of the work by legal professionals. It is an ethical imperative. For instance, no new case found by AI should ever be cited without human verification. The Working Paper calls this blurred division of labor Cyborg behavior.

Excerpts from the Working Paper

Here are a few more excerpts from the Working Paper and a key chart. Readers are encouraged to read the full report. The details are important, as the outside the frontier tests showed. I begin with a lengthy quote from the Abstract. (The image inserted is my own, generated using my GPT for Dall-E, Visual Muse: illustrating concepts with style.)

In our study conducted with Boston Consulting Group, a global management consulting firm, we examine the performance implications of AI on realistic, complex, and knowledge-intensive tasks. The pre-registered experiment involved 758 consultants comprising about 7% of the individual contributor-level consultants at the company. After establishing a performance baseline on a similar task, subjects were randomly assigned to one of three conditions: no AI access, GPT-4 AI access, or GPT-4 AI access with a prompt engineering overview.

We suggest that the capabilities of AI create a “jagged technological frontier” where some tasks are easily done by AI, while others, though seemingly similar in difficulty level, are outside the current capability of AI.

For each one of a set of 18 realistic consulting tasks within the frontier of AI capabilities, consultants using AI were significantly more productive (they completed 12.2% more tasks on average, and completed tasks 25.1% more quickly), and produced significantly higher quality results (more than 40% higher quality compared to a control group). Consultants across the skills distribution benefited significantly from having AI augmentation, with those below the average performance threshold increasing by 43% and those above increasing by 17% compared to their own scores.

For a task selected to be outside the frontier, however, consultants using AI were 19 percentage points less likely to produce correct solutions compared to those without AI. Further, our analysis shows the emergence of two distinctive patterns of successful AI use by humans along a spectrum of human-AI integration. One set of consultants acted as “Centaurs,” like the mythical half-horse/half-human creature, dividing and delegating their solution-creation activities to the AI or to themselves. Another set of consultants acted more like “Cyborgs,” completely integrating their task flow with the AI and continually interacting with the technology.

Key Chart Showing Quality Improvements

The key chart in the Working Paper is Figure 2, found at pages 9 and 28. It shows the underlying data of quality improvement. In the words of the Working Paper:

Figure 2 uses the composite human grader score and visually represents the performance distribution across the three experimental groups, with the average score plotted on the y-axis. A comparison of the dashed lines and the overall distributions of the experimental conditions clearly illustrates the significant performance enhancements associated with the use of GPT-4. Both AI conditions show clear superior performance to the control group not using GPT-4.

The version of the chart shown below has additions by one of the coauthors, Professor Ethan Mollick (Wharton), who added the red arrow comments not found in the published version. (Note the “y-axis” in the chart is the vertical scale labeled “Density.” In XY charts, “Density” generally refers to the distribution of a variable, i.e., the probability of the data distribution. The horizontal “x-axis” is the overall quality performance measurement.)

Professor Mollick provides this helpful highlight of the main findings of the study, both quality and productivity:

[F]or 18 different tasks selected to be realistic samples of the kinds of work done at an elite consulting company, consultants using ChatGPT-4 outperformed those who did not, by a lot. On every dimension. Every way we measured performance. Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without. Those are some very big impacts.

Centaurs and Cyborgs on the Jagged Frontier, (One Useful Thing, 9/16/23).

Preliminary Analysis of the Working Paper

I was surprised at first to see that the quality of the “some additional training group” did not go up more than the approximate 8% shown in the chart. In digging deeper I found a YouTube video by Professor Mollick on this study where he said at 19:14 that the training, which he created, consisted only of a five-to-ten-minute seminar. In other words, very cursory, and yet it still had an impact on performance.

Another thing to emphasize about the study is how carefully the tasks for the tests were selected and how realistic the challenges were. Again, here is a quote from Ethan Mollick‘s excellent article. Centaurs and Cyborgs on the Jagged Frontier, (One Useful Thing, 9/16/23). Also see Mollick’s interesting new book, Co-Intelligence: Living and Working with AI (4/2/24).

To test the true impact of AI on knowledge work, we took hundreds of consultants and randomized whether they were allowed to use AI. We gave those who were allowed to use AI access to GPT-4 . . . We then did a lot of pre-testing and surveying to establish baselines, and asked consultants to do a wide variety of work for a fictional shoe company, work that the BCG team had selected to accurately represent what consultants do. There were creative tasks (“Propose at least 10 ideas for a new shoe targeting an underserved market or sport.”), analytical tasks (“Segment the footwear industry market based on users.”), writing and marketing tasks (“Draft a press release marketing copy for your product.”), and persuasiveness tasks (“Pen an inspirational memo to employees detailing why your product would outshine competitors.”). We even checked with a shoe company executive to ensure that this work was realistic – they were. And, knowing AI, these are tasks that we might expect to be inside the frontier.

Most of the tasks listed for this particular test do not seem like legal work, but there are several general similarities, for example, the creative task of brainstorming new ideas, the analytical tasks and the persuasiveness tasks. Legal professionals do not write inspirational memos to employees, like BCG consultants, but we do write memos to judges trying to persuade them to rule in our favor.

Another surprising finding of the Working Paper is that use of ChatGPT by BCG consultants on average reduced the range of ideas that the subjects generated. This is shown in Figure 1 below.

Figure 1. Distribution of Average Within Subject Semantic Similarity by experimental condition: Group A (Access to ChatGPT), Group B (Access to ChatGPT + Training), Group C (No access to ChatGPT), and GPT Only (Simulated ChatGPT Sessions).

We also observe that the GPT Only group has the highest degree of between semantic similarity, measured across each of the simulated subjects. These two results taken together point toward an interesting conclusion: the variation across responses produced by ChatGPT is smaller than what human subjects would produce on their own, and as a result when human subjects use ChatGPT there is a reduction in the variation in the eventual ideas they produce. This result is perhaps surprising. One would assume that ChatGPT, with its expansive knowledge base, would instead be able to produce many very distinct ideas, compared to human subjects alone. Moreover, the assumption is that when a human subject is also paired with ChatGPT the diversity of their ideas would increase.

While Figure 1 indicates access to ChatGPT reduces variation in the human-generated ideas, it provides no commentary on the underlying quality of the submitted ideas. We obtained evaluations of each subject’s idea list along the dimension of creativity, ranging from 1 to 10, and present these results in Table 1. The idea lists provided by subjects with access to ChatGPT are evaluated as having significantly higher quality than those subjects without ChatGPT. Taken in conjunction with the between semantic similarity results, it appears that access to ChatGPT helps each individual construct higher quality ideas lists on average; however, these ideas are less variable and therefore are at risk of being more redundant.
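The “within-subject semantic similarity” measure quoted above can be made concrete with a small sketch. This is an illustrative toy, not the authors’ actual pipeline (the paper scored similarity over embeddings of each idea; the word-count vectors and toy idea lists below are my own assumptions): represent each idea as a word-count vector, then average the pairwise cosine similarities within one subject’s idea list. A higher average means a more redundant, “same-y” list.

```python
import math
from collections import Counter
from itertools import combinations

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def within_subject_similarity(ideas: list[str]) -> float:
    """Average pairwise cosine similarity across one subject's idea list."""
    vecs = [Counter(idea.lower().split()) for idea in ideas]
    pairs = list(combinations(vecs, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Toy idea lists (hypothetical, for illustration only).
varied = [
    "waterproof hiking boot for trail runners",
    "recyclable sandal made from ocean plastic",
    "smart insole that tracks gait for physical therapy",
]
samey = [
    "lightweight running shoe with foam cushioning",
    "lightweight running shoe with extra foam cushioning",
    "cushioned lightweight shoe for running",
]

# The more varied list scores lower on within-subject similarity.
print(within_subject_similarity(varied) < within_subject_similarity(samey))
```

On this measure, the paper’s finding is that lists produced with ChatGPT’s help look more like the second list than the first: individually strong ideas, but closer together in meaning.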

So there is hope for creative brainstormers, at least with the GPT4 level of generative AI. Generative AI is clearly more redundant than humans. As quoted in my last article, Professor Mollick says AI-generated ideas are a bit “homogenous and same-y in aggregate.” Losey, From Centaurs To Cyborgs: Our evolving relationship with generative AI (04/24/24). Great phrase that ChatGPT4 could never have come up with.

Also see: Mika Koivisto and Simone Grassini, Best humans still outperform artificial intelligence in a creative divergent thinking task (Nature, Scientific Reports, 2/20/24) (“AI has reached at least the same level, or even surpassed, the average human’s ability to generate ideas in the most typical test of creative thinking. Although AI chatbots on average outperform humans, the best humans can still compete with them.“); Losey, ChatGPT-4 Scores in the Top One Percent of Standard Creativity Tests (e-Discovery Team, 7/21/23) (“Generative Ai is still far from the quality of the best human artists. Not yet. … Still, the day may come when Ai can compete with the greatest human creatives in all fields. … More likely, the top 1% in all fields will be humans and Ai working together in a hybrid manner.”).

AI As a ‘Skill Leveler’

As mentioned, the improvement in quality was not consistent between subgroups. The consultants with the lowest pre-AI test scores improved the most with AI. They became much better than they were before. The same goes for the middle of the pack pre-AI scorers. They also improved, but by a lesser amount. The consultants at the top end of pre-AI scores also improved, but by an even smaller amount than those behind them. Still, with their small AI improvements, the pre-AI winners maintained their leadership. The same consulting experts still outscored everyone. No one caught up with them. What are the implications of this finding on future work? On training programs? On hiring decisions?
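The gap-narrowing dynamic described above can be seen with simple arithmetic using the gains the Working Paper reports (43% for below-average performers, 17% for above-average). The baseline scores in this sketch are hypothetical, chosen only to illustrate the point; they are not from the paper.

```python
# Hypothetical baseline quality scores (0-10 scale); the 43% and 17%
# multipliers are the gains reported in the Working Paper.
below_avg_baseline = 5.0
above_avg_baseline = 8.0

below_avg_with_ai = below_avg_baseline * 1.43   # +43% for weaker performers
above_avg_with_ai = above_avg_baseline * 1.17   # +17% for stronger performers

gap_before = above_avg_baseline - below_avg_baseline   # 3.00
gap_after = above_avg_with_ai - below_avg_with_ai      # 9.36 - 7.15 = 2.21

print(f"gap before AI: {gap_before:.2f}, after AI: {gap_after:.2f}")
# The ordering is preserved (the top performers stay on top),
# but the gap narrows: a leveling, though not a complete one.
```

Whatever the actual baselines, as long as everyone starts below the scale's ceiling and the weaker group's percentage gain is larger, the gap shrinks but the leaderboard order holds, which is exactly the pattern the study found.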

Here is Professor Ethan Mollick’s take on the significance of this finding.

It (AI) works as a skill leveler. The consultants who scored the worst when we assessed them at the start of the experiment had the biggest jump in their performance, 43%, when they got to use AI. The top consultants still got a boost, but less of one. Looking at these results, I do not think enough people are considering what it means when a technology raises all workers to the top tiers of performance. It may be like how it used to matter whether miners were good or bad at digging through rock… until the steam shovel was invented and now differences in digging ability do not matter anymore. AI is not quite at that level of change, but skill leveling is going to have a big impact.

Ethan Mollick, Centaurs and Cyborgs on the Jagged Frontier: I think we have an answer on whether AIs will reshape work (One Useful Thing, 9/16/23).

My only criticism of Professor Mollick’s analysis is that it glosses over the differences that remained, even with AI, between the very best and the rest. In the field I know, law, not business consulting, the differences between the very good lawyers, the B or B+, and the great lawyers, the A or A+, are still very significant. All attorneys with skill levels in the B – A+ range can legitimately be considered top tier legal professionals, especially as compared to the majority of lawyers in the average and below-average range. But the impact of these skill differences on client services can still be tremendous, especially in matters of great complexity or importance. Just watch when two top tier lawyers go against one another in court, one good and one truly great.

Further Analysis of Skill Leveling

What does the leveling phenomenon of “average becoming good” mean for the future of work? Does it mean that every business consultant with ChatGPT will soon be able to provide top tier consulting advice? Will every business consultant on the street with ChatGPT soon be able to “pen an inspirational memo to employees detailing why your product would outshine competitors“? Will their lower priced memos be just as good as top tier BCG memos? Is generative AI setting the stage for a new type of John Henry moment for knowledge workers, as Professor Mollick suggests? Will this near leveling of the playing field hold true for all types of knowledge workers, not only business consultants, but also doctors and lawyers?

To answer these questions it is important to note that the results of this first study on business consulting work do not show a complete leveling. Not all of the consultants became John Henry superstars. Instead, the study showed the differences continued, but were less pronounced. The gap narrowed, but did not disappear. The race only became more competitive.

Moreover, the names of the individual winners and also-rans remained the same. It is just that the “losers” (seems like too harsh a term) now did not “lose” by as much. In the race to quality the same consultants were still leading, but the rest of the pack was not as far behind. Everyone got a boost, even the best. But will this continue as AI advances? Or eventually will some knowledge workers do far better with the AI steam hammers or shovels than others, no matter where they started out? Moreover, under what circumstances, including pricing differentials, do consumers choose the good professionals who are not quite as good as those on the medalist stand?

The study results show that the pre-AI winners, those at the very top of their fields before the generative AI revolution, were able to use the new AI tools as well as the others. For that reason, their quality and productivity were also enhanced. They still remained on top, still kept their edge. But in the future, assuming AI gets better, will that edge continue? Will there be new winners and also-rans? Or eventually will everyone tie for first, at least in so far as quality and productivity are concerned? Will all knowledge workers end up the same, all equal in quality and productivity?

That seems unlikely, no matter how good AI gets. I cannot see this happening anytime soon, at least in the legal field. (I assume the same is also true for the medical field.) In law the analysis and persuasion challenges are far greater than those in most other knowledge fields. The legal profession is far too complex for AI to create a complete leveling of performance, at least not in the foreseeable future. I expect the differentials among medical professionals will also continue.

Moreover, although not studied in this report, it seems obvious that some legal workers will become far better at using AI than others. In this first study of business consultants, all started on the same level of inexperience using generative AI. Only some were given training. The training provided, only five to ten minutes, was still enough to move the needle. The group given this almost trivial amount of training did perform better, although not enough to close the gap.

With significant training, or experience, the improvements should be much greater. Maybe quality will increase by 70%, instead of the 40% we saw with little or no training. Maybe productivity will increase by at least 50%, instead of just 12%. That is what I would expect based on my experience with lawyers since 2012 using predictive coding. After lawyer skill-sets develop for use of generative AI, all of the performance metrics may soar.

Conclusion

In this experiment where some professionals were given access to ChatGPT4 and some were not, a significant, but not complete leveling of performance was measured. It was not a complete leveling because the names at the very top of the leaderboard of quality and productivity remained the same. I believe this is because the test subjects were all ChatGPT virgins. They had not previously learned prompt engineering methods, even the beginning basics of Centaur or Cyborg approaches. It was all new to them.

As part of the experiment some were given ten minutes of basic training in prompt engineering and some were given none. In the next few years some professionals will receive substantial GPT training and attain mastery of the new AI tools. Many will not. When that happens, the names on the top of the leaderboard will likely change, and change dramatically.

History shows that times of great change are times of opportunity. The deck will be reshuffled. Who will learn and readily adapt to the AI enhancements and who will not? Which corporations and law firms will prosper in the age of generative AI, and which will fail? The only certainty here is the uncertainty of surprising change.

In the future every business may well have access to top tier business consultants. All may be able to pen an inspirational memo to employees. But will this near leveling between the best, and the rest, have the same impact on the legal profession? The medical profession? I think not. Especially as some in the profession gain skills in generative AI much faster than others. The competition between lawyers and law firms will remain, but the names on the top of the leader board will change.

From a big picture perspective the small differentials between good and great lawyers are not that important. Of far greater importance is the likely social impact of the near leveling of lawyers. The gain in skills of the vast majority of lawyers will make it possible, for the first time, for high quality legal services to become available to all.

Consumer law and other legal services could become available to everyone, at affordable rates, and without a big reduction in quality. In the future, as AI creates a more level playing field, the poor and middle class will have access to good lawyers too. These will be affordable good lawyers who, when ethically assisted by AI, are made far more productive. This can be accomplished by responsible use of AI. This positive social change seems likely. Equal justice for all will then become a common reality, not just an ideal.

Ralph Losey Copyright 2024. All Rights Reserved.


From Centaurs To Cyborgs: Our evolving relationship with generative AI

April 24, 2024

Centaurs are mythological creatures with a human’s upper body, and a horse’s lower body. They symbolize a union of human intellect and animal strength. In AI technology, Centaur refers to a type of hybrid usage of generative AI that combines human and AI capabilities. It does so by maintaining a clear division of labor between the two, like a centaur’s divided body. The Cyborgs, by contrast, have no such clear division; the human and AI tasks are closely intertwined.

A Centaur method is designed so there is one work task for the human and another for the AI. For example, creation of a strategy is typically a task done by the human alone. It is a separate task for the AI to write an explanation of the strategy devised by the human. The lines between the tasks are clear and distinct, just like the dividing line between the human and horse in a Centaur.

This concept is shown by the above image. It was devised by Ralph Losey and then generated by his AI ChatGPT4 model, Visual Muse. The AI had no part in devising the strategy and no part in the idea of putting the image of a Centaur here. It was also Ralph’s sole idea to have the human half appear in robotic form and to use a watercolor style of illustration. The AI’s only task was to generate the image. That was the separate task of the AI. Unfortunately, it turns out AI is not good at making Centaurs, especially ones with a robot top instead of a human head, like the following image.

It made this image after only a few tries. But the first image of the Centaur with a robot top was a struggle. I can usually generate the image I have in mind, often even better than what I first conceived, in just a few prompts. But here, with a half-robot Centaur, it took 118 attempts to generate the desired image! I tried many, many different prompts. I even used two different image generation programs, Dall-E and Midjourney. I tried 96 times with Midjourney (it generates fast) and never could get it to make a Centaur with a robot top half. But it did make quite a few funny mistakes, and a few scary ones too. Shown below are a few of the 117 AI bloopers. I note that overall Dall-E did much better than Midjourney, which never did seem to “get it.” The one Dall-E example of a blooper is bottom right, pretty close. The rest are all by Midjourney. I especially like the robot head on the butt of the sort-of robot horse. It is the bass-ackwards version of what I requested!

After 22 tries with Dall-E I finally got it to make the image I wanted.

The point of this story is that the Centaur method failed to make the Centaur. I was forced to work very closely and directly with the AI to get the image I wanted; I had to switch to the Cyborg method. I did not want to, but the Cyborg method was the only way I could get the AI to make a Centaur with a robotic top. Back and forth I went, 118 times. The irony is clear. But there is a deeper lesson here that emerged from the frustration, which I will come back to in the conclusion.

Background on the Centaur and Cyborg as Images of Hybrid Computer Use

The idea to use the Centaur symbol to describe an AI method is credited to chess grandmaster Garry Kasparov. He is famous in AI history for his losing battle in 1997 with IBM’s Deep Blue. He retired from chess competition immediately thereafter. Kasparov returned a few years later with computer in hand, with the idea that man and computer could beat any computer alone. It worked, a redemption of sorts. Kasparov ended up calling this Centaur team chess, where human-machine teams play each other online. It is still actively played today. Many claim it is still played at a level beyond that of any supercomputer today, although this is untested. See e.g. The Real Threat From ChatGPT Isn’t AI…It’s Centaurs (PCGamer, 2/13/23).

The use of the term Centaur was expanded and explained by Harvard Professor Soroush Saghafian in his article Effective Generative AI: The Human-Algorithm Centaur (Harvard DASH, 10/2023). He explains the hybrid relationship as one where the unique powers of human intuition are added to those of artificial intelligence. In a medical study conducted at his Harvard lab with the Mayo Clinic, they analyzed the results of doctors using LLM AI in a centaur-type model. The goal was to try to reduce readmission risks for patients who underwent organ transplants.

We found that combining human experts’ intuition with the power of a strong machine learning algorithm through a human-algorithm centaur model can outperform both the best algorithm and the best human experts. . . .

In this article, we focus on recent advancements in Generative AI, and especially in Large Language Models (LLMs). We first present a framework that allows understanding the core characteristics of centaurs. We argue that symbiotic learning and incorporation of human intuition are two main characteristics of centaurs that distinguish them from other models in Machine Learning (ML) and AI. 

Id. at pg. 2.

The Cyborg model is slightly different in that man and machine work even more closely together. The concept of a cyborg, a mechanical man, also has its origins in ancient Greek myth: Talos. He was supposedly a giant bronze mechanical man built by Hephaestus, the Greek god of invention, blacksmithing and volcanos. The Roman equivalent god was Vulcan, who was supposedly ugly, but there are no stories of his having pointy ears. You would think that techies might seize upon the name Vulcan, or Talos, to symbolize the other method of hybrid AI use, where tasks are closely connected. But they did not; they went with the much more modern term: Cyborg.

The word was first coined in 1960 (before Star Trek) by two dreamy AI scientists who combined the root words CYBernetic and ORGanism to describe a being with both organic and biomechatronic body parts. Here is Ralph Losey’s image of a Cyborg, which, again ironically, he created quickly with a simple Centaur method in just a few tries. Obviously the internet, which trained these LLM AIs, has many more cyborg-like android images than centaurs.

More On the Cyborg Method

The Cyborg method, unlike the Centaur, supposedly has no clear-cut divisions between human and AI work. Instead, Cyborg work and tasks are all closely related, like a cybernetic organism. People and ChatGPTs usually say that the Cyborg approach involves a deep integration of AI into the human workflow. The goal is a blend where AI and human intelligences constantly interact and complement each other. In contrast to the Centaur method, the Cyborg does not distinctly separate tasks between AI and humans. For instance, in the Cyborg approach a human might start a task, and AI might refine or advance it, or vice versa. This approach is said to be particularly valuable in dynamic environments where continuous adaptation and real-time collaboration between human and AI are crucial. See e.g. Center for Centaurs and Cyborgs OpenAI GPT version (Free GPT version by Community Builder that we recommend. Try asking it more about Cyborgs and Centaurs). Also see: Emily Reigart, A Cyborg and a Centaur Walk Into an Office (NAB Amplify, 9/24/23); Ethan Mollick, Centaurs and Cyborgs on the Jagged Frontier: I think we have an answer on whether AIs will reshape work (One Useful Thing, 9/16/23).

Ethan Mollick is a Wharton Professor who is heavily involved with hands-on AI research in the work environment. To quote the second to last paragraph of his article (emphasis added):

People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.

Asleep at the Wheel

Obviously, falling asleep at the wheel is what we have seen in the AI-hallucinated fake citation cases. Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. June 22, 2023) (first in a growing list of sanctioned-attorney cases). Also see: Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). But see: United States of America v. Michael Cohen (SDNY, 3/20/24) (Cohen’s attorney not sanctioned. “His citation to non-existent cases is embarrassing and certainly negligent, perhaps even grossly negligent. But the Court cannot find that it was done in bad faith.”)

These lawyers were not only asleep at the wheel, they had no idea what they were driving, nor that they needed a driving lesson. It is not surprising they crashed and burned. It is like the first automobile drivers who would instinctively pull back on the steering wheel in an emergency to get their horses to stop. That may be the legal profession’s instinct as well, to try to stop AI, to pull back from the future. But it is shortsighted, at best. The only viable solution is training and, perhaps, licensing of some kind. These horseless buggies can be dangerous.

Skilled legal professionals who have studied prompt engineering, either methodically or through a longer trial and error process, write prompts that lead to fewer mistakes. Strategic use of prompts can significantly reduce the number and type of mistakes. Still, surprise errors by generative AI cannot be eliminated altogether. Just look at the trouble I had generating a half robot Centaur. LLM language and image generators are masters of surprise. Still, with hybrid prompting skills the surprise results typically bring more delight than fright.

That was certainly the case in a recent study by Professor Ethan Mollick and several others on the impact of AI hybrid work. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013). I will write a full article on this soon. As a quick summary, researchers from multiple schools collaborated with the Boston Consulting Group and found a surprisingly high increase in productivity by consultants using AI. The study was based on controlled tests of an AI hybrid team approach to specific consulting work tasks. The results also showed that, even though the specific work tasks tested were performed much faster, quality was maintained, and for some consultants, increased significantly.

Although we do not have a formal study yet to prove this, it is the supposition of most everyone in the legal profession now using AI that lawyers can also improve productivity and maintain quality. Of course, careful double-checking of AI work product is required to catch errors and maintain quality. This applies not only to the obvious case hallucinations, but also to what Professor Mollick called AI’s tendency toward “homogenous and same-y in aggregate” writing. Also see: Losey, Stochastic Parrots: How to tell if something was written by an AI or a human? (common “tell” words used way too often by generative AIs). Lawyers who use AI attentively, without over-delegation to AI, can maintain high-quality work, meet all of their ethical duties, and still increase productivity.

The hybrid approaches to use of generative AI, both Centaur and Cyborg, have been shown to significantly enhance consulting work. Many legal professionals using AI are seeing the same results in legal work. Lawyers using AI properly can significantly increase productivity and maintain quality. For most of the Boston Consulting Group consultants tested, the quality of their work actually went up. There were, however, a few exceptional outliers whose test quality was already at the top. The AI did not make the work of these elite few any better. The same may be true of lawyers.

Transition from Centaur to Cyborg

Experience shows that lawyers who do not use AI properly, typically by over-delegation and inadequate supervision, may increase productivity, but do so at the price of increased negligent output. That is too high a price. Moreover, legal ethics, including Model Rule 1.1, requires competence. I conclude, along with most everyone in the legal profession, that stopping the use of AI by lawyers is futile, but at the same time, we should not rush into negligent use of this powerful tool. Lawyers should go slow and delegate to AI on a very limited basis at first. That is the Centaur approach. Again, like most everyone else, my opinion is to start slow and begin to use AI in a piecemeal fashion. Begin now, though, and avoid death by committee, or as lawyers like to call it, paralysis by analysis.

Then, as your experience and competence grow, slowly increase your use of generative AI and experiment with applying it to more and more tasks. You will start to become more Cyborg-like. Soon enough you will have the AI competitive edge that so many outside experts over-promise.

Vendors and outside experts can be a big help in implementing generative AI, but remember, this is your legal work. For software, look at the subscription license terms carefully. Note any gaps between what marketing promises and the superseding agreements deliver. Pick and choose your generative AI software applications carefully. Use the same care in picking the tasks to begin to implement official AI usage. You know your practice and capabilities better than any outside expert offering cookie-cutter solutions.

Use the same care and intelligence in selecting the best, most qualified people in your firm or group to train and investigate possible purchases. Here the super-nerds should rule, not the powerful personalities, nor even necessarily the best attorneys. New skill sets will be needed. Look for the fast learners and the AI enthusiasts. Start soon, within the next few months.

Conclusion

According to Wharton Professor Ethan Mollick, secret use and false claims of personal work product have already begun in many large corporations. In his YouTube lecture at 53:30 he shares a funny story of a friend in a big bank. She secretly uses AI all of the time to do her work. Ironically, she was the person selected to write a policy to prohibit the use of AI. She did as requested, but did not want to be bothered to do it herself, so she directed a GPT on her personal phone to do it. She sent the GPT-written policy prohibiting use of GPTs to her corporate email account and turned it in. The clueless boss was happy, probably impressed by how well it was written. Mollick claims that secret, unauthorized use of AI in big corporations is widespread.

This reminds me of the time I personally heard the GC of a big national bank, now defunct, proudly say that he was going to ban the use of email by his law department. We all smiled, but did not say no to mister big. After he left, we LOL’ed about the dinosaur for weeks. Decades later I still remember it well.

So do not be foolish or left behind. Proceed expeditiously, but carefully. Then you will know for yourself, from first-hand experience, the opportunities and the dangers to look out for. And remember, no matter what any expert may suggest to the contrary, you must always supervise the legal work done in your name.

There is a learning curve in the careful, self-knowledge approach, but eventually the productivity will kick in, and with no loss of quality, nor embarrassing public mistakes. For most professionals, there should also be an increase in quality, not just quantity or speed of performance. In some areas of practice, there may be a substantial improvement in both productivity and quality. It all depends on the particular tasks and the circumstances of each project. Lawyers, like life, are complex and diverse, with ever-changing environments and facts.

My image generation failure is a good example. I expected that a Centaur-like delegation to AI would result in a good image of a Centaur with a robotic top half. Maybe I would need to make a few adjustments and tries, but I never would have guessed I would have to make 118 attempts before I got it right. My efforts with Visual Muse and Midjourney are typically full of pleasant surprises, with only a few frustrating failures. (Although the failure images are sometimes quite funny.) So I was somewhat surprised to have to spend an hour to bring my desired cyber Centaur to life. Somewhat, but not totally, surprised. I know from experience that this just happens sometimes with generative AI. It is the nature of the beast. Some uncertainty is a certainty.

As is often the case, the hardship did lead to a new insight into the relationship between the two types of hybrid AI use, Centaur and Cyborg. I realized they are not a duality, but more of a skill-set evolution. They have different timings and purposes, and require different prompting skill levels. On a learning-curve basis, we all start as Centaurs. With experience we slowly become more Cyborg-like. We can step in with close Cyborg processes when the Centaur approach does not work well for some reason. We can cycle in and out between the two hybrid approaches.

There is a sequential reality to first use. Our adoption of generative AI should begin slowly, like a Centaur, not a Cyborg. It should be done with detachment and separation into distinct, easy tasks. Start with the most boring, repetitive tasks first. See e.g. Ralph Losey’s GPT model, Innovation Interviewer (work in progress, but available at the ChatGPT store).

Our mantra as a beginner Centaur should be a constant whisper of trust, but verify. Check the AI’s work, learn its mistakes, and impose policies and procedures to guard against them. That is what good Centaurs do. But as personal and group expertise grows, the hybrid relations will naturally grow stronger. We will work closer and closer with AI over time. It will be safe and ethical to speed up because we will learn its eccentricities, its strengths and weaknesses. We will begin to use AI in more and more work tasks. We will slowly, but surely, transform into a Cyborg work style. Still, as legal professionals, our work will be ever mindful of our duties to clients and courts.

More machine-attuned than before, we will become like Cyborgs, but still remain human. We will step into a Cyborg mind-set to get the job done, but will bring our intuition, feelings and other special human qualities with us.

I agree with Ray Kurzweil that we will ultimately merge with AI, but disagree that it will come by nanobots in the blood or other physical alterations. I think it is much more likely to come from wearables, such as special glasses and AI connectivity devices. It will be more like the 2013 movie Her, which is Sam Altman’s favorite, with an AI operating system as a constant companion on your cell phone (the inseparable cell phone part has already come true). It will, I predict, be more like that than the wearables shown in the Avengers movies, such as Tony Stark’s flying Iron Man suit.

But probably it will look nothing like either of those Hollywood visions. The real future has yet to be invented. It is in your hands.

Ralph Losey Copyright 2024. — All Rights Reserved


Stochastic Parrots: How to tell if something was written by an AI or a human?

April 5, 2024

There are two types of “tells” as to whether a writing is a fake, just another LLM-created parrot, or whether it is real, a bona fide human creation. One is to look at the structure and style of the writing, and the other is to look at the words used. This blog examines both types of tells, starting with the tell words.

All of the words used here are “real,” written by Ralph Losey, whereas all of the images are “fake,” prompted into existence by Ralph using various image generation tools, including his own GPT4: Visual Muse: illustrating concepts with style. This blog is a continuation, a part three, of two other recent blogs, Stochastic Parrots: the hidden bias of large language model AI, and Navigating the High Seas of AI: Ethical Dilemmas in the Age of Stochastic Parrots.

The ‘Tell Words’ favored by Stochastic Parrots

We begin with an introductory video of a Pirate who looks sort of like Ralph talking about AI tell words. The pirate, of course, is fake, created by Ralph Losey, but all of the words were personally written by him.

The ‘Tell Words’ favored by Stochastic Parrots as told by a Pirate created by Ralph Losey. Just click to watch Matey.

Transcript of the Pirate Video on AI Tell Words

Ahoy Mates!

Did a real person write my pirate script? Or was it written by an AI? A Stochastic Parrot on me shoulder?  Aye, Matey, it happens all the time these days. Who, is telling who, what to say?  Blimey! How can you tell real writing from fake? Arr, matey, listen up.  I’ll give ye some tips on how to tell the difference!

Here are the top “tell words” favored by our phony feathered friends. It’s not foolproof, as even I have used these cliché words that AI seems to love so much. Ye be a teacher or supervisor concerned about AI plagiarism? Then you might save this list. These damn AIs, Stochastic Parrots they be called, often use these words incorrectly, or too much. Here they be, in rough order of parrot popularity.

Synergy, that be the worst of all, followed by Blockchain, which is often used inappropriately. Here are some more beauties to look out for: Leverage. Innovative. Disruptive or Disrupt. AI-driven. Pivot. Oh, I hate that one!

Here are more parrot nasties. Scale. Agile. Think outside the box. Paradigm shift. Bandwidth. Deep dive. Shiver me timbers! Are you starting to feel sick from all the vague cliches?

Aye. But There be more, many more. They all need to go to Davey Jones Locker: Ecosystem. Due diligence. Empower. Holistic. Ah, that was once a good word.

Blimey! Here’s  a really bad one. Game-changer. This makes me heave ho! Follow the link below to hear the rest!

List of “Tell Words” Indicative of AI Writing

Thanks again to the “No More Delve” GPT for helping me with this. I recommend this program. These are roughly ranked in order of misuse. This list does not purport to be complete nor based on scientific studies.

  1. Synergy – Often used to describe the potential benefits of combining efforts or entities, but frequently seen as vague.
  2. Leverage – Intended to convey using something to its maximum advantage, but often considered jargon when overused.
  3. Innovative – While it’s meant to describe something novel or original, its overuse has dulled its impact.
  4. Disruptive – Used to describe products or services that radically change an industry or technology, but now seen as clichéd.
  5. Blockchain – Specific to technology but has been overused to the point of becoming a buzzword even in irrelevant contexts.
  6. AI-driven – Meant to highlight the use of artificial intelligence, but often used without clear relevance to actual AI capabilities.
  7. Pivot – Originally meant to describe a significant strategy change, now often used for any minor adjustment.
  8. Scale – In business, it’s about growth, but its frequent use has made it a buzzword.
  9. Agile – A specific project management method that’s now broadly used to describe any flexible approach, diluting its meaning.
  10. Think outside the box – Intended to encourage creative thinking, it has become a cliché itself.
  11. Paradigm shift – Used to signify a fundamental change in approach or underlying assumptions, but now often seen as pretentious.
  12. Bandwidth – Borrowed from technology to describe personal or team capacity, its metaphorical use is now considered clichéd.
  13. Deep dive – Meant to indicate a thorough exploration, but often used unnecessarily.
  14. Ecosystem – In technology, refers to a complex network or interconnected system, but often used vaguely.
  15. Due diligence – Critical in legal contexts but used broadly and sometimes inaccurately in business.
  16. Blockchain – Bears repeating due to its pervasive use beyond relevant contexts.
  17. Empower – Intended to convey delegation or giving power to others, it’s now seen as an empty buzzword.
  18. Holistic – Meant to indicate consideration of the whole instead of just parts, but often used vaguely.
  19. Game-changer – Used to describe something that significantly alters the current scenario, but now seen as hyperbolic.
  20. Touch base – Intended as a casual way to say “let’s communicate,” but often viewed as unnecessarily jargony.
  21. Delve – Often overused to suggest a deep exploration, diminishing its impact.
  22. Journey – Used metaphorically to describe processes or experiences, becoming a cliché.
  23. Supercharge – Tends to overpromise on the impact of strategies or tools.
  24. Embrace – Frequently employed to suggest acceptance or adoption, often without specificity.
  25. Burning question – A dramatic way to highlight an issue, but overuse dilutes its urgency.
  26. Unlock – Commonly used to imply revealing or unleashing potential, becoming worn out.
  27. Roadmap – Overused in business and technology to describe plans or strategies, losing its originality.
  28. Uplevel – Buzzword suggesting improvement or upgrade, often vague.
  29. Future-proof – Used to describe strategies or technologies, but often without clear methodology.
  30. Revolutionize – Promises transformative change, but overuse has made it less meaningful.
  31. Navigate – Frequently used to describe maneuvering through challenges, becoming clichéd.
  32. Harness – Suggests utilizing resources or forces, but overused to the point of vagueness.
  33. Transform – A catch-all term for change, its impact has been diluted through overuse.
  34. Drives – Often used to denote motivation or causation, but has become a buzzword.
  35. Realm – Used to describe fields or areas of interest, but has grown to be seen as pretentious.
  36. Vibrant – A go-to adjective for lively or bright descriptions, now seen as overused.
  37. Innovation – Once meaningful, now a generic term for anything new or updated.
  38. Foster – Commonly used for encouraging development, but its impact is lessened through overuse.
  39. Elevate – Used to suggest improvement or enhancement, often without clear context.
  40. In summary – Overused transition that can be seen as unnecessary filler.
  41. In conclusion – Another filler transition that may unnecessarily signal the end of a discussion.
  42. Testament – Often used to prove or demonstrate, but has become clichéd.
  43. Unleash – Implies releasing potential or power, but overuse has weakened its effect.
  44. Trenches – Metaphorically used to describe deep involvement, now seen as overdone.
  45. Distilled – Suggests purification or simplification, but often used vaguely.
  46. Spearhead – Used to denote leadership or initiative, but has become buzzwordy.
  47. Revolution – Promises dramatic change, but is overused and often hyperbolic.
  48. Landscape – Used to describe the overview of a field or area, but now feels worn out.
  49. Imagine this – An attempt to draw the reader in, but can feel contrived.
  50. Master – Suggests a high level of skill or understanding, but often used imprecisely.
  51. Treasure trove – A clichéd way to describe a rich source or collection.
  52. Masterclass – Intended to denote top-tier instruction, but has become a marketing cliché.
  53. Optimize – Common in business and tech to describe making things as effective as possible, now overused.
  54. Pioneering – Meant to convey innovation or trailblazing, but diluted by frequent use.
  55. Groundbreaking – Similar to “pioneering,” its overuse has lessened its impact.
  56. Cutting-edge – Used to describe the forefront of technology or ideas, now a cliché.
  57. Impactful – Intended to denote significant effect or influence, but overuse has rendered it vague.
  58. Thought leader – Aimed to describe influential individuals, but has become a self-applied and diluted term.
  59. Value-add – Used to highlight additional benefits, but has become a buzzword with diluted meaning.
  60. Big data – Meant to describe vast data sets that can reveal patterns, trends, and associations, but now often used as a buzzword irrespective of the scale or complexity of the data analysis.
  61. Thought leadership – Intended to denote influential and innovative ideas, but overuse has made it a nebulous term often devoid of evidence of leading or innovative thinking. (By the way, if you see “thought leader” on a LinkedIn profile, better run!)
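For readers who like to experiment, the word-based tells above can be turned into a quick screening script. The sketch below is purely illustrative and not part of any study cited here; the `TELL_WORDS` list is a small hypothetical sample drawn from the list above, and the function simply counts whole-phrase matches.

```python
import re
from collections import Counter

# Hypothetical sample of the tell words listed above (not exhaustive).
TELL_WORDS = [
    "synergy", "leverage", "delve", "game-changer", "holistic",
    "paradigm shift", "deep dive", "unlock", "cutting-edge", "empower",
]

def tell_word_counts(text: str) -> Counter:
    """Count case-insensitive, whole-phrase occurrences of each tell word."""
    lowered = text.lower()
    counts = Counter()
    for phrase in TELL_WORDS:
        # Word boundaries prevent partial matches, e.g. "delve" in "delved".
        pattern = r"\b" + re.escape(phrase) + r"\b"
        n = len(re.findall(pattern, lowered))
        if n:
            counts[phrase] = n
    return counts

sample = ("Let's delve into this holistic paradigm shift and "
          "leverage synergy to unlock real value.")
print(tell_word_counts(sample))
```

A count of zero proves nothing, and a high count is only a flag for closer reading; as the conclusion below stresses, these are weak indicators, never an acid test.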

Looking Beyond the AI Favored Words to AI Typical Writing Styles

  1. Generalized Statements: AI-generated content often leans on generalized or vague statements rather than specific, detailed examples. When you read an AI-written article, notice the details, or lack thereof. Fake LLM writings are often generalized and overly formulaic. They string many words together in an appearance of learned comprehension, but in actuality, they say little. I call it more fluff than substance. In other words, they talk like a typical politician, with lots of words, but little meaning. This gets even worse when the AI does not have access to up-to-date information, or is generating content on a topic on which its data is limited, like lots of “inside baseball” talk. In my field, that includes the kinds of things that lawyers and judges privately say to each other, but are seldom, if ever, written down, much less published. Just go to a bar at a Bar convention. This limited detailed knowledge makes it easy for human experts to detect fake writing in their own area, but outside their field, not so much.
  2. Neutral and Diplomatic Language: AI often defaults to very neutral and diplomatic language, sometimes excessively so, in an effort to avoid making controversial or unsubstantiated claims. “WTF” is not a phrase, or even initials they are likely to use. The software manufacturers try to filter out the profanities found in many parts of the internet, which is typically a good thing. Still, the result is often unnaturally squeaky clean language and style that make friggen fake language easy to spot.
  3. Excessive Politeness: Especially in responses or interactive content, LLMs use phrases that seem overly polite or formal, such as frequent use of “Thank you for your question,” or “I’m sorry, but I’m not able to.” Miss Manners in writing is a dead giveaway. Plus, it’s so damn annoying to real people, thank you very much!
  4. No Real Humor or Wit. The kind of snarky, subtle, almost funny remarks that permeate my writings seem to be beyond the grasp of AI. Much like the AI android “Data” in Star Trek, LLMs just don’t grok humor, and their jokes usually suck.
  5. Lack of Emotion. Kind of obvious, but robot writing often has a tell of being robotic, overly structured, intellectual and emotionless. Personality and emotion seem irrelevant to these writing algorithms, although we are seeing new types of LLMs that specialize in emotion, so be careful, they can be charming and seductive too. See: Code of Ethics for “Empathetic” Generative AI.
  6. Too Perfect Spelling. Real humans make typographical errors, and, even with spell checkers, there are often a few mistakes that slip by. My blog is a good example of this. A typo in a text is a good indicator that it was human written. Of course, this again is just a tell. We writers always strive to be perfect, and smart computers can help us with that.
  7. Lack of Personal Experience: AI-generated texts often lack personal anecdotes, experiences, or strong opinionated statements, unless specifically programmed to simulate such content. This reminds me of lectures I have sat through by self-proclaimed e-discovery experts who have never personally done a document review. I could go on and on with examples, if I wanted, because I have a lifetime of experience and my learning is hands-on, not just academic. Remember, no AI has personal experience of anything. It is all secondhand book learning, albeit millions of books.
  8. Lack of Opinions: AIs are often trained by their makers not to be opinionated. After all, opinions might possibly offend someone. Can’t have that. That is one reason AI writing is often bland, which is another tell. Real humans have opinions, lots of them, many wrong. But that’s one reason we are such a charming species and our writing is so much more enjoyable to read. AI talk is not only general and vague, it is often stodgy, overly polite and politically correct. Also, try getting an AI to give you its legal opinion. Thank goodness for lawyers like me, they refuse and say instead to speak to a real lawyer. There are, of course, many ways to trick them into giving you a legal opinion anyway, but that’s another story, one that you will have to retain me to tell you.
  9. AI Avoids Slang and Uncommon Idioms. Since GPTs are trained on public data, they use the most common words, the ones they have literally read a billion times. I may be tilting at windmills, but in my experience GPTs are as blind as a bat to many idioms, phrases and words. We are talking about speech that is too regional, subcultural, seldom used, or still too new, the latest vocab. Unfortunately, I’ve checked, matey, and GPTs do speak fluent pirate. Arr, there be many of us pirates about.
  10. Repetitive language: AI written content may repeat words, phrases, or sentences. Yeah man, like over and over. In fake writings you may also see the same sentence structure repeated in different paragraphs. They repeat themselves and don’t seem to care, over and over. That is one reason fake AI talk can be so boring. I mean, they just keep saying things to death. Enough already!
  11. Lack of creativity: AI writing is often too predictable. Well, duh, of course. LLM intelligence does come from predicting the next word, you know. The Top_p or creativity settings may be too low. Again, the sooo boring speech results. Humans are typically more enjoyable to read. That takes us back to know-it-all Data on Star Trek, who does not understand humor. Subtle humor is just not something they grok. Maybe someday.
  12. Error Patterns in Complex Constructs: AI may struggle with complex language constructs or highly nuanced topics, such as law (which is one reason they wisely refrain from giving out legal opinions). This mental challenge when dealing with complex ideas will, in my opinion, be corrected soon. But in the meantime this limitation in intelligence leads to errors in topic relevance and basic coherence. The LLMs now often may sound like a 1L trying to explain the elements of contract when cold-called in Contracts One, or like a newbie lawyer trying to argue a subject to a judge when they have only read the Gilbert’s. Note this last run-on sentence could not possibly have been written by an LLM.
  13. Overuse of Transitional Phrases: AI tends to overuse transitional phrases such as “Furthermore,” “Moreover,” “In addition,” and “On the other hand,” in an attempt to create cohesion. They seem to lack creativity in that regard, or fear simply doing without transitions. In any event, they often screw up transitions.
  14. Hedging Language: AI often uses hedging language like “It might be the case,” “It is often thought,” or “One could argue,” to avoid making definitive statements that could be incorrect. Whether that is appropriate or not all depends. Just ask any lawyer. We and our insurers frigging love qualifiers.
  15. Synonym Swapping: To avoid repetition, AI might use synonyms excessively or in slightly unusual contexts. This can lead to awkward phrasing or slightly off usage of certain terms. Comes from their being so damned repetitive and too general. The result is often the use of unnecessary “fancy words,” typical of a linguistic show-off.
  16. Repetitive Qualifiers: AI might use repetitive qualifiers like “very,” “extremely,” “significantly,” more than a typical human writer would, to emphasize points. I am very guilty of that myself.
  17. Standardized Introductions and Conclusions: AI-generated content might start with very formulaic introductions and end with standardized conclusions, often summarizing content in a predictable manner (e.g., “In conclusion,” followed by a restatement of key points). This fondness for opening and closing statements is much like a lawyer at trial or in legal memoranda, but AI does a poor job of it. I always end my blogs with a conclusion, and I like to think that most of them do not read anything like the fake talk of an LLM. AI writing, with catchy rather than merely formulaic conclusions, will improve in the future, of that I have no doubt. But in the meantime, this is yet another trait that is a dead giveaway.
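Several of the stylistic tells above, particularly the overused transitions, hedges, and repetitive qualifiers of items 13, 14, and 16, lend themselves to a crude automated check. The following Python sketch is purely illustrative: the phrase list and the `tell_score` function are my own hypothetical constructions, not any established detector, and a high score is only an indicator, never proof.

```python
import re

# Hypothetical, illustrative phrase list drawn from the tells above;
# any real detector would need a far larger and better-tested corpus.
TELL_PHRASES = [
    "furthermore", "moreover", "in addition", "on the other hand",
    "it might be the case", "one could argue", "in conclusion",
    "very", "extremely", "significantly",
]

def tell_score(text: str) -> float:
    """Crude heuristic: tell-phrase hits per 100 words.

    As cautioned in the article, this is only one indicator, not an
    acid test; a high score suggests, but never proves, AI writing.
    """
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    body = " ".join(words)
    # Word-boundary matching so "very" does not match inside "every".
    hits = sum(len(re.findall(r"\b%s\b" % re.escape(p), body))
               for p in TELL_PHRASES)
    return 100.0 * hits / len(words)

human = "The cat sat on the mat and watched the rain."
robot = ("Furthermore, it is extremely important. Moreover, one could "
         "argue that, in conclusion, the points are very significant.")
print(tell_score(human) < tell_score(robot))  # prints True: the robotic sample scores higher
```

Used as one signal among many, alongside human judgment about the writer's history, such a score could help flag text for closer reading, no more.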

Conclusion

These tell words and styles must all be taken with a grain of salt. They are only indicators. None is dispositive, but, taken all together, they can help you determine whether a writing is real or fake. All of these indicators should be used as part of a broader assessment, rather than as a one-and-done acid test. As AI technology evolves, the patterns and tells will likely become less noticeable. For this reason, I predict that oral exams will become more popular. Teachers at all levels will be forced to revert to the medieval guild tradition of oral defense of knowledge and skills, as has always been required of PhD candidates.

The obvious should also be stated here. For most people, the greatest tell of all of fake writing is how good it is compared to the writer's past efforts, education, and experience. A sudden improvement in a student's writing is a strong tell of plagiarism. Indeed, aside from the few humans who write professionally, most flesh-and-blood intelligences are relatively poor writers compared to LLMs. Look at the history and qualifications of the writer. It may make it obvious that their writing is too good to be true.

Still, any objective observer, no matter how much they may dislike AI, would have to concede that GPTs can already write better than most people; most, but not all. GPTs are still, at best, mere B-grade writers, often far worse, for the reasons listed here. They can be vacuous blowhards, mere parrots, pushing watered-down Gilberts. They can be all style over substance, not to mention inaccurate and sometimes hallucinatory. They are not even close to the best human writers in their fields of special expertise, legal or otherwise. Yes, I am expressing a strong opinion here that might offend some. Don’t like it? Go back to reading the bland crap of stochastic parrots!

Still, despite these protestations, I love these odd birds for their great potential as hybrid tools. They should be used to help us to speak and think better, and make better decisions. But Stochastic Parrots should not dictate what we say or do, especially those of us engaged in critical fields like law and medicine. It is one thing to use LLMs for writing fiction, quite another to make life and death decisions in courts and hospitals. AI should be skillfully used by professionals as a consulting tool to assist, not replace, good human judgement and empathy.

Ralph Losey Copyright 2024 — All Rights Reserved


Navigating the High Seas of AI: Ethical Dilemmas in the Age of Stochastic Parrots

April 3, 2024

Large Language Model generative AIs are well described metaphorically as “stochastic parrots.” In fact, the American Dialect Society selected stochastic parrot as its AI word of the year for 2023, just ahead of the runners-up “ChatGPT,” “hallucination,” “LLM,” and “prompt engineer.” These genius stochastic parrots can be of significant value to all legal professionals, even those who don’t like pirates. You may want one on your shoulder soon, or at least in your computer and phone. But, as you embrace them, you should know that these parrots can bite. You should be aware of the bias and fairness problems inherent in these new technical systems.

The ethical issues were raised in my last blog and video, Stochastic Parrots: the hidden bias of large language model AI. In the video blog, an avatar that looks something like me with a parrot on his shoulder quoted the famous article on LLM AI bias and briefly discussed how the prejudices are baked into the training data. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT ’21, 3/1/21), by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. In this follow-up blog I dig a little deeper into the article and the controversies surrounding it.

Article Co-Author Timnit Gebru

First of all, it is interesting to note the internet rumor, based on a few tweets, concerning one of the lead authors of the Stochastic Parrots article, Timnit Gebru. She was a well-known leader of Google’s ethical AI team at the time she co-wrote it. She was allegedly forced to leave Google because its upper management didn’t like the article. See: Karen Hao, We read the paper that forced Timnit Gebru out of Google (MIT Technology Review, 12/04/2020); Shirin Ghaffary, The controversy behind a star Google AI researcher’s departure (Vox, 12/09/20). According to Karen Hao’s article, more than 1,400 Google staff members and 1,900 other supporters signed a letter of protest over the alleged firing of Timnit Gebru as an act of research censorship. The rumor is that Google tried to stop publication of the article, but obviously the article was in fact published on March 1, 2021.

According to the MIT Technology Review article, Google did not like all four points of criticism of LLMs that were made in the Parrot article:

  1. Environmental and financial costs. This pertains to the vast amounts of computing needed to create the LLMs, the energy costs, and the resulting carbon footprint.
  2. Massive data, inscrutable models. The training data mainly comes from the internet and so contains racist, sexist, and otherwise abusive language. Moreover, the vast amount of data used makes the LLMs hard to audit and eliminate embedded biases.
  3. Research opportunity costs. Basically complaining that too much money was spent on LLMs, and not enough on other types of AI. Note this complaint was made before the unexpected LLM breakthroughs in 2022 and 2023.
  4. Illusions of meaning. In the words of Karen Hao, the Parrot article complained that the problem with LLM models is that they are “so good at mimicking real human language, it’s easy to use them to fool people.” 

Moreover, as the Hao article in the MIT Technology Review points out, Google’s then head of AI, the well-known scientist Jeff Dean, claimed that the research behind the article “didn’t meet our bar” and “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy efficient and mitigate problems of bias. Maybe they didn’t know?

Criticisms of the Stochastic Parrot Article

The main article I found criticizing Stochastic Parrots also has a weird name: “The Slodderwetenschap (Sloppy Science) of Stochastic Parrots – A Plea for Science to NOT take the Route Advocated by Gebru and Bender” (2021). The author, Michael Lissack, challenges the ethical “woke” stance of the original “Parrot Paper” and suggests a reevaluation of its argumentation. It should be noted that Gebru has accused Lissack of stalking her and her colleagues. See: Claire Goforth, Men in tech are harassing Black female computer scientist after her Google ouster (Daily Dot, 2/5/21) (Michael Lissack has tweeted about Timnit Gebru thousands of times). By the way, “slodderwetenschap” is Dutch for sloppy science.

Here are the three criticisms that Lissack makes of Stochastic Parrots:

What is missing in the Parrot Paper are three critical elements: 1) acknowledgment that it is a position paper/advocacy piece rather than research, 2) explicit articulation of the critical presuppositions, and 3) explicit consideration of cost/benefit trade-offs rather than a mere recitation of potential “harms” as if benefits did not matter. To leave out these three elements is not good practice for either science or research.

Lissack, The Slodderwetenschap (Sloppy Science) of Stochastic Parrots, abstract.

Others have spoken in favor of Lissack’s criticisms of Stochastic Parrots, including, most notably, Pedro Domingos. Supra, Men in Tech (includes a collection of Domingos’ tweets).

It should also be noted that Lissack’s article includes several positive comments about the Stochastic Parrots work:

The very topic of the Parrot Paper is an ethics question: does the current focus on “language models” of an ever-increasing size in the AI/NLP community need a grounding against potential questions of harm, unintended consequences, and “is bigger really better?” The authors thereby raise important issues that the community itself might use as a basis for self-examination. To the extent that the authors of the Parrot Paper succeed in getting the community to pay more attention to these issues, they will be performing a public service. . . .

The Parrot Paper correctly identifies an “elephant in the room” for the MI/ML/AI/NLP community: the very basis by which these large language models are created and implemented can be seen as multilayer neural network-based black boxes – the input is observable, the programming algorithm readable, the output observable, but HOW the algorithm inside that black box produces the output is no articulable in terms humans can comprehend. [10] What we know is some form of “it works.” The Parrot Paper authors prompt readers to examine what is meant by “it works.” Again, a valuable public service is being performed by surfacing that question. . . .

Most importantly, in my view, the Parrot Paper authors remind readers that potential harm lies in both the careless use/abuse of these language models and in the manner by which the outputs of those models are presented to and perceived by the general public. They quote Prabhu and Birhane echoing Ruha Benjamin: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.” [PP lines 565-567, 11, 12] The danger they cite is quite real. When “users” are unaware of the limitations of the models and their outputs, it is all too easy to confuse seeming coherence and exactness for verisimilitude. Indeed, Dr. Gebru first came to public attention highlighting similar dangers with respect to facial recognition software (a danger which remains, unfortunately, with us [13, 14].

The Slodderwetenschap (Sloppy Science) of Stochastic Parrots at pages 2-3.

Lissack’s main objection appears to be the argumentative nature of what the article presents as science, and the many subjective opinions underlying the Parrot article. He argues that the paper itself is “ethically flawed.”

Talking Stochastic Parrots Have No Understanding

Artificial intelligences like ChatGPT4 may sound like they know what they are talking about, but they don’t. There is no understanding at all in the human sense; it is all just probability calculations producing coherent speech. No self-awareness, no sense of space and time, no feelings, no senses (yet), and no intuition – just math.
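To see how “just math” can still produce coherent-sounding speech, consider a toy bigram model in Python. This is a hypothetical, radically simplified stand-in for what a transformer LLM actually does, but the principle is the same: the next word is sampled from counted probabilities, with no understanding anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM trains on trillions of words,
# but the mechanics of "predict the next word" are the same in spirit.
corpus = "polly wants a cracker polly wants a treat".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed
    `prev` in the training text -- probability, not meaning."""
    options = counts[prev]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

print(next_word("wants"))  # prints "a": the only word ever seen after "wants"
```

The model will dutifully complete “polly wants a …” with “cracker” or “treat,” yet it has no idea what a parrot, a want, or a cracker is. Scale that idea up enormously and you get fluent speech with nobody home.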

It is important to make a clear distinction between human cognitive processes, which are deeply linked to and arise out of bodily experiences and the external world, and computational models that lack any real-world, experiential basis. As lawyers we must recognize the limits of mere machine tools. We cannot over-delegate to them just because they sound good, especially when acting as legal counselors, judges, and mediators. See e.g. Yann LeCun and Jacob Browning, AI And The Limits Of Language (Noema, 8/23/22) (“An artificial intelligence system trained on words and sentences alone will never approximate human understanding.”); Valmeekam, et al., On the Planning Abilities of Large Language Models (arXiv, 2/13/23) (finding poor planning capabilities); Dissociating language and thought in large language models (arXiv, 3/23/24) (finding weakness on functional competence tasks).

Getting back to the metaphor: a parrot may not understand the words it speaks, but it at least has some self-awareness and consciousness. An AI has none. As one thoughtful Canadian writer put it:

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people” who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

Bart Hawkins Kreps, Beware of WEIRD Stochastic Parrots (Resilience, 2/15/24).

For interesting background, see The New Yorker article of 11/15/2023, by Angie Wang, Is My Toddler a Stochastic Parrot? Also see: Scientific research article on the lack of diversity in internet model training, Which Humans? by Mohammad Atari, et al. (arXiv, 9/23/23) (“Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?”“).

I also suggest you look at the often-cited technical blog post by the great contemporary mathematician Stephen Wolfram, What Is ChatGPT Doing … and Why Does It Work? As Wolfram states in his conclusion, ChatGPT is “just saying things that ‘sound right’ based on what things ‘sounded like’ in its training material.” Yes, it sounds good, but nobody’s home; there is no real meaning. That is ultimately why the fears of AI replacing human employment are way overblown. It is also why LLM-based plagiarism is usually easy to recognize, especially by experts in the field under discussion. The chatbot writing is obvious from its style-over-substance language, which is high on fluff and stereotypical phrasing, and from the overuse of certain “tell” words. More on this in my next blog on how to spot stochastic parrots.

Personally, I’m already sick of the bland, low-meaning, fluffy news and analysis writing now flooding the internet, including legal writing. It is almost as bad as ChatGPT writing for political propaganda and sales. It is not only biased and riddled with errors, it is mediocre and boring.

Conclusion

Everyone agrees that LLM AIs will, if left unchecked, reproduce biases and inaccuracies contained in the original training data. This inevitably leads to the generation of false information – to skewed output to prompts – and that in turn can lead to poor human decisions made in reliance on biased output. This can be disastrous in sensitive applications like law and medicine.

Everyone also agrees that this problem requires AI software manufacturers to model designs to curb these biases, and to monitor and test to ensure the effectiveness and trustworthiness of LLMs.

The disagreement seems to be in the evaluation of the severity of the problem and the priority that should be given to its mitigation. There is also disagreement as to the degree of success achieved to date in correcting this problem, and whether the problem can even be fixed at all.

My view is that these issues can be significantly reduced, but I doubt that LLMs will ever be perfect and entirely free of all bias, even though they may become better than the average human. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’.

Moreover, I believe that users of LLMs, especially lawyers, judges and other legal professionals, can be sensitized to these bias issues. They can learn to recognize previously unconscious bias in the data and in themselves. The sensitivity to the bias issues can then help AI users to recognize and overcome these challenges. They can realize when the responses given by an AI are wrong and must be corrected.

The language of a ChatGPT may correctly echo what most people said in the past, but that does not, in itself, make it the right answer for today. As lawyers, we need the true, correct, and bias-free answers, the just and fair answers, not the most popular answers of the past. We have an ethical duty of competence to double-check the mindless speech of our stochastic parrots. We should question why Polly always wants a cracker.

Ralph Losey Copyright 2024 – All Rights Reserved