From Centaurs To Cyborgs: Our evolving relationship with generative AI

April 24, 2024

Centaurs are mythological creatures with a human’s upper body and a horse’s lower body. They symbolize a union of human intellect and animal strength. In AI technology, the Centaur refers to a type of hybrid use of generative AI that combines human and AI capabilities while maintaining a clear division of labor between the two, like a centaur’s divided body. The Cyborg, by contrast, has no such clear division; the human and AI tasks are closely intertwined.

A centaur method is designed so there is one work task for the human and another for the AI. For example, creating a strategy is typically a task done by the human alone. It is a separate task for the AI to write an explanation of the strategy devised by the human. The lines between the tasks are clear and distinct, just like the dividing line between human and horse in a Centaur.

This concept is shown by the above image. It was devised by Ralph Losey and then generated by his AI ChatGPT4 model, Visual Muse. The AI had no part in devising the strategy and no part in the idea of putting the image of a Centaur here. It was also Ralph’s sole idea to have the human half appear in robotic form and to use a watercolor style of illustration. The AI’s only, separate task was to generate the image. Unfortunately, it turns out AI is not good at making Centaurs, especially ones with a robot top instead of a human head, like the following image.

It made this image after only a few tries. But the first image of the Centaur with a robot top was a struggle. I can usually generate the image I have in mind, often even better than what I first conceived, in just a few prompts. But here, with a half-robot Centaur, it took 118 attempts to generate the desired image! I tried many, many different prompts. I even used two different image generation programs, Dall-E and Midjourney. I tried 96 times with Midjourney (it generates fast) and never could get it to make a Centaur with a robot top half. But it did make quite a few funny mistakes, and a few scary ones too. Shown below are a few of the 117 AI bloopers. I note that overall Dall-E did much better than Midjourney, which never did seem to “get it.” The one Dall-E example of a blooper is bottom right, pretty close. The rest are all by Midjourney. I especially like the robot head on the butt of the sort-of robot horse. It is the bass-ackwards version of what I requested!

After 22 tries with Dall-E I finally got it to make the image I wanted.

The point of this story is that the Centaur method failed to make the Centaur. I was forced to work very closely and directly with the AI to get the image I wanted; in other words, I was forced to switch to the Cyborg method. I did not want to, but the Cyborg method was the only way I could get the AI to make a Centaur with a robotic top. Back and forth I went, 118 times. The irony is clear. But there is a deeper lesson here that emerged from the frustration, which I will come back to in the conclusion.

Background on the Centaur and Cyborg as Images of Hybrid Computer Use

The idea of using the Centaur symbol to describe an AI method is credited to chess grandmaster Garry Kasparov. He is famous in AI history for his losing battle in 1997 with IBM’s Deep Blue. Kasparov returned to competition soon after, computer in hand, with the idea that a human and a computer together could beat any computer alone. It worked, a redemption of sorts. Kasparov ended up calling this Centaur team chess, where human-machine teams play each other online. It is still actively played today. Many claim it is still played at a level beyond that of any supercomputer, although this is untested. See e.g. The Real Threat From ChatGPT Isn’t AI…It’s Centaurs (PCGamer, 2/13/23).

The use of the term Centaur was expanded and explained by Harvard Professor Soroush Saghafian in his article Effective Generative AI: The Human-Algorithm Centaur (Harvard DASH, 10/2023). He explains the hybrid relationship as one where the unique intuitive powers of humans are added to those of artificial intelligence. In a medical study his Harvard lab did with the Mayo Clinic, they analyzed the results of doctors using LLM AI in a centaur-type model. The goal was to reduce readmission risks for patients who underwent organ transplants.

We found that combining human experts’ intuition with the power of a strong machine learning algorithm through a human-algorithm centaur model can outperform both the best algorithm and the best human experts. . . .

In this article, we focus on recent advancements in Generative AI, and especially in Large Language Models (LLMs). We first present a framework that allows understanding the core characteristics of centaurs. We argue that symbiotic learning and incorporation of human intuition are two main characteristics of centaurs that distinguish them from other models in Machine Learning (ML) and AI. 

Id. at pg. 2  

The Cyborg model is slightly different in that man and machine work even more closely together. The concept of a cyborg, a mechanical man, also has its origins in ancient Greek myth: Talos, a giant bronze mechanical man built by Hephaestus, the Greek god of invention, blacksmithing and volcanoes. The Roman equivalent god was Vulcan, who was supposedly ugly, but there are no stories of his having pointy ears. You would think that techies might seize upon the name Vulcan, or Talos, to symbolize the other method of hybrid AI use, where tasks are closely connected. But they did not; they went with the much more modern term – Cyborg.

The word was first coined in 1960 (before Star Trek) by two visionary scientists who combined the root words CYBernetic and ORGanism to describe a being with both organic and biomechatronic body parts. Here is Ralph Losey’s image of a Cyborg, which, again ironically, he created quickly with a simple Centaur method in just a few tries. Obviously the internet, which trained these LLM AIs, has many more cyborg-like android images than centaurs.

More On the Cyborg Method

The Cyborg method supposedly has no clear-cut divisions between human and AI work, unlike the Centaur. Instead, Cyborg work and tasks are all closely intertwined, like a cybernetic organism. People and ChatGPTs alike usually say that the Cyborg approach involves a deep integration of AI into the human workflow. The goal is a blend where AI and human intelligence constantly interact and complement each other. In contrast to the Centaur method, the Cyborg does not distinctly separate tasks between AI and humans. For instance, in the Cyborg approach a human might start a task, and AI might refine or advance it, or vice versa. This approach is said to be particularly valuable in dynamic environments where continuous adaptation and real-time collaboration between human and AI are crucial. See e.g. Center for Centaurs and Cyborgs OpenAI GPT version (free GPT version by Community Builder that we recommend; try asking it more about Cyborgs and Centaurs). Also see: Emily Reigart, A Cyborg and a Centaur Walk Into an Office (NAB Amplify, 9/24/23); Ethan Mollick, Centaurs and Cyborgs on the Jagged Frontier: I think we have an answer on whether AIs will reshape work (One Useful Thing, 9/16/23).

Ethan Mollick is a Wharton Professor who is heavily involved with hands-on AI research in the work environment. To quote the second to last paragraph of his article (emphasis added):

People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.

Asleep at the Wheel

Obviously, falling asleep at the wheel is what we have seen in the hallucinating AI fake citations cases. Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. June 22, 2023) (first in a growing list of sanctioned attorney cases). Also see: Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). But see: United States of America v. Michael Cohen (SDNY, 3/20/24) (Cohen’s attorney not sanctioned. “His citation to non-existent cases is embarrassing and certainly negligent, perhaps even grossly negligent. But the Court cannot find that it was done in bad faith.”)

These lawyers were not only asleep at the wheel, they had no idea what they were driving, nor that they needed a driving lesson. It is not surprising they crashed and burned. It is like the first automobile drivers who would instinctively pull back on the steering wheel in an emergency to get their horses to stop. That may be the legal profession’s instinct as well, to try to stop AI, to pull back from the future. But it is shortsighted, at best. The only viable solution is training and, perhaps, licensing of some kind. These horseless buggies can be dangerous.

Skilled legal professionals who have studied prompt engineering, either methodically or through a longer trial and error process, write prompts that lead to fewer mistakes. Strategic use of prompts can significantly reduce the number and type of mistakes. Still, surprise errors by generative AI cannot be eliminated altogether. Just look at the trouble I had generating a half robot Centaur. LLM language and image generators are masters of surprise. Still, with hybrid prompting skills the surprise results typically bring more delight than fright.

That was certainly the case in a recent study by Professor Ethan Mollick and several others on the impact of AI hybrid work. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013). I will write a full article on this soon. As a quick summary, researchers from multiple schools collaborated with the Boston Consulting Group and found a surprisingly high increase in productivity by consultants using AI. The study was based on controlled tests of an AI hybrid team approach to specific consulting work tasks. The results also showed that, even though the specific work tasks tested were performed much faster, quality was maintained, and for some consultants, increased significantly.

Although we do not have a formal study yet to prove this, it is the supposition of most everyone in the legal profession now using AI that lawyers can also improve productivity and maintain quality. Of course, careful double-checking of AI work product is required to catch errors and maintain quality. This applies not only to the obvious case hallucinations, but also to what Professor Mollick called AI’s tendency to be “homogenous and same-y in aggregate.” Also see: Losey, Stochastic Parrots: How to tell if something was written by an AI or a human? (common “tell” words used way too often by generative AIs). Lawyers who use AI attentively, without over-delegation to AI, can maintain high-quality work, meet all of their ethical duties, and still increase productivity.

The hybrid approaches to the use of generative AI, both Centaur and Cyborg, have been shown to significantly enhance consulting work. Many legal professionals using AI are seeing the same results in legal work. Lawyers using AI properly can significantly increase productivity and maintain quality. For most of the Boston Consulting Group consultants tested, their quality of work actually went up. There were, however, a few exceptional outliers whose test quality was already at the top. The AI did not make the work of these elite few any better. The same may be true of lawyers.

Transition from Centaur to Cyborg

Experience shows that lawyers who do not use AI properly, typically by over-delegation and inadequate supervision, may increase productivity, but do so at the price of increased negligent output. That is too high a price. Moreover, legal ethics, including Model Rule 1.1, requires competence. I conclude, along with most everyone in the legal profession, that stopping the use of AI by lawyers is futile, but at the same time, we should not rush into negligent use of this powerful tool. Lawyers should go slow and delegate to AI on a very limited basis at first. That is the Centaur approach. Start slowly and adopt AI in a piecemeal fashion, but do begin now and avoid death by committee, or as lawyers like to call it, paralysis by analysis.

Then, as your experience and competence grow, slowly increase your use of generative AI and experiment with applying it to more and more tasks. You will start to become more Cyborg-like. Soon enough you will have the AI competitive edge that so many outside experts over-promise.

Vendors and outside experts can be a big help in implementing generative AI, but remember, this is your legal work. For software, look at the subscription license terms carefully. Note any gaps between what marketing promises and the superseding agreements deliver. Pick and choose your generative AI software applications carefully. Use the same care in picking the tasks to begin to implement official AI usage. You know your practice and capabilities better than any outside expert offering cookie-cutter solutions.

Use the same care and intelligence in selecting the best, most qualified people in your firm or group to train and investigate possible purchases. Here the super-nerds should rule, not the powerful personalities, nor even necessarily the best attorneys. New skill sets will be needed. Look for the fast learners and the AI enthusiasts. Start soon, within the next few months.

Conclusion

According to Wharton Professor Ethan Mollick, secret use and false claims of personal work product have already begun in many large corporations. In his YouTube video, at 53:30, he shares a funny story about a friend at a big bank. She secretly uses AI all the time to do her work. Ironically, she was the person selected to write a policy prohibiting the use of AI. She did as requested, but did not want to be bothered to do it herself, so she directed a GPT on her personal phone to do it. She sent the GPT-written policy prohibiting the use of GPTs to her corporate email account and turned it in. The clueless boss was happy, probably impressed by how well it was written. Mollick claims that secret, unauthorized use of AI in big corporations is widespread.

This reminds me of the time I personally heard the GC of a big national bank, now defunct, proudly say that he was going to ban the use of email by his law department. We all smiled, but did not say no to mister big. After he left, we LOL’ed about the dinosaur for weeks. Decades later I still remember it well.

So do not be foolish or left behind. Proceed expeditiously, but carefully. Then you will know for yourself, from first-hand experience, the opportunities and the dangers to look out for. And remember, no matter what any expert may suggest to the contrary, you must always supervise the legal work done in your name.

There is a learning curve in the careful, self-knowledge approach, but eventually the productivity will kick in, and with no loss of quality, nor embarrassing public mistakes. For most professionals, there should also be an increase in quality, not just quantity or speed of performance. In some areas of practice, there may be both a substantial improvement in productivity and quality. It all depends on the particular tasks and the circumstances of each project. Lawyers, like life, are complex and diverse with ever changing environments and facts.

My image generation failure is a good example. I expected a Centaur-like delegation to AI would result in a good image of a Centaur with a robotic top half. Maybe I would need to make a few adjustments and tries, but I never would have guessed I would have to make 118 attempts before I got it right. My efforts with Visual Muse and Midjourney are typically full of pleasant surprises, with only a few frustrating failures. (Although the failure images are sometimes quite funny.) So I was somewhat surprised to have to spend an hour to bring my desired cyber Centaur to life. Somewhat, but not totally, surprised. I know from experience that this just happens sometimes with generative AI. It is the nature of the beast. Some uncertainty is a certainty.

As is often the case, the hardship did lead to a new insight into the relationship between the two types of hybrid AI use — Centaur and Cyborg. I realized they are not a duality, but more of a skill-set evolution. They have different timings and purposes, and they require different prompting skill levels. On a learning-curve basis, we all start as Centaurs. With experience we slowly become more Cyborg-like. We can step in with close Cyborg processes when the Centaur approach does not work well for some reason. We can cycle in and out between the two hybrid approaches.

There is a sequential reality to first use. Our adoption of generative AI should begin slowly, like a Centaur, not a Cyborg. It should be done with detachment and separation into distinct, easy tasks. You should also start with the most boring, repetitive tasks. See e.g. Ralph Losey’s GPT model, Innovation Interviewer (work in progress, but available at the ChatGPT store).

Our mantra as beginner Centaurs should be a constant whisper of trust, but verify. Check the AI’s work, learn from its mistakes, and impose policies and procedures to guard against them. That is what good Centaurs do. But as personal and group expertise grows, the hybrid relationship will naturally grow stronger. We will work closer and closer with AI over time. It will be safe and ethical to speed up because we will learn its eccentricities, its strengths and weaknesses. We will begin to use AI in more and more work tasks. We will slowly, but surely, transform into a Cyborg work style. Still, as legal professionals, our work will be ever mindful of our duties to clients and courts.

More machine attuned than before, we will become like Cyborgs, but still remain human. We will step into a Cyborg mind-set to get the job done, but will bring our intuition, feelings and other special human qualities with us.

I agree with Ray Kurzweil that we will ultimately merge with AI, but disagree that it will come by nanobots in our blood or other physical alterations. I think it is much more likely to come from wearables, such as special glasses and AI connectivity devices. It will be more like the 2013 movie HER, which is Sam Altman’s favorite, with an AI operating system as a constant cell-phone companion (the inseparable cell phone part has already come true). It will, I predict, be more like that than the wearables shown in the Avengers movies, such as Tony Stark’s flying Iron Man suit.

But probably it will look nothing like either of those Hollywood visions. The real future has yet to be invented. It is in your hands.

Ralph Losey Copyright 2024. — All Rights Reserved


Stochastic Parrots: How to tell if something was written by an AI or a human?

April 5, 2024

There are two types of “tells” as to whether a writing is fake, just another LLM-created parrot, or whether it is real, a bona fide human creation. One is to look at the structure and style of the writing; the other is to look at the words used. This blog examines both types of GPT detection, starting with the tell words.

All of the words used here are “real,” written by Ralph Losey, whereas all of the images are “fake,” prompted into existence by Ralph using various image generation tools, including his own GPT4: Visual Muse: illustrating concepts with style. This blog is a continuation, a part three, of two other recent blogs, Stochastic Parrots: the hidden bias of large language model AI, and Navigating the High Seas of AI: Ethical Dilemmas in the Age of Stochastic Parrots.

The ‘Tell Words’ favored by Stochastic Parrots

We begin with an introductory video of a Pirate who looks sort of like Ralph talking about AI tell words. The pirate, of course, is fake, created by Ralph Losey, but all of the words were personally written by him.

The ‘Tell Words’ favored by Stochastic Parrots as told by a Pirate created by Ralph Losey. Just click to watch Matey.

Transcript of the Pirate Video on AI Tell Words

Ahoy Mates!

Did a real person write my pirate script? Or was it written by an AI? A Stochastic Parrot on me shoulder?  Aye, Matey, it happens all the time these days. Who, is telling who, what to say?  Blimey! How can you tell real writing from fake? Arr, matey, listen up.  I’ll give ye some tips on how to tell the difference!

Here are the top “tell words” favored by our phony feathered friends. It’s not foolproof, as even I have used these cliché words that AI seems to love so much. Yee be a teacher or supervisor concerned about AI plagiarism? Then you might save this list. These damn AIs, Stochastic Parrots they be called, often use these words incorrectly, or too much. Here they be. In rough order of parrot popularity.

Synergy, that be the worst of all, followed by Blockchain, which is often used inappropriately. Here are some more beauties to look out for: Leverage. Innovative. Disruptive or Disrupt. AI-driven. Pivot. Oh, I hate that one!

Here are more parrot nasties. Scale. Agile. Think outside the box. Paradigm shift. Bandwidth. Deep dive. Shiver me timbers! Are you starting to feel sick from all the vague cliches?

Aye. But There be more, many more. They all need to go to Davey Jones Locker: Ecosystem. Due diligence. Empower. Holistic. Ah, that was once a good word.

Blimey! Here’s  a really bad one. Game-changer. This makes me heave ho! Follow the link below to hear the rest!

List of “Tell Words” Indicative of AI Writing

Thanks again to the “No More Delve” GPT for helping me with this. I recommend this program. These are roughly ranked in order of misuse. This list does not purport to be complete nor based on scientific studies.

  1. Synergy – Often used to describe the potential benefits of combining efforts or entities, but frequently seen as vague.
  2. Leverage – Intended to convey using something to its maximum advantage, but often considered jargon when overused.
  3. Innovative – While it’s meant to describe something novel or original, its overuse has dulled its impact.
  4. Disruptive – Used to describe products or services that radically change an industry or technology, but now seen as clichéd.
  5. Blockchain – Specific to technology but has been overused to the point of becoming a buzzword even in irrelevant contexts.
  6. AI-driven – Meant to highlight the use of artificial intelligence, but often used without clear relevance to actual AI capabilities.
  7. Pivot – Originally meant to describe a significant strategy change, now often used for any minor adjustment.
  8. Scale – In business, it’s about growth, but its frequent use has made it a buzzword.
  9. Agile – A specific project management method that’s now broadly used to describe any flexible approach, diluting its meaning.
  10. Think outside the box – Intended to encourage creative thinking, it has become a cliché itself.
  11. Paradigm shift – Used to signify a fundamental change in approach or underlying assumptions, but now often seen as pretentious.
  12. Bandwidth – Borrowed from technology to describe personal or team capacity, its metaphorical use is now considered clichéd.
  13. Deep dive – Meant to indicate a thorough exploration, but often used unnecessarily.
  14. Ecosystem – In technology, refers to a complex network or interconnected system, but often used vaguely.
  15. Due diligence – Critical in legal contexts but used broadly and sometimes inaccurately in business.
  16. Blockchain – Bears repeating due to its pervasive use beyond relevant contexts.
  17. Empower – Intended to convey delegation or giving power to others, it’s now seen as an empty buzzword.
  18. Holistic – Meant to indicate consideration of the whole instead of just parts, but often used vaguely.
  19. Game-changer – Used to describe something that significantly alters the current scenario, but now seen as hyperbolic.
  20. Touch base – Intended as a casual way to say “let’s communicate,” but often viewed as unnecessarily jargony.
  21. Delve: Often overused to suggest a deep exploration, diminishing its impact.
  22. Journey: Used metaphorically to describe processes or experiences, becoming a cliché.
  23. Supercharge: Tends to overpromise on the impact of strategies or tools.
  24. Embrace: Frequently employed to suggest acceptance or adoption, often without specificity.
  25. Burning question: A dramatic way to highlight an issue, but overuse dilutes its urgency.
  26. Unlock: Commonly used to imply revealing or unleashing potential, becoming worn out.
  27. Roadmap: Overused in business and technology to describe plans or strategies, losing its originality.
  28. Uplevel: Buzzword suggesting improvement or upgrade, often vague.
  29. Future-proof: Used to describe strategies or technologies, but often without clear methodology.
  30. Revolutionize: Promises transformative change, but overuse has made it less meaningful.
  31. Navigate: Frequently used to describe maneuvering through challenges, becoming clichéd.
  32. Harness: Suggests utilizing resources or forces, but overused to the point of vagueness.
  33. Transform: A catch-all term for change, its impact has been diluted through overuse.
  34. Drives: Often used to denote motivation or causation, but has become a buzzword.
  35. Realm: Used to describe fields or areas of interest, but has grown to be seen as pretentious.
  36. Vibrant: A go-to adjective for lively or bright descriptions, now seen as overused.
  37. Innovation: Once meaningful, now a generic term for anything new or updated.
  38. Foster: Commonly used for encouraging development, but its impact is lessened through overuse.
  39. Elevate: Used to suggest improvement or enhancement, often without clear context.
  40. In summary: Overused transition that can be seen as unnecessary filler.
  41. In conclusion: Another filler transition that may unnecessarily signal the end of a discussion.
  42. Testament: Often used to prove or demonstrate, but has become clichéd.
  43. Unleash: Implies releasing potential or power, but overuse has weakened its effect.
  44. Trenches: Metaphorically used to describe deep involvement, now seen as overdone.
  45. Distilled: Suggests purification or simplification, but often used vaguely.
  46. Spearhead: Used to denote leadership or initiative, but has become buzzwordy.
  47. Revolution: Promises dramatic change, but is overused and often hyperbolic.
  48. Landscape: Used to describe the overview of a field or area, but now feels worn out.
  49. Imagine this: An attempt to draw the reader in, but can feel contrived.
  50. Master: Suggests a high level of skill or understanding, but often used imprecisely.
  51. Treasure trove: A clichéd way to describe a rich source or collection.
  52. Masterclass: Intended to denote top-tier instruction, but has become a marketing cliché.
  53. Optimize: Common in business and tech to describe making things as effective as possible, now overused.
  54. Pioneering: Meant to convey innovation or trailblazing, but diluted by frequent use.
  55. Groundbreaking: Similar to “pioneering,” its overuse has lessened its impact.
  56. Cutting-edge: Used to describe the forefront of technology or ideas, now a cliché.
  57. Impactful: Intended to denote significant effect or influence, but overuse has rendered it vague.
  58. Thought leader: Aimed to describe influential individuals, but has become a self-applied and diluted term.
  59. Value-add: Used to highlight additional benefits, but has become a buzzword with diluted meaning.
  60. Big data: Meant to describe vast data sets that can reveal patterns, trends, and associations, but now often used as a buzzword irrespective of the scale or complexity of the data analysis.
  61. Thought leadership: Intended to denote influential and innovative ideas, but overuse has made it a nebulous term often devoid of evidence of leading or innovative thinking. (By the way, see “thought leader” on a LinkedIn profile, better run!)
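To make the word-based tells concrete, here is a minimal Python sketch of how a reader might count tell-word hits in a passage. The word list, the matching rule, and the sample text are my own arbitrary choices for illustration, not a validated detector; a high count is only a hint, never proof.

```python
import re
from collections import Counter

# A small, illustrative subset of the "tell words" listed above.
TELL_WORDS = [
    "synergy", "leverage", "innovative", "disruptive", "blockchain",
    "paradigm shift", "deep dive", "game-changer", "delve", "unlock",
    "revolutionize", "harness", "landscape", "groundbreaking", "cutting-edge",
]

def tell_word_report(text: str) -> Counter:
    """Count how often each tell word or phrase appears in the text."""
    lowered = text.lower()
    counts = Counter()
    for phrase in TELL_WORDS:
        # \w* catches simple suffixes, e.g. "delve" also matches "delved" and "delves".
        hits = re.findall(r"\b" + re.escape(phrase) + r"\w*", lowered)
        counts[phrase] = len(hits)
    return counts

if __name__ == "__main__":
    sample = (
        "In conclusion, this groundbreaking, AI-driven platform will "
        "revolutionize the legal landscape and unlock true synergy."
    )
    flagged = {word: n for word, n in tell_word_report(sample).items() if n > 0}
    print(flagged)
    print(sum(flagged.values()), "tell-word hits in", len(sample.split()), "words")
```

Run over a suspect passage, a cluster of hits in a single short paragraph is the kind of signal the pirate warns about; zero hits proves nothing at all.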

Looking Beyond the AI-Favored Words to Typical AI Writing Styles

  1. Generalized Statements: AI-generated content often leans on generalized or vague statements rather than specific, detailed examples. When you read an AI-written article, notice the details, or lack thereof. Fake LLM writings are often generalized and overly formulaic. They string many words together in an appearance of learned comprehension, but in actuality, they say little. I call it more fluff than substance. In other words, they talk like a typical politician, with lots of words, but little meaning. This gets even worse when the AI does not have access to up-to-date information, or is generating content on a topic on which its data is limited, like lots of “inside baseball” talk. In my field, that includes the kinds of things that lawyers and judges privately say to each other, but are seldom, if ever, written down, much less published. Just go to a bar at a Bar convention. This limited detailed knowledge makes it easy for human experts in a field to detect fake writing in their own area, but outside their field, not so much.
  2. Neutral and Diplomatic Language: AI often defaults to very neutral and diplomatic language, sometimes excessively so, in an effort to avoid making controversial or unsubstantiated claims. “WTF” is not a phrase, or even initials they are likely to use. The software manufacturers try to filter out the profanities found in many parts of the internet, which is typically a good thing. Still, the result is often unnaturally squeaky clean language and style that make friggen fake language easy to spot.
  3. Excessive Politeness: Especially in responses or interactive content, LLMs use phrases that seem overly polite or formal, such as frequent use of “Thank you for your question,” or “I’m sorry, but I’m not able to.” Miss Manners in writing is a dead giveaway. Plus, it’s so damn annoying to real people, thank you very much!
  4. No Real Humor or Wit. The kind of snarky, subtle, almost funny remarks that permeate my writings seem to be beyond the grasp of AI. Much like the AI android “Data” in Star Trek, LLMs just don’t grok humor and their jokes usually suck.
  5. Lack of Emotion. Kind of obvious, but robot writing often has a tell of being robotic, overly structured, intellectual and emotionless. Personality and emotion seem irrelevant to these writing algorithms, although we are seeing new types of LLMs that specialize in emotion, so be careful, they can be charming and seductive too. See: Code of Ethics for “Empathetic” Generative AI.
  6. Too Perfect Spelling. Real humans make typographical errors, and, even with spell checkers, there are often a few mistakes that slip by. My blog is a good example of this. A typo in a text is a good indicator that it was human written. Of course, this again is just a tell. We writers always strive to be perfect and smart computers can help us with that.
  7. Lack of Personal Experience: AI-generated texts often lack personal anecdotes, experiences, or strong opinionated statements, unless specifically programmed to simulate such content. This reminds me of lectures I have sat through by self-proclaimed e-discovery experts who have never personally done a document review. I could go on and on with examples, if I wanted, because I have a lifetime of experience and my learning is hands-on, not just academic. Remember, no AI has personal experience of anything. It is all second-hand book learning, albeit millions of books.
  8. Lack of Opinions: AIs are often trained by their makers not to be opinionated. After all, opinions might possibly offend someone. Can’t have that. That is one reason AI writing is often bland, which is another tell. Real humans have opinions, lots of them, many wrong. But that’s one reason we are such a charming species and our writing is so much more enjoyable to read. AI talk is not only general and vague, it is often stodgy, overly polite and politically correct. Also, try getting an AI to give you its legal opinion. Thank goodness for lawyers like me, they refuse and say instead to speak to a real lawyer. There are, of course, many ways to trick them into giving you a legal opinion anyway, but that’s another story, one that you will have to retain me to tell you.
  9. AI Avoids Slang and Uncommon Idioms. Since GPTs are trained on public data, they use the most common words, the ones they have literally read a billion times. I may be tilting at windmills, but in my experience GPTs are as blind as a bat to many idioms, phrases and words. We are talking about speech that is too regional, subcultural, seldom used, or still too new, the latest vocab. Unfortunately, I’ve checked, matey, and GPTs do speak fluent pirate. Arr, there be many of us pirates about.
  10. Repetitive language: AI written content may repeat words, phrases, or sentences. Yeah man, like over and over. In fake writings you may also see the same sentence structure repeated in different paragraphs. They repeat themselves and don’t seem to care, over and over. That is one reason fake AI talk can be so boring. I mean, they just keep saying things to death. Enough already!
  11. Lack of creativity: AI writing is often too predictable. Well, duh, of course. LLM intelligence does come from predicting the next word, you know. The Top_p or Creativity settings may be too low. Again, the sooo boring speech results. Humans are typically more enjoyable to read. That takes us back to know-it-all Data on Star Trek who does not understand humor. Subtle humor is just not something they grok. Maybe someday.
  12. Error Patterns in Complex Constructs: AI may struggle with complex language constructs or highly nuanced topics, such as law (which is one reason they wisely refrain from giving out legal opinions). This mental challenge when dealing with complex ideas will, in my opinion, be corrected soon. But in the meantime this limitation in intelligence leads to errors in things like topic relevance and basic coherence. The LLMs now often may sound like a 1L trying to explain the elements of contract when cold called in Contracts One, or like a newbie lawyer trying to argue a subject to a judge where they have only read the Gilbert’s. Note this last run-on sentence could not possibly have been written by an LLM.
  13. Overuse of Transitional Phrases: AI tends to overuse transitional phrases such as “Furthermore,” “Moreover,” “In addition,” and “On the other hand,” in an attempt to create cohesion. They seem to lack creativity in that regard, or fear simply doing without transitions. In any event, they often screw up transitions.
  14. Hedging Language: AI often uses hedging language like “It might be the case,” “It is often thought,” or “One could argue,” to avoid making definitive statements that could be incorrect. Whether this is appropriate or not depends on the context. Just ask any lawyer. We and our insurers frigging love qualifiers.
  15. Synonym Swapping: To avoid repetition, AI might use synonyms excessively or in slightly unusual contexts. This can lead to awkward phrasing or slightly off usage of certain terms. Comes from their being so damned repetitive and too general. The result is often the use of unnecessary “fancy words,” typical of a linguistic show-off.
  16. Repetitive Qualifiers: AI might use repetitive qualifiers like “very,” “extremely,” “significantly,” more than a typical human writer would, to emphasize points. I am very guilty of that myself.
  17. Standardized Introductions and Conclusions: AI-generated content might start with very formulaic introductions and end with standardized conclusions, often summarizing content in a predictable manner (e.g., “In conclusion,” followed by a restatement of key points). The fondness for opening and closing statements is much like that of a lawyer at trial or in legal memorandums, but AI does a poor job at it. I always end my blog with a conclusion and I like to think that most of my blogs do not read anything like fake talk by an LLM. AI writing with catchy, not merely formulaic, conclusions will improve in the future, of that I have no doubt. But in the meantime, this is yet another trait that is a dead giveaway.
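For readers who like to tinker, the style tells above can also be roughed out in code. The sketch below computes a few of the crude signals mentioned in the list: how “same-y” the sentence lengths are, how often stock transitions and hedges appear, and how much trigram repetition there is. The phrase lists and the interpretation of the numbers are my own illustrative assumptions; as stated above, none of these signals is dispositive on its own.

```python
import re
import statistics
from collections import Counter

# Illustrative phrase lists drawn from the styles discussed above.
TRANSITIONS = ["furthermore", "moreover", "in addition", "on the other hand",
               "in conclusion", "in summary"]
HEDGES = ["it might be the case", "it is often thought", "one could argue"]

def style_signals(text: str) -> dict:
    """Compute a few rough, non-dispositive style signals for a passage."""
    lowered = text.lower()
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", lowered)
    trigrams = Counter(zip(words, words[1:], words[2:]))
    return {
        # Low variance in sentence length suggests the "homogenous and same-y" feel.
        "sentence_length_stdev": statistics.pstdev(lengths) if lengths else 0.0,
        "transition_hits": sum(lowered.count(t) for t in TRANSITIONS),
        "hedge_hits": sum(lowered.count(h) for h in HEDGES),
        # Trigrams repeated verbatim are a crude measure of repetitive language.
        "repeated_trigrams": sum(1 for n in trigrams.values() if n > 1),
    }

if __name__ == "__main__":
    demo = ("Furthermore, this approach is important. Moreover, this approach is important. "
            "In conclusion, one could argue this approach is important.")
    print(style_signals(demo))
```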

Conclusion

These tell words and styles must all be taken with a grain of salt. They are only indicators. None are dispositive, but, taken all together, they can help you determine if a writing is real or fake. All of these indicators should be used as part of a broader assessment of real or fake, rather than as a one-and-done acid test. As AI technology evolves, the patterns and tells will likely become less noticeable. For this reason, I predict that oral exams will become more popular. Teachers on all levels will be forced to revert to the medieval guild traditions of oral defense of knowledge and skills, as has always been required of PhD candidates.

The obvious should also be stated here. For most people the greatest tell of all of fake writing is how good it is compared to past efforts, education and experience. A sudden improvement in a student’s writing is a strong tell of plagiarism. Indeed, aside from the few humans who write professionally, most flesh-and-blood intelligences are relatively poor writers as compared to LLM writers. Look at the history and qualifications of the writer. It may make it obvious that their writing is too good to be true.

Still, any objective observer, no matter how much they may dislike AI, would have to concede that GPTs can already write better than most people; most, but not all. GPTs are still, at best, mere B-grade writers, often far worse, for the reasons here listed. They can be vacuous blowhards, mere parrots, pushing watered-down Gilberts. They can be all style over substance, not to mention inaccurate and sometimes hallucinatory. They are not even close to the best humans writing in their fields of special expertise, legal or otherwise. Yes, I am expressing a strong opinion here that might offend some. Don’t like it? Go back to reading the bland crap of stochastic parrots!

Still, despite these protestations, I love these odd birds for their great potential as hybrid tools. They should be used to help us to speak and think better, and make better decisions. But Stochastic Parrots should not dictate what we say or do, especially those of us engaged in critical fields like law and medicine. It is one thing to use LLMs for writing fiction, quite another to make life and death decisions in courts and hospitals. AI should be skillfully used by professionals as a consulting tool to assist, not replace, good human judgement and empathy.

Ralph Losey Copyright 2024 — All Rights Reserved


Navigating the High Seas of AI: Ethical Dilemmas in the Age of Stochastic Parrots

April 3, 2024

Large Language Model generative AIs are well described metaphorically as “stochastic parrots.” In fact, the American Dialect Society selected stochastic parrot as its AI word of the year for 2023, just ahead of the runners-up “ChatGPT, hallucination, LLM and prompt engineer.” These genius stochastic parrots can be of significant value to all legal professionals, even those who don’t like pirates. You may want one on your shoulder soon, or at least in your computer and phone. But, as you embrace them, you should know that these parrots can bite. You should be aware of the issues of bias and fairness inherent in these new technical systems.

The ethical issues were raised in my last blog and video, Stochastic Parrots: the hidden bias of large language model AI. In the video blog an avatar, which looks something like me with a parrot on his shoulder, quoted the famous article on LLM AI bias and briefly discussed how the prejudices are baked into the training data. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT ’21, 3/1/21) by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell. In this follow-up blog I dig a little deeper into the article and the controversies surrounding it.

Article Co-Author Timnit Gebru

First of all, it is interesting to note the internet rumor, based on a few tweets, concerning one of the lead authors of the Stochastic Parrots article, Timnit Gebru. She was a well-known leader of Google’s ethical AI team at the time she co-wrote it. She was allegedly forced to leave Google because its upper management didn’t like the paper. See: Karen Hao, We read the paper that forced Timnit Gebru out of Google (MIT Technology Review, 12/04/2020); Shirin Ghaffary, The controversy behind a star Google AI researcher’s departure (Vox, 12/09/20). According to Karen Hao’s article, more than 1,400 Google staff members and 1,900 other supporters signed a letter of protest describing the alleged firing of Timnit Gebru as an act of research censorship. The rumor is that Google tried to stop publication of the article, but the article was in fact published on March 1, 2021.

According to the MIT Technology Review article, Google did not like all four points of criticism of LLMs that were made in the Parrot article:

  1. Environmental and financial costs. Pertaining to the vast amounts of computing needed to create the LLMs and the associated energy costs, the carbon footprint.
  2. Massive data, inscrutable models. The training data mainly comes from the internet and so contains racist, sexist, and otherwise abusive language. Moreover, the vast amount of data used makes the LLMs hard to audit and eliminate embedded biases.
  3. Research opportunity costs. Basically complaining that too much money was spent on LLMs, and not enough on other types of AI. Note this complaint was made before the unexpected LLM breakthroughs in 2022 and 2023.
  4. Illusions of meaning. In the words of Karen Hao, the Parrot article complained that the problem with LLM models is that they are “so good at mimicking real human language, it’s easy to use them to fool people.” 

Moreover, as the Hao article in the MIT Technology Review points out, Google’s then head of AI, the well-known scientist Jeff Dean, claimed that the research behind the article “didn’t meet our bar” and “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy efficient and mitigate problems of bias. Maybe they didn’t know?

Criticisms of the Stochastic Parrot Article

The main article I found criticizing the Stochastic Parrots paper also has a weird name: “The Slodderwetenschap (Sloppy Science) of Stochastic Parrots – A Plea for Science to NOT take the Route Advocated by Gebru and Bender” (2021). The author, Michael Lissack, challenges the ethical “woke” stance of the original “Parrot Paper” and suggests a reevaluation of its argumentation. It should be noted that Gebru has accused Lissack of stalking her and colleagues. See: Claire Goforth, Men in tech are harassing Black female computer scientist after her Google ouster (Daily Dot, 2/5/21) (Michael Lissack has tweeted about Timnit Gebru thousands of times). By the way, “slodderwetenschap” is Dutch for sloppy science.

Here are the three criticisms that Lissack makes of Stochastic Parrots:

What is missing in the Parrot Paper are three critical elements: 1) acknowledgment that it is a position paper/advocacy piece rather than research, 2) explicit articulation of the critical presuppositions, and 3) explicit consideration of cost/benefit trade-offs rather than a mere recitation of potential “harms” as if benefits did not matter. To leave out these three elements is not good practice for either science or research.

Lissack, The Slodderwetenschap (Sloppy Science) of Stochastic Parrots, abstract.

Others have spoken in favor of Lissack’s criticisms of the Stochastic Parrots, including most notably, Pedro Domingos. Supra, Men in Tech (includes collection of Domingos’ tweets).

It should also be noted that Lissack’s article includes several positive comments about the Stochastic Parrots work:

The very topic of the Parrot Paper is an ethics question: does the current focus on “language models” of an ever-increasing size in the AI/NLP community need a grounding against potential questions of harm, unintended consequences, and “is bigger really better?” The authors thereby raise important issues that the community itself might use as a basis for self-examination. To the extent that the authors of the Parrot Paper succeed in getting the community to pay more attention to these issues, they will be performing a public service. . . .

The Parrot Paper correctly identifies an “elephant in the room” for the MI/ML/AI/NLP community: the very basis by which these large language models are created and implemented can be seen as multilayer neural network-based black boxes – the input is observable, the programming algorithm readable, the output observable, but HOW the algorithm inside that black box produces the output is no articulable in terms humans can comprehend. [10] What we know is some form of “it works.” The Parrot Paper authors prompt readers to examine what is meant by “it works.” Again, a valuable public service is being performed by surfacing that question. . . .

Most importantly, in my view, the Parrot Paper authors remind readers that potential harm lies in both the careless use/abuse of these language models and in the manner by which the outputs of those models are presented to and perceived by the general public. They quote Prabhu and Birhane echoing Ruha Benjamin: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.” [PP lines 565-567, 11, 12] The danger they cite is quite real. When “users” are unaware of the limitations of the models and their outputs, it is all too easy to confuse seeming coherence and exactness for verisimilitude. Indeed, Dr. Gebru first came to public attention highlighting similar dangers with respect to facial recognition software (a danger which remains, unfortunately, with us [13, 14].

The Slodderwetenschap (Sloppy Science) of Stochastic Parrots at pages 2-3.

Lissack’s main objection appears to be the argumentative nature of what the article presents as science, and the many subjective opinions underlying the Parrot article. He argues that the paper itself is “ethically flawed.”

Talking Stochastic Parrots Have No Understanding

Artificial intelligences like ChatGPT4 may sound like they know what they are talking about, but they don’t. There is no understanding at all in the human sense; it is all just probability calculations of coherent speech. No self awareness, no sense of space and time, no feelings, no senses (yet) and no intuition – just math.
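A toy example may help show what “just probability calculations” means in practice. The sketch below invents a tiny next-token distribution (the words and probabilities are made up for illustration; a real LLM computes such a distribution over tens of thousands of tokens using a neural network) and then samples from it with the familiar temperature and top_p knobs. Nothing in the procedure involves understanding, only arithmetic over probabilities.

```python
import random

# Invented next-token probabilities for the prompt "Polly wants a ..." (illustration only).
next_token_probs = {
    "cracker": 0.70,
    "snack": 0.15,
    "nap": 0.08,
    "lawyer": 0.05,
    "volcano": 0.02,
}

def sample_next_token(probs: dict, temperature: float = 1.0, top_p: float = 1.0) -> str:
    """Pick the next token by probability alone; no meaning or understanding is involved."""
    # Temperature reshapes the distribution: low values sharpen it toward the likeliest token.
    adjusted = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(adjusted.values())
    adjusted = {tok: p / total for tok, p in adjusted.items()}
    # Top-p (nucleus) sampling keeps only the likeliest tokens whose combined mass reaches top_p.
    kept, mass = {}, 0.0
    for tok, p in sorted(adjusted.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        mass += p
        if mass >= top_p:
            break
    tokens, weights = zip(*kept.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Polly wants a", sample_next_token(next_token_probs, temperature=0.7, top_p=0.9))
```

The output usually reads as sensible, yet the program never knows what a cracker is; that, in miniature, is the stochastic parrot.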

It is important to make a clear distinction between human cognitive processes, which are deeply linked to and arise out of bodily experiences and the external world, and computational models that lack a real-world, experiential basis. As lawyers we must recognize the limits of mere machine tools. We cannot over-delegate to them just because they sound good, especially when acting as legal counselors, judges, and mediators. See e.g. Yann LeCun and Browning, AI And The Limits Of Language (Noema, 8/23/22) (“An artificial intelligence system trained on words and sentences alone will never approximate human understanding.”); Valmeekam et al., On the Planning Abilities of Large Language Models (arXiv, 2/13/23) (poor planning capabilities); Dissociating language and thought in large language models (arXiv, 3/23/24) (poor at functional competence tasks).

Getting back to the metaphor, a parrot may not understand the words it speaks, but it at least has some self-awareness and consciousness. An AI has none. As one thoughtful Canadian writer put it:

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people” who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

Bart Hawkins Kreps, Beware of WEIRD Stochastic Parrots (Resilience, 2/15/24).

For interesting background, see The New Yorker article of 11/15/2023, by Angie Wang, Is My Toddler a Stochastic Parrot? Also see: Scientific research article on the lack of diversity in internet model training, Which Humans? by Mohammad Atari, et al. (arXiv, 9/23/23) (“Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?”“).

I also suggest you look at the often-cited technical blog post by the great contemporary mathematician Stephen Wolfram, What Is ChatGPT Doing … and Why Does It Work? As Wolfram states in the conclusion, ChatGPT is “just saying things that ‘sound right’ based on what things ‘sounded like’ in its training material.” Yes, it sounds good, but nobody’s home, no real meaning. That is ultimately why the fears of AI replacing human employment are way overblown. It is also why LLM-based plagiarism is usually easy to recognize, especially by experts in the field under discussion. The Chatbot writing is obvious by its style-over-substance language, which is high on fluff and stereotypical phrasing, and its overuse of certain “tell” words. More on this in my next blog on how to spot stochastic parrots.

Personally, I’m already sick of the bland, low meaning, fluffy content news and analysis writing now flooding the internet, including legal writing. It is almost as bad as ChatGPT writing for political propaganda and sales. It is not only biased, and riddled with errors, it is mediocre and boring.

Conclusion

Everyone agrees that LLM AIs will, if left unchecked, reproduce biases and inaccuracies contained in the original training data. This inevitably leads to the generation of false information – to skewed output to prompts – and that in turn can lead to poor human decisions made in reliance on biased output. This can be disastrous in sensitive applications like law and medicine.

Everyone also agrees that this problem requires AI software manufacturers to adopt model designs that curb these biases, and to monitor and test to ensure the effectiveness and trustworthiness of LLMs.

The disagreement seems to be in the evaluation of the severity of the problem, and the priority that should be given to its mitigation. There is also disagreement as to the degree of success achieved to date in correcting this problem, and whether the problem can even be fixed at all.

My view is that these issues can be significantly reduced, but I doubt that LLMs will ever be perfect and entirely free of all bias, even though they may become better than the average human. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’.

Moreover, I believe that users of LLMs, especially lawyers, judges and other legal professionals, can be sensitized to these bias issues. They can learn to recognize previously unconscious bias in the data and in themselves. The sensitivity to the bias issues can then help AI users to recognize and overcome these challenges. They can realize when the responses given by an AI are wrong and must be corrected.

The language of a ChatGPT may correctly echo what most people in the past said, but that does not, in itself, make it the right answer for today. As lawyers we need the true, correct and bias free answers, the just and fair answers, not the most popular answers of the past. We have an ethical duty of competence to double check the mindless speech of our stochastic parrots. We should question why Polly always wants a cracker?

Ralph Losey Copyright 2024 – All Rights Reserved


AI Copyright and the Litigious Life of Harmenszoon van Rijn Rembrandt: as explained by a talking portrait of a robot

March 28, 2024
Video, AI image in style of Rembrandt, research and words by Ralph Losey, an admirer of Rembrandt who is sympathetic to his litigious life.

Here is the transcript of the five minute talk by the robot portrait. (⏱ = 0.5 second pause in speech)

Hi,

I am a robot image created by Ralph Losey, roughly in the style of Rembrandt, one of his favorite artists. I think I also look like the work of another Dutch Master, Vermeer.    My headphone is kind of like a big pearl earring?

Ralph used a variety of digital tools to make me, primarily an AI tool called Midjourney, but several others too. Ralph says they are like paint brushes and, like a typical lawyer, claims copyright.   It remains to be seen whether courts will agree with that position?

  Ralph has also created an AI tool of his own, a GPT designed to interface with the Dall-E software of OpenAI. He calls his software, Visual Muse.  And even claims copyright to that too!   

I wonder what Rembrandt would say about all of this? Unfortunately, he knew lawyers and litigation all too well.  

Rembrandt Harmenszoon van Rijn lived from 1606 to 1669.  He was a multimodal master of all of the visual media  of his day. Painting, printmaking and drawing.  He was also well known for a variety of themes and styles, including his many selfies,

Rembrandt enjoyed early success in painting and in marriage to Saskia. She was the daughter of a successful Dutch lawyer. He and Saskia lived extravagantly, at first, and he over-spent on a big house and many purchases of art. Tragically, their first three children died shortly after birth. The fourth child survived, but Saskia died within a year from tuberculosis. Rembrandt spent the rest of his life with fame and beautiful women, but no fortune. He was broke, worse than that, he was hounded by creditors and their lawyers.

Rembrandt became embroiled in a never-ending series of lawsuits a few years after his wife died. It all started from his seduction of the young woman employed in his mansion, Geertje Dircx. She was employed as a wet nurse for the child. I can easily imagine how that affair came about. Ironically, a few years later, Geertje became pregnant, and sued Rembrandt for breach of promise of marriage and sought alimony. She had good lawyers. He paid and agreed to alimony. Geertje later ended up in a special women’s prison anyway, which cost Rembrandt still more money.

Then Rembrandt began a relationship with his 23-year-old maid, Hendrickje Stoffels. His young mistress, Hendrickje, was recognized as the nude in Rembrandt’s painting, Bathsheba at Her Bath. Based on that, the Reformed Church charged her with, quote, committing the acts of a whore with Rembrandt the painter. She admitted her guilt and was banned from receiving communion. Nothing happened to Rembrandt.

Still, it was all downhill from there for Rembrandt, financially at least. He had another child with Hendrickje. More expenses, but he never married her. Ultimately Rembrandt filed for a type of voluntary bankruptcy, called a cessio bonorum, to avoid incarceration. Yes, they would jail debtors then for failure to pay, even famous artists like Rembrandt. The bankruptcy just delayed things. When he died in 1669, he had outlived his major creditors, but was still buried in a rented grave. Rented grave? Who knew such a thing even existed?

As a result of philandering and extravagant living, Rembrandt became all too familiar with lawyers, litigation and the protection and secretion of assets. ⏱⏱ His difficult financial and family situation is one cause of his prodigious output of art. He had to keep working to pay his creditors, and his lawyers! By some accounts he created 600 paintings, 400 etchings and 2,000 drawings.

No one would mistake me for a Rembrandt or Vermeer. But I wonder, am I even an original work? Can I be protected? Or can anyone steal me and do with me what they will? ⏱⏱ I certainly hope not. I would rather litigate than live like that! ⏱⏱ Wouldn’t you? ⏱⏱ 

Ralph Losey Copyright 2024 – All Rights Reserved