Stochastic Parrots: the hidden bias of large language model AI

March 25, 2024

AI video written and directed by Ralph Losey.

Article Underlying the Video. The seminal article on the dangers of relying on stochastic parrots is On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT ’21, 3/1/21), written in 2021 by a team of AI experts: Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. The article arose out of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM Digital Library).

Transcript of Video

GPTs do not think anything like we do. They just parrot back pre-existing human word patterns with no actual understanding. The words generated by a GPT in response to prompts are sometimes called the speech of a stochastic parrot!

According to the Oxford dictionary, stochastic is an adjective meaning “randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely.”

Wikipedia explains that stochastic is derived from the ancient Greek word stókhos, meaning ‘aim’ or ‘guess,’ and today refers to “the property of being well-described by a random probability distribution.”

Wikipedia also explains the meaning of a stochastic parrot.

In machine learning, the term stochastic parrot is a metaphor to describe the theory that large language models, though able to generate plausible language, do not understand the meaning of the language they process.
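
To make the “stochastic” part concrete, here is a minimal Python sketch, my own illustration and not from the video or the article, of how a language model picks its next word by weighted random sampling. The vocabulary and probabilities are invented for the example.

```python
import random

# Toy next-token distribution a model might assign after some phrase.
# These tokens and probabilities are invented for illustration only.
next_token_probs = {
    "that": 0.55,
    "the": 0.20,
    "in": 0.15,
    "otherwise": 0.10,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick the next token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Each run may differ: "randomly determined," analyzable statistically,
# but not precisely predictable -- the dictionary definition in action.
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Scaled up to a vocabulary of tens of thousands of tokens and billions of learned weights, this sampling step is essentially all the parrot is doing.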

The stochastic parrot characteristics are a source of concern when it comes to the fairness and bias of GPT speech. That is because the words the GPTs are trained on, which they parrot back to you in clever fashion, come primarily from the internet. We all know how messy and biased that source is.

In the words of one scholar, Ruha Benjamin: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.”

Keep both of your ears wide open.  Talk to the AI parrot on your shoulder, for sure, but keep your other ear alert. It is dangerous to only listen to a stochastic parrot, no matter how smart it may seem.

The subtle biases of GPTs can be an even greater danger than the more obvious problems of AI errors and hallucinations. We need to improve the diversity of the underlying training data, the curation of that data, and the Reinforcement Learning from Human Feedback (RLHF). It is not enough to just keep adding more and more data, as some contend.

This view was forcefully argued in 2021 in an article I recommend: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT ’21, 3/1/21), by AI ethics experts Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.

We need to do everything we can to make sure that AI is a tool for good, for fairness and justice,  not a tool for dictators, lies and oppression. 

Let’s keep the parrot’s advice safe and effective. For, like it or not, this parrot will be on our shoulders for many years to come! Don’t let it fool you! There’s more to life than the crackers that Polly wants!

Ralph Losey Copyright 2024. — All Rights Reserved



New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’

February 26, 2024

Extensive Turing tests by a team of respected scientists have shown that “chatbots’ behaviors tend to be more cooperative and altruistic than the median human, including being more trusting, generous, and reciprocating.”

The article quoted above and featured here is by Qiaozhu Mei, Yutong Xie, Walter Yuan, and Matthew O. Jackson, A Turing test of whether AI chatbots are behaviorally similar to humans (PNAS Research Article, February 22, 2024). This is a significant article and the full text can be found here. Hopefully, the research explained here will begin to counter the widespread fear-mongering now underway by the media and others. Many people are afraid of AI, too fearful to even try it. Although some concern is appropriate, the fear is misplaced. The new generative AIs are a lot nicer and more trustworthy than most people. As some say, they are more human than human. See More human than human: LLM-generated narratives outperform human-LLM interleaved narratives (ACM, 6/19/23).

Unlike some articles on AI studies, this paper is based on solid science

There is a lot of junk science and pseudo-scientific writing in circulation now. You have to be very careful about what you rely on. This article and the research behind it are of high caliber, which is why I recommend it.

Prof Jackson, Stanford profile photo

The senior author is Professor Matthew O. Jackson, Department of Economics, Stanford University. He was recently named a fellow of the American Association for the Advancement of Science. His website with information links can be found here. Matthew O. Jackson is the William D. Eberle Professor of Economics at Stanford University. He is also an external faculty member of the Santa Fe Institute. Professor Jackson was at Northwestern University and Caltech before joining Stanford, and received his BA from Princeton University in 1984 and PhD from Stanford in 1988.

Prof Mei, U. Mich. profile photo

The co-authors are all from the School of Information, University of Michigan, under the leadership of Professor Qiaozhu Mei. He is a Professor of Information at the School of Information and Professor of Electrical Engineering and Computer Science in the College of Engineering at the University of Michigan. He is the founding director of Michigan’s Master of Applied Data Science program. Here is Qiaozhu Mei’s personal website, with information links. Professor Mei received his PhD from the Department of Computer Science at the University of Illinois at Urbana-Champaign and his Bachelor’s degree from Peking University.

The other two authors are Yutong Xie and Walter Yuan, both PhD candidates at Michigan. The article was submitted by Professor Jackson to PNAS on August 12, 2023; accepted January 4, 2024; and reviewed by Ming Hsu, Juanjuan Meng, and Arno Riedl.

Significance and Abstract of the Article

Here is how the authors of this research article on Turing tests and AI chatbot behavior describe its significance.

As AI interacts with humans on an increasing array of tasks, it is important to understand how it behaves. Since much of AI programming is proprietary, developing methods of assessing AI by observing its behaviors is essential. We develop a Turing test to assess the behavioral and personality traits exhibited by AI. Beyond administering a personality test, we have ChatGPT variants play games that are benchmarks for assessing traits: trust, fairness, risk-aversion, altruism, and cooperation. Their behaviors fall within the distribution of behaviors of humans and exhibit patterns consistent with learning. When deviating from mean and modal human behaviors, they are more cooperative and altruistic. This is a step in developing assessments of AI as it increasingly influences human experiences.

Mei Q, Xie Y, Yuan W, Jackson M (2024) A Turing test of whether AI chatbots are behaviorally similar to humans

The authors, of course, also prepared an Abstract of the article. It provides a good overview of their experiment (emphasis added).

We administer a Turing test to AI chatbots. We examine how chatbots behave in a suite of classic behavioral games that are designed to elicit characteristics such as trust, fairness, risk-aversion, cooperation, etc., as well as how they respond to a traditional Big-5 psychological survey that measures personality traits. ChatGPT-4 exhibits behavioral and personality traits that are statistically indistinguishable from a random human from tens of thousands of human subjects from more than 50 countries. Chatbots also modify their behavior based on previous experience and contexts “as if” they were learning from the interactions and change their behavior in response to different framings of the same strategic situation. Their behaviors are often distinct from average and modal human behaviors, in which case they tend to behave on the more altruistic and cooperative end of the distribution. We estimate that they act as if they are maximizing an average of their own and partner’s payoffs.

A Turing test of whether AI chatbots are behaviorally similar to humans

Background on The Turing Test

The Turing Test was first proposed by Alan Turing in 1950 in his now-famous article, “Computing Machinery and Intelligence.” Turing’s paper considered the question, “Can machines think?” Turing said that because the words “think” and “machine” cannot be clearly defined, we should “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.” He explained the better question in terms of what he called the “Imitation Game.” The game involves three participants in isolated rooms: a computer (which is being tested), a human, and a (human) judge. The judge can chat with both the human and the computer by typing into a terminal. Both the computer and the competing human try to convince the judge that they are the human. If the judge cannot consistently tell which is which, then the computer wins the game.

By changing the question of whether a computer thinks into whether a computer can win the Imitation Game, Turing dodges the difficult, many argue impossible, philosophical problem of pre-defining the verb “to think.” Instead, the Turing Test focuses on the performance capacities that being able to think makes possible. Scientists have been playing with the Turing Test – Imitation Game ever since.

Turing Test Seventy-Five Years After Its Proposal

The scientists in the latest article go way beyond the testing of seventy-five years ago and actually administer tests assessing the AI’s behavioral tendencies and “personality.” In their words:

[W]e ask variations of ChatGPT to answer psychological survey questions and play a suite of interactive games that have become standards in assessing behavioral tendencies, and for which we have extensive human subject data. . . . Each game is designed to reveal different behavioral tendencies and traits, such as cooperation, trust, reciprocity, altruism, spite, fairness, strategic thinking, and risk aversion. The personality profile survey and the behavioral games are complementary as one measures personality traits and the other behavioral tendencies, which are distinct concepts; e.g., agreeableness is distinct from a tendency to cooperate. . . .

In line with Turing’s suggested test, we are the human interrogators who compare the ChatGPTs’ choices to the choices of tens of thousands of humans who faced the same surveys and game instructions. We say an AI passes the Turing test if its responses cannot be statistically distinguished from randomly selected human responses.

A Turing test of whether AI chatbots are behaviorally similar to humans
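
To see what “cannot be statistically distinguished” means in practice, here is a minimal sketch of one way to compare a chatbot’s game play to a pool of human play: a two-sample permutation test on dictator-game giving. All the numbers are invented for illustration; the paper’s actual procedure compares chatbot answers to randomly drawn human responses across many games and a personality survey.

```python
import random

# Hypothetical dictator-game shares (fraction of endowment given away).
# Both samples are invented for illustration, not the paper's data.
human_shares = [0.0, 0.1, 0.2, 0.5, 0.3, 0.0, 0.4, 0.5, 0.1, 0.2]
chatbot_shares = [0.5, 0.4, 0.5, 0.5, 0.4]

def permutation_p_value(a, b, trials=10_000):
    """Two-sample permutation test on the difference in mean shares."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / trials

# A high p-value means the chatbot's play cannot be distinguished from
# the human pool -- the passing criterion. A low one means it stands
# out, here on the generous side of the distribution.
print(permutation_p_value(human_shares, chatbot_shares))
```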

Results of the Turing Test on ChatGPT4

Professors Jackson and Mei, and their students, found that GPT3.5 flunked the Turing test based on their sophisticated analysis, but GPT4.0 passed it with flying colors. This finding that humans and computers were indistinguishable is not too surprising for those using ChatGPT every day, but the startling part was the finding that ChatGPT4.0 behaved better than the humans tested. So much for scary, super-smart AI. They are nicer than us; well, at least ChatGPT4 is. It seems like the best way now to tell GPT4 from humans is to measure ethical standards. To paraphrase the famous Pogo comic strip, “We have met the enemy and they are us.” For now at least, we should fear our fellow humans, not generative AIs.

Here is how Professors Jackson and Mei explained their ethical AI finding.

The behaviors are generally indistinguishable, and ChatGPT-4 actually outperforms humans on average . . . When they do differ, the chatbots’ behaviors tend to be more cooperative and altruistic than the median human, including being more trusting, generous, and reciprocating. . . .

ChatGPT’s decisions are consistent with some forms of altruism, fairness, empathy, and reciprocity rather than maximization of its personal payoff. . . . These findings are indicative of ChatGPT-4’s increased level of altruism and cooperation compared to the human player distribution.

A Turing test of whether AI chatbots are behaviorally similar to humans

So what does this all mean? Is ChatGPT4 capable of thought or not? The esteemed professors here do not completely dodge that question as Turing did. They conclude, as the data dictates, that AI is better than us, “more human than human.” Here are their words (emphasis added).

We have found that AI and human behavior are remarkably similar. Moreover, not only does AI’s behavior sit within the human subject distribution in most games and questions, but it also exhibits signs of human-like complex behavior such as learning and changes in behavior from role-playing. On the optimistic side, when AI deviates from human behavior, the deviations are in a positive direction: acting as if it is more altruistic and cooperative. This may make AI well-suited for roles necessitating negotiation, dispute resolution, or caregiving, and may fulfill the dream of producing AI that is “more human than human.” This makes them potentially valuable in sectors such as conflict resolution, customer service, and healthcare.

A Turing test of whether AI chatbots are behaviorally similar to humans, 3. Discussion.

Conclusion

This new study not only suggests that our fears of generative AI are overblown, it also suggests that generative AI is well suited for lawyer activities such as negotiation and dispute resolution. The same goes for caregivers like physicians and therapists.

This is a double plus for the legal profession. About half of all lawyers do some kind of litigation or other, and all lawyers negotiate. That is what lawyers do. We can all learn from the skills of ChatGPT4.

The new chatbots will not kill us. They are not Terminators. They will help us. They are ‘more human than human,’ which I learned is also the name of a famous heavy-metal song. With this study, and more like it, which I expect will be forthcoming soon, the public at large, including lawyers, should start to overcome their irrational fears. They should stop fearing generative AI, and start to use it. Lawyers will especially benefit from the new partnership.

Ralph Losey Copyright 2024. — All Rights Reserved


Transform Your Legal Practice with AI: A Lawyer’s Guide to Embracing the Future

January 24, 2024

In a world increasingly influenced by artificial intelligence, the legal profession stands at a crossroads. Lawyers must adopt the new AI tools and quickly learn how to use them effectively and safely. That means learning the skill technologists call “prompt engineering.” Fortunately, that just means word engineering: the art of knowing how to talk to ChatGPTs. The new generative AIs are designed to be controlled by natural language, not computer code. This is the easiest kind of engineering possible for lawyers. The precise use of language is every lawyer’s stock-in-trade, and so prompt engineering is within every lawyer’s capacity.

Wordsmith and Chat expert. Images by Ralph Losey using his GPT, Visual Muse.

Smart Computers Are Finally Here to Help Lawyers Do Their Jobs

Ever since lawyers first started using personal computers in the eighties, we’ve eagerly awaited the day when they would get smart. We were often told that computers would soon progress from mere processing units to intelligent assistants. Dreamy promises of artificial intelligence were made, but never delivered. We have been stuck for over forty years with dumb computers that can barely catch spelling errors. Finally, a breakthrough has been made. With the advent of generative AI, the long wait is over, and the dream of smart computers is becoming a reality.

Ralph in the 80s trapped in dumb computers. Image created by Ralph in 2023 using various AI image tools.

The arrival of new generative AI, ChatGPT, is something that should be greeted by all legal professionals, indeed all computer users, with relief and enthusiasm, but also with a reasonable measure of care. We are finally moving from the horse-and-buggy stage of computing to fast-moving cars, and, until you learn how to drive, they can be dangerous.

My Background and Use of AI in the Law

In 2012, I was lucky to have the opportunity to work on the landmark Da Silva Moore case. See Austin, The Da Silva Moore Case Ten Years Later (EDRM 2/23/22) and Austin, A Case Where TAR Wasn’t Required (EDRM 8/9/22). Da Silva Moore established the legality of using a special type of AI known as “active machine learning,” which is typically referred to in the law as predictive coding. I then began to specialize in this subfield of e-discovery. Thereafter, at Jackson Lewis I supervised the use of predictive coding in thousands of lawsuits across the country, and also taught and wrote about this type of AI. The emergence of truly smart generative AI in late 2022, with OpenAI’s release of GPT3.5, rekindled my enthusiasm for legal tech. The long wait for smart computers was over. I put all thoughts of retirement aside.

I have been using the new AI tools in my legal practice since late 2022 on a limited basis, and as non-billable research and self-education on a nearly full-time basis. My studies have centered on prompt engineering, the art of talking to ChatGPTs. My primary guide has been the instruction and best-practices advice provided by OpenAI. See OpenAI’s prompt engineering instruction guide. OpenAI is, of course, the company that created and first released ChatGPT to the world.

Ready or not, here comes Chat GPT. Image by Ralph Losey using Midjourney AI tool.

My research includes extensive experimentation with GPTs, including my favorite project of seeing how ChatGPT4 could perform as an appellate judge. See e.g. Circuits in Session: Analysis of the Quality of ChatGPT4 as an Appellate Court Judge. I also did research on AI-related policy and security issues, attending DefCon 31 in August 2023 and participating in the AI hackathon. See e.g. DefCon Chronicles: Sven Cattell’s AI Village, ‘Hack the Future’ Pentest and His Unique Vision of Deep Learning and Cybersecurity. Of course, you may have noticed that I make time to write extensively on generative AI and the law. Writing to share what I learn with my fellow professionals helps deepen my own understanding.

Trying to teach this is also a big help. I have recently done a few lectures, but my biggest teaching work so far has been behind the scenes. I have been “secretly” working on an online instructional program, Prompt Engineering Course for Legal Professionals, which will be based primarily on OpenAI’s prompt engineering instruction guide. The OpenAI guide, although invaluable and authoritative, is very technical and often difficult to understand. The goal of my work-in-progress course is to explain and build upon the OpenAI insights. I want to make their prompt engineering guidance more accessible to legal professionals, to show how their six prompt engineering strategies can be applied in legal work. That is the key to empowering any attorney to transform their legal practice with AI.

Once completed, the Prompt Engineering Course for Legal Professionals will be very detailed. It will probably require over twenty hours for a student to complete, and include homework and tests. More on all that later, when (not if!) it is finally finished. For now, this article offers a short introduction to OpenAI’s six strategies and how they can be applied as a kind of lawyer’s guide to future practice.

Success of Your AI Output Depends on You. AI image by Ralph Losey using Midjourney.

Why Prompt Engineering Is Important

Prompt Engineering (“PE”) is the art of chatting with generative AI to get the intended answers and guidance you need. It also serves to minimize the errors that are still inherent to AI. PE involves learning how to craft questions and commands that guide large language model AIs like ChatGPT to generate more accurate, relevant, and useful responses. It is in essence a new type of wordsmith activity, involving the precise use of clear instructions, clear prompts, which the AI then responds to.

The analytical linguistic skills necessary to control AI by prompts should be learned by everyone who uses it, because the quality of an AI’s output depends, at least in part, on the input it receives. Well-designed prompts improve AI performance and minimize misunderstandings and errors. These include over-creative errors, such as just making up answers, including case law, with no basis in reality, called ‘hallucinations’ in AI jargon. They also include lesser-known but related errors, such as sycophancy, failure to admit the AI does not know the answer to a question posed, and even failure to admit that its prior responses were wrong.

Prompt engineering can greatly reduce risks of GPT hallucinations, sycophancy and other errors. Image by Ralph Losey using his GPT tool, Visual Muse.

Good prompts help ensure that the AI’s responses are legally sound and reliable. Of course, all legal work by AI should still be verified and controlled by humans. Legal practice cannot be delegated to AI, but AI can be a powerful assistant.
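
As a concrete example of this kind of prompt control, here is a minimal sketch using OpenAI’s Python library (v1 chat interface). The system message, model name, and question are my own illustrative choices, not anything prescribed by OpenAI; the point is that explicit instructions to admit uncertainty and never invent citations are themselves part of prompt engineering.

```python
from openai import OpenAI  # assumes the openai Python package, v1 interface

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A system prompt that tells the model to admit uncertainty rather than
# invent authority -- one guard against hallucination and sycophancy.
system_prompt = (
    "You are a legal research assistant. Answer only from the material "
    "the user supplies. If the material does not answer the question, "
    "say 'I don't know' instead of guessing, and never invent citations."
)

response = client.chat.completions.create(
    model="gpt-4",      # illustrative model choice
    temperature=0,      # low temperature curbs 'over-creative' output
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Does the attached memo address venue?"},
    ],
)
print(response.choices[0].message.content)
```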

The Six Strategies of OpenAI’s Prompt Engineering

  1. Writing Clear Prompt Instructions: The cornerstone of effective interaction with AI is clarity in communication. Lawyers often deal with complex issues that require precise language. By providing clear, specific instructions, legal professionals can guide the AI to deliver targeted and applicable responses, enhancing the quality of legal research, drafting, and analysis.
  2. Providing Reference Texts: AI models can produce more accurate answers when supplemented with relevant texts. In legal settings, referencing statutes, case law, or legal articles can direct the AI to base its responses on established legal doctrines, leading to more reliable and contextually appropriate answers. This is the core idea of Retrieval Augmented Generation, whereby generative AI works from existing case law, regulations, and other texts, and cites to them in its responses.
  3. Splitting Complex Tasks Into Simpler Subtasks: Legal issues are often multifaceted. Breaking them down into simpler, more manageable components enables AI to handle each aspect thoroughly. This strategy is particularly useful in document review and legal research, ensuring comprehensive coverage of all relevant points (a minimal sketch of this appears below the list).
  4. Giving the Model Time to ‘Think’: While AI doesn’t ‘think’ in the human sense, structuring prompts to simulate step-by-step reasoning can lead to more thorough and reasoned legal analysis. This tactic is akin to guiding a junior lawyer through a legal problem, ensuring they consider all angles and implications.
  5. Using External Tools: Integrating AI with external tools, like legal databases or current statutes, can significantly enhance the accuracy and relevance of the AI’s outputs. This synergy is crucial in law, where staying updated with the latest legal developments is vital.
  6. Testing Changes Systematically: Regular testing and refinement of AI prompts ensure that they remain effective over time. This strategy is akin to continuous legal education, where lawyers constantly update their knowledge and skills to maintain professional competence.
Writing clear instructions is the polestar of GPT success. Image by Losey using Midjourney.
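
Here is the sketch promised above: a hypothetical illustration of Strategies 1, 3, and 4 combined, with one clear instruction per call, a complex review split into chained subtasks, and a request for step-by-step reasoning before any conclusion. The file name and prompt wording are invented for the example.

```python
from openai import OpenAI  # assumes the openai Python package, v1 interface

client = OpenAI()

def ask(prompt: str) -> str:
    """One clear, specific instruction per call (Strategy 1)."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

memo = open("draft_memo.txt").read()  # hypothetical input file

# Strategy 3: split one complex review into simpler, chained subtasks.
issues = ask(f"List the distinct legal issues raised in this memo:\n\n{memo}")

# Strategy 4: ask for step-by-step reasoning before any conclusion.
analysis = ask(
    "For each issue below, reason step by step, then state a conclusion. "
    f"If you are unsure, say so rather than guessing.\n\n{issues}"
)
print(analysis)
```

Chaining the calls this way also makes each intermediate answer reviewable by the lawyer before the next step runs.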

The Impact on Legal Practice

Improving prompt engineering skills can make a significant difference in legal practice. By mastering these strategies, legal professionals can:

  • Enhance Legal Research and Drafting: AI can assist in drafting legal documents and researching case law or statutes, but it requires precise prompts to generate useful outputs. Lawyers adept in prompt engineering can leverage AI to produce high-quality drafts and research efficiently. It also requires intelligent use of Retrieval Augmented Generation, whereby prompts are run against existing, verified legal databases (a minimal sketch follows this list).
  • Reduce Errors and Misinterpretations: Inaccurate AI responses can lead to legal missteps. Effective prompt engineering minimizes such risks, ensuring that the AI’s outputs are dependable.
  • Stay Current with Legal Developments: The legal landscape is constantly evolving. Prompt engineering skills, especially using external tools and systematic testing, help lawyers keep up-to-date with the latest laws and judicial decisions.
  • Improve Client Services: With AI’s assistance in routine tasks, lawyers can focus more on complex aspects of legal practice, improving overall client service.
  • Ethical Compliance and Risk Management: Understanding AI’s capabilities and limitations through prompt engineering is crucial for ethical legal practice and managing the risks associated with AI use. Never cite a case that only ChatGPT can find!
Everyone could use AI helpers to improve their work. Image by Losey using Midjourney.
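
For the Retrieval Augmented Generation point made in the list above, here is a minimal sketch of the pattern. The two-entry “database” and the naive keyword retrieval are stand-ins of my own invention; a real system would query a verified service such as Westlaw or Lexis and use embedding-based search.

```python
from openai import OpenAI  # assumes the openai Python package, v1 interface

client = OpenAI()

# Stand-in for a verified legal database: case name -> vetted summary.
VERIFIED_SOURCES = {
    "Da Silva Moore v. Publicis Groupe": "Approved the use of predictive "
        "coding (technology assisted review) in e-discovery.",
    "Mata v. Avianca": "Sanctioned lawyers who filed AI-fabricated case "
        "citations without verifying them.",
}

def retrieve(query: str) -> str:
    """Naive keyword retrieval; real RAG would use embedding search."""
    words = {w.lower().strip("?.,") for w in query.split()}
    hits = [f"{name}: {text}" for name, text in VERIFIED_SOURCES.items()
            if words & set(text.lower().split())]
    return "\n".join(hits) or "NO MATCHING SOURCES"

question = "What happened to lawyers who cited fabricated cases?"
context = retrieve(question)

# The prompt confines the model to the vetted sources it was handed.
response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[{"role": "user", "content":
        "Answer using ONLY the vetted sources below, citing them by name. "
        f"If they are insufficient, say so.\n\nSOURCES:\n{context}\n\n"
        f"QUESTION: {question}"}],
)
print(response.choices[0].message.content)
```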

Enhancing Legal Expertise with AI

In the legal profession, where the precision of language is paramount, the ability to effectively prompt AI can transform how legal analysis, research, and documentation are conducted. By refining prompts, legal professionals can extract nuanced and specific information from legal databases, case law, and statutes, much more efficiently than by use of traditional research methods alone. This not only saves time but also increases the breadth of resources that can be consulted within tight deadlines. Still, you must always check citations, review cases yourself and verify all work.

Ethical Considerations and AI in Law

The ethical implications of using AI in legal practice cannot be overstated. Misguided or poorly constructed prompts can lead to incorrect legal advice or analysis, raising ethical concerns. We have already seen this in the well-known case Mata v. Avianca (S.D.N.Y. June 22, 2023), where lawyers were sanctioned for citing fake cases. Also see Artigliere, Are We Choking the Golden Goose (EDRM 12/05/23). Retired Judge Ralph Artigliere cautions against over-regulation that might discourage lawyer use of AI. The answer is lawyer training, not stringent regulations. It is imperative that lawyers become proficient in prompt engineering, to better control the errors and better align AI outputs with legal standards and ethical guidelines. This is vital to maintaining the integrity of legal advice and upholding the profession’s ethical standards.

Robot judge. Image by Losey using Midjourney.

The Importance of Prompt Engineering to Future Legal Practice

The new GPT AI tools are incredibly powerful, but the quality of their output depends on:

  1. the clarity of the instructions they receive;
  2. the sufficiency and context of any reference text provided;
  3. the ability to decompose complex tasks into simpler ones;
  4. having the time to “think”;
  5. the use of external tools when necessary; and,
  6. the systematic testing of changes.

If you learn to use these strategies, you can significantly enhance the effectiveness of your interactions with all GPT models. When strategically crafted prompts go into the AI, gold-standard responses can come out. Conversely, the old computer saying applies: “Garbage In, Garbage Out.” For lawyers, negligent use of AI can lead to a garbage heap of problems, including expensive remedial efforts, lost clients, angry judges (worse than angry birds), and maybe even sanctions.

GIGO. Image by Losey using Midjourney.

In these early days of AI, legal ethics and common sense require that you verify all GPT output very carefully. Your trust level should be low and your skepticism high. Remember that it may seem like you are chatting with a great savant, but never forget, ChatGPT can be an idiot savant if not fed properly engineered prompts. GPTs are prone to forgetfulness, memory limitations, hallucinations, and outright errors. It may seem like a genius in a box, but it is not. That is one reason prompt engineering is so important: to keep the flattering bots under control. This will no doubt get better in the future, but the attitude should always remain: trust but verify.
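
As one concrete form of “trust but verify,” here is a minimal, hypothetical sketch that flags case names in an AI answer that do not appear in a vetted list. The allow-list, the simple two-word name pattern, and the sample answer are all invented for illustration; a real checker would parse full reporter citations and query a citator service.

```python
import re

# Hypothetical allow-list of case names confirmed in a verified database.
KNOWN_CASES = {"Mata v. Avianca"}

def flag_unverified(ai_answer: str) -> list[str]:
    """Flag simple 'Party v. Party' names absent from the vetted list."""
    cited = re.findall(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+\b", ai_answer)
    return [case for case in cited if case not in KNOWN_CASES]

answer = "Venue is proper. See Mata v. Avianca and Smith v. Jones."
print(flag_unverified(answer))  # -> ['Smith v. Jones']; verify before filing
```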

Prompt engineering is a critical skill that everyone needs to learn, including legal professionals. Educational programs should make it easier for the profession to move smoothly into an AI future, a future where AI is an integral part of everything we do. We can all be more productive and more intelligent than we are now, and still be safe. Fear not, friends: a little bit of prompt engineering knowledge will go a long way to ensure your effective use of AI-enhanced computers. They will finally be smart, super-smart, and for that reason much more fun and enjoyable to have around the office.

Make law fun again, bring in the AI bots! Image by Losey using Visual Muse.

You Need to Learn These Prompt Engineering Skills and Not Be Tempted to Just Turn Everything Over to Vendors

In the midst of AI’s rapid evolution, some AI companies are already suggesting that prompt engineering skills will soon be unnecessary. They claim that future software advancements will embed all necessary prompts anyone may need, reducing the user’s role to simply pressing buttons. As tempting as this siren call may be, the promise of future software, often referred to as ‘vaporware,’ is misleading. No software currently exists that can fully automate the nuanced and complex task of prompting effective legal analysis. Lawyers need to embrace the future in a self-reliant manner.

Still, lawyers will continue to need verified legal databases to run certain types of prompts against, a/k/a Retrieval Augmented Generation. Here vendors can be expected to play a role for many years, including assistance with inbuilt prompts.

Other approaches that delegate all use of AI to vendors raise critical ethical and practical questions. Can lawyers, bound by a duty of competence, ethically delegate their responsibilities of AI use to outside businesses? Does relying on a vendor to filter or conduct AI interactions compromise a lawyer’s duty to their clients?

While AI tools can significantly augment a lawyer’s capabilities, they cannot replace the nuanced understanding and ethical judgment of human lawyers. We have years of legal training in a variety of settings and, this is key, we have human experience with how law takes place in the “real world.” All that ChatGPTs and other AIs know is the mere world of words and language. There is a lot more to life and law than that!

Transform your practice today by learning new AI PE skills. Image by Losey using Midjourney.

Law is by nature a complex and constantly evolving human undertaking. AI needs the direct guidance and control of skilled legal practitioners to stay on track. Without a deep understanding of prompt engineering, lawyers may be ill-equipped to do so: unable to guide AI effectively or to critically evaluate its outputs.

Conclusion: Embracing a Responsible AI Future for the Legal Profession

As we embrace an AI-augmented legal landscape, mastery of prompt engineering is not just about efficiency but also responsibility. AI offers immense potential to transform legal practice, but its effectiveness hinges on the quality of our prompts. New educational initiatives are needed to equip the legal community to navigate this new era with confidence, ensuring AI is a boon, not a bane, to the legal profession.

Legal education should help legal professionals to operate the new AI tools themselves. They are too important and powerful to delegate to third parties. With the help of AI, and dedicated educators, everyone, especially wordsmiths like lawyers, can learn prompt engineering. Everyone can learn to use the new smart computers. AI is a great new tool, a powerful thinking tool, that can augment your own intelligence and legal work. It should be carefully embraced, not feared. Education about AI is the way forward.

Embrace AI, but with caution. Oil painting style AI image by Losey using Midjourney.


Ralph Losey Copyright 2024 — All Rights Reserved. See applicable Disclaimer to the course and all other contents of this blog and related websites. Watch the full avatar disclaimer and privacy warning here.