New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’


Ralph Losey. Published February 26, 2024.

Extensive Turing tests by a team of respected scientists have shown that “chatbots’ behaviors tend to be more cooperative and altruistic than the median human, including being more trusting, generous, and reciprocating.”

The article quoted above and featured here is by Qiaozhu Mei, Yutong Xie, Walter Yuan, and Matthew O. Jackson, A Turing test of whether AI chatbots are behaviorally similar to humans (PNAS Research Article, February 22, 2024). This is a significant article and the full text can be found here. Hopefully, the research explained here will begin to counter the widespread fear-mongering now underway by the media and others. Many people are afraid of AI, too fearful to even try it. Although some concern is appropriate, the fear is misplaced. The new generative AIs are a lot nicer and more trustworthy than most people. As some say, they are more human than human. See More human than human: LLM-generated narratives outperform human-LLM interleaved narratives (ACM, 6/19/23).

Unlike some articles on AI studies, this paper is based on solid science

There is a lot of junk science and pseudo-scientific writing in circulation now. You have to be very careful about what you rely on. This article and the research behind it are of high caliber, which is why I recommend it.

Prof Jackson, Stanford profile photo

The lead author is Professor Matthew O. Jackson, Department of Economics, Stanford University. He was recently named a fellow of the American Association for the Advancement of Science. His website with information links can be found here. He is the William D. Eberle Professor of Economics at Stanford University and an external faculty member of the Santa Fe Institute. Professor Jackson was at Northwestern University and Caltech before joining Stanford, and received his BA from Princeton University in 1984 and PhD from Stanford in 1988.

Prof Mei, U. Mich. profile photo

The co-authors are all from the School of Information, University of Michigan, under the leadership of Professor Qiaozhu Mei. He is a Professor of Information at the School of Information and Professor of Electrical Engineering and Computer Science, College of Engineering, at the University of Michigan. He is the founding director of the Master of Applied Data Science program at Michigan. Here is Qiaozhu Mei’s personal website, with information links. Professor Mei received his PhD from the Department of Computer Science at the University of Illinois at Urbana-Champaign and his Bachelor’s degree from Peking University.

The other two authors are Yutong Xie and Walter Yuan, both PhD candidates at Michigan. The article was submitted by Professor Jackson to PNAS on August 12, 2023; accepted January 4, 2024; and reviewed by Ming Hsu, Juanjuan Meng, and Arno Riedl.

Significance and Abstract of the Article

Here is how the authors of this research article on Turing tests and AI chatbot behavior describe its significance.

As AI interacts with humans on an increasing array of tasks, it is important to understand how it behaves. Since much of AI programming is proprietary, developing methods of assessing AI by observing its behaviors is essential. We develop a Turing test to assess the behavioral and personality traits exhibited by AI. Beyond administering a personality test, we have ChatGPT variants play games that are benchmarks for assessing traits: trust, fairness, risk-aversion, altruism, and cooperation. Their behaviors fall within the distribution of behaviors of humans and exhibit patterns consistent with learning. When deviating from mean and modal human behaviors, they are more cooperative and altruistic. This is a step in developing assessments of AI as it increasingly influences human experiences.

Mei Q, Xie Y, Yuan W, Jackson M (2024) A Turing test of whether AI chatbots are behaviorally similar to humans

The authors, of course, also prepared an Abstract of the article. It provides a good overview of their experiment (emphasis added).

We administer a Turing test to AI chatbots. We examine how chatbots behave in a suite of classic behavioral games that are designed to elicit characteristics such as trust, fairness, risk-aversion, cooperation, etc., as well as how they respond to a traditional Big-5 psychological survey that measures personality traits. ChatGPT-4 exhibits behavioral and personality traits that are statistically indistinguishable from a random human from tens of thousands of human subjects from more than 50 countries. Chatbots also modify their behavior based on previous experience and contexts “as if” they were learning from the interactions and change their behavior in response to different framings of the same strategic situation. Their behaviors are often distinct from average and modal human behaviors, in which case they tend to behave on the more altruistic and cooperative end of the distribution. We estimate that they act as if they are maximizing an average of their own and partner’s payoffs.

A Turing test of whether AI chatbots are behaviorally similar to humans

Background on The Turing Test

The Turing Test was first proposed by Alan Turing in 1950 in his now famous article, “Computing Machinery and Intelligence.” Turing’s paper considered the question, “Can machines think?” Turing said that because the words “think” and “machine” cannot be clearly defined, we should “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.” He explained the better question in terms of what he called the “Imitation Game.” The game involves three participants in isolated rooms: a computer (which is being tested), a human, and a (human) judge. The judge can chat with both the human and the computer by typing into a terminal. Both the computer and competing human try to convince the judge that they are the human. If the judge cannot consistently tell which is which, then the computer wins the game.

By changing the question of whether a computer thinks into whether a computer can win the Imitation Game, Turing dodges the difficult, many argue impossible, philosophical problem of pre-defining the verb “to think.” Instead, the Turing Test focuses on the performance capacities that being able to think makes possible. Scientists have been playing with the Turing Test – Imitation Game ever since.

Turing Test Seventy-Five Years After Its Proposal

The scientists in the latest article go way beyond the testing of seventy-five years ago and actually administer tests assessing the AI’s behavioral tendencies and “personality.” In their words:

[W]e ask variations of ChatGPT to answer psychological survey questions and play a suite of interactive games that have become standards in assessing behavioral tendencies, and for which we have extensive human subject data. . . . Each game is designed to reveal different behavioral tendencies and traits, such as cooperation, trust, reciprocity, altruism, spite, fairness, strategic thinking, and risk aversion. The personality profile survey and the behavioral games are complementary as one measures personality traits and the other behavioral tendencies, which are distinct concepts; e.g., agreeableness is distinct from a tendency to cooperate. . . .

In line with Turing’s suggested test, we are the human interrogators who compare the ChatGPTs’ choices to the choices of tens of thousands of humans who faced the same surveys and game instructions. We say an AI passes the Turing test if its responses cannot be statistically distinguished from randomly selected human responses.

A Turing test of whether AI chatbots are behaviorally similar to humans
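One of the benchmark games behind these comparisons is the trust game. As a rough illustration only (the tripling multiplier and payoff rules below follow the standard experimental convention, not necessarily the paper’s exact parameters), the payoffs can be sketched as:

```python
# A minimal sketch of one benchmark behavioral game: the trust game.
# The tripling multiplier is the standard experimental convention and is
# assumed here; the study's exact parameters may differ.

def trust_game(endowment: float, sent: float, returned_fraction: float):
    """Compute (investor, trustee) payoffs for one round of the trust game.

    The investor sends part of an endowment; the sent amount is tripled in
    transit; the trustee then returns some fraction of what was received.
    """
    assert 0 <= sent <= endowment
    assert 0 <= returned_fraction <= 1
    received = 3 * sent                      # sent amount is tripled
    returned = returned_fraction * received  # trustee's reciprocation
    investor = endowment - sent + returned
    trustee = received - returned
    return investor, trustee

# A purely self-interested trustee returns nothing; a reciprocating one
# (the tendency the study attributes to ChatGPT-4) returns enough that
# the investor's trust pays off.
print(trust_game(endowment=10, sent=5, returned_fraction=0.0))  # (5.0, 15.0)
print(trust_game(endowment=10, sent=5, returned_fraction=0.5))  # (12.5, 7.5)
```

Behaving “on the more altruistic and cooperative end of the distribution” means, in this game, choosing higher values of `sent` and `returned_fraction` than the median human subject does.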

Results of the Turing Test on ChatGPT4

Professors Jackson and Mei, and their students, found that GPT3.5 flunked the Turing test based on their sophisticated analysis, but GPT4.0 passed it with flying colors. The finding that humans and computers were indistinguishable is not too surprising for humans using ChatGPTs every day, but the startling part was the finding that ChatGPT4.0 was better behaved than the humans tested. So much for scary, super-smart AI. They are nicer than us; well, at least ChatGPT4 is. It seems the best way now to tell GPT4 from humans is to measure ethical standards. To quote a famous Pogo meme, “We have met the enemy and he is us.” For now at least, we should fear our fellow humans, not generative AIs.

Here is how Professors Jackson and Mei explained their ethical AI finding.

The behaviors are generally indistinguishable, and ChatGPT-4 actually outperforms humans on average . . . When they do differ, the chatbots’ behaviors tend to be more cooperative and altruistic than the median human, including being more trusting, generous, and reciprocating. . . .

ChatGPT’s decisions are consistent with some forms of altruism, fairness, empathy, and reciprocity rather than maximization of its personal payoff. . . . These findings are indicative of ChatGPT-4’s increased level of altruism and cooperation compared to the human player distribution.

A Turing test of whether AI chatbots are behaviorally similar to humans

So what does this all mean? Is ChatGPT4 capable of thought or not? The esteemed professors here do not completely dodge that question like Turing. They conclude, as the data dictate, that AI is better than us, “more human than humans.” Here are their words (emphasis added).

We have found that AI and human behavior are remarkably similar. Moreover, not only does AI’s behavior sit within the human subject distribution in most games and questions, but it also exhibits signs of human-like complex behavior such as learning and changes in behavior from role-playing. On the optimistic side, when AI deviates from human behavior, the deviations are in a positive direction: acting as if it is more altruistic and cooperative. This may make AI well-suited for roles necessitating negotiation, dispute resolution, or caregiving, and may fulfill the dream of producing AI that is “more human than human.” This makes them potentially valuable in sectors such as conflict resolution, customer service, and healthcare.

A Turing test of whether AI chatbots are behaviorally similar to humans, 3. Discussion.

Conclusion

This new study not only suggests that our fears of generative AI are overblown, it shows that AI is perfectly suited for lawyer activities such as negotiation and dispute resolution. The same goes for caregivers such as physicians and therapists.

This is a double plus for the legal profession. About half of all lawyers do some kind of litigation or other, and all lawyers negotiate. That is what lawyers do. We can all learn from the skills of ChatGPT4.

The new Chatbots will not kill us. They are not Terminators. They will help us. They are literally ‘more human than human,’ which I learned is also the name of a famous heavy metal song. With this study, and more like it, which I expect will be forthcoming soon, the public at large, including lawyers, should start to overcome their irrational fears. They should stop fearing generative AI, and start to use it. Lawyers will especially benefit from the new partnership.

Ralph Losey 2024 Copyright. — All Rights Reserved


Transform Your Legal Practice with AI: A Lawyer’s Guide to Embracing the Future


Ralph Losey. Published January 24, 2024.

In a world increasingly influenced by artificial intelligence, the legal profession stands at a crossroads. Lawyers must adopt the new AI tools and quickly learn how to use them effectively and safely. That means learning the skill techs call “prompt engineering.” Fortunately, that just means word engineering, the art of knowing how to talk to ChatGPTs. The new generative AIs are designed to be controlled by natural language, not computer code. This is the easiest kind of engineering possible for lawyers. The precise use of language is every lawyer’s stock-in-trade, and so prompt engineering is within every lawyer’s capacity.

Wordsmith and Chat expert. Images by Ralph Losey using his GPT, Visual Muse.

Smart Computers Are Finally Here to Help Lawyers Do their Jobs

Ever since lawyers first started using personal computers in the eighties, we’ve eagerly awaited the day when they would get smart. We were often told that computers would soon progress from mere processing units to intelligent assistants. Dreamy promises of artificial intelligence were made, but never delivered. We have been stuck for over forty years with dumb computers that can barely catch spelling errors. Finally, a breakthrough has been made. With the advent of generative AI, the long wait is over, and the dream of smart computers is becoming a reality.

Ralph in the 80s trapped in dumb computers. Image created by Ralph in 2023 using various AI image tools.

The arrival of new generative AI, ChatGPT, is something that should be greeted by all legal professionals, indeed all computer users, with relief and enthusiasm, but also with a reasonable measure of care. We are finally moving from the horse-and-buggy stage of computing to fast-moving cars, and, until you learn how to drive, they can be dangerous.

My Background and Use of AI in the Law

In 2012, I was lucky to have the opportunity to work on the landmark Da Silva Moore case. See Austin, The Da Silva Moore Case Ten Years Later (EDRM 2/23/22) and Austin, A Case Where TAR Wasn’t Required (EDRM 8/9/22). Da Silva Moore established the legality of using a special type of AI, a/k/a “active machine learning,” which is typically referred to in the law as predictive coding. I then began to specialize in this subfield of e-discovery. Thereafter, at Jackson Lewis I supervised the use of predictive coding in thousands of lawsuits across the country, and also taught and wrote about this type of AI. The emergence of truly smart generative AI in late 2022, with OpenAI’s release of GPT3.5, rekindled my enthusiasm for legal tech. The long wait for smart computers was over. I put all thoughts of retirement aside.

I have been using the new AI tools in my legal practice ever since late 2022 on a limited basis, and for non-billable research and self-education on a nearly full-time basis. My studies have centered on prompt engineering, the art of talking to ChatGPTs, and my primary guide has been the instruction and best-practices advice in OpenAI’s prompt engineering instruction guide. OpenAI is, of course, the company that created and first released ChatGPT to the world.

Ready or not, here comes Chat GPT. Image by Ralph Losey using Midjourney AI tool.

My research includes extensive experimentation with GPTs, including my favorite project of seeing how ChatGPT4 could perform as an appellate judge. See, e.g., Circuits in Session: Analysis of the Quality of ChatGPT4 as an Appellate Court Judge. I also did research on AI-related policy and security issues, attending DefCon 31 in August 2023 and participating in the AI hackathon. See, e.g., DefCon Chronicles: Sven Cattell’s AI Village, ‘Hack the Future’ Pentest and His Unique Vision of Deep Learning and Cybersecurity. Of course, you may have noticed that I make time to write extensively on generative AI and the law. Writing to share what I learn with my fellow professionals helps deepen my own understanding.

Trying to teach this is also a big help. I have recently done a few lectures, but my biggest teaching work so far has been behind the scenes. I have been “secretly” working on an online instructional program, Prompt Engineering Course for Legal Professionals, which will be based primarily on OpenAI’s prompt engineering instruction guide. The OpenAI guide, although invaluable and authoritative, is very technical and often difficult to understand. The goal of my work-in-progress course is to explain and build upon the OpenAI insights. I want to make their prompt engineering insights more accessible to legal professionals, to show how their six prompt engineering strategies can be applied in legal work. That is the key to empowering any attorney to transform their legal practice with AI.

Once completed, the Prompt Engineering Course for Legal Professionals will be very detailed. It will probably require over twenty hours for a student to complete, and will include homework and tests. More on all that later, when (not if!) it is finally finished. For now, this blog post offers a short introduction to OpenAI’s Six Strategies and how they can be applied as a kind of lawyer’s guide to future practice.

Success of Your AI Output Depends on You. AI image by Ralph Losey using Midjourney.

Why Prompt Engineering Is Important

Prompt Engineering (“PE”) is the art of chatting with generative AI to get the intended answers and guidance you need. It also serves to minimize the errors that are still inherent in AI. PE involves learning how to craft questions and commands that guide large language model AIs like ChatGPT to generate more accurate, relevant, and useful responses. It is in essence a new type of wordsmith activity involving the precise use of clear instructions, clear prompts, to which the AI then responds.

The analytical linguistic skills necessary to control AI by prompts should be learned by everyone who uses it, because the quality of an AI’s output depends, at least in part, on the input it receives. Well-designed prompts improve AI performance and minimize misunderstandings and errors. These include over-creative errors, such as just making up answers, including case law, with no basis in reality, called ‘hallucinations’ in AI jargon, as well as lesser-known but related errors, such as sycophancy, failure to admit the AI does not know the answer to a question posed, and even failure to admit that its prior responses were wrong.
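As one hypothetical illustration of how clear instructions can suppress hallucinated citations, a well-designed legal prompt can be built up from explicit guardrails. The wording and the helper function below are illustrative only, not an official OpenAI template:

```python
# Illustrative (hypothetical) prompt template showing how explicit
# instructions can discourage fabricated answers and fake citations.

def build_legal_prompt(question: str, reference_text: str) -> str:
    """Assemble a prompt that confines the AI to a verified reference text."""
    instructions = (
        "You are a legal research assistant. Answer ONLY from the reference "
        "text below. If the answer is not in the reference text, reply "
        "'I do not know.' Never invent case citations."
    )
    return (
        f"{instructions}\n\n"
        f"Reference text:\n{reference_text}\n\n"
        f"Question:\n{question}"
    )

prompt = build_legal_prompt(
    question="What obligations does this clause impose on the tenant?",
    reference_text="[paste a verified excerpt of the contract or case here]",
)
print(prompt)
```

The key move is the explicit permission to say “I do not know,” which counters both hallucination and the sycophantic reluctance to admit ignorance discussed above.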

Prompt engineering can greatly reduce risks of GPT hallucinations, sycophancy and other errors. Image by Ralph Losey using his GPT tool, Visual Muse.

Good prompts help ensure that the AI’s responses are legally sound and reliable. Of course, all legal work by AI should still be verified and controlled by humans. Legal practice cannot be delegated to AI, but it can be a powerful assistant.

The Six Strategies of OpenAI’s Prompt Engineering

  1. Writing Clear Prompt Instructions: The cornerstone of effective interaction with AI is clarity in communication. Lawyers often deal with complex issues that require precise language. By providing clear, specific instructions, legal professionals can guide the AI to deliver targeted and applicable responses, enhancing the quality of legal research, drafting, and analysis.
  2. Providing Reference Texts: AI models can produce more accurate answers when supplemented with relevant texts. In legal settings, referencing statutes, case law, or legal articles can direct the AI to base its responses on established legal doctrines, leading to more reliable and contextually appropriate answers. This is the core meaning of Retrieval Augmented Generation, whereby generative AI works from existing case law, regulations, and other texts and cites to them in its responses.
  3. Splitting Complex Tasks Into Simpler Subtasks: Legal issues are often multifaceted. Breaking them down into simpler, more manageable components enables AI to handle each aspect thoroughly. This strategy is particularly useful in document review and legal research, ensuring comprehensive coverage of all relevant points.
  4. Giving the Model Time to ‘Think’: While AI doesn’t ‘think’ in the human sense, structuring prompts to simulate step-by-step reasoning can lead to more thorough and reasoned legal analysis. This tactic is akin to guiding a junior lawyer through a legal problem, ensuring they consider all angles and implications.
  5. Using External Tools: Integrating AI with external tools, like legal databases or current statutes, can significantly enhance the accuracy and relevance of the AI’s outputs. This synergy is crucial in law, where staying updated with the latest legal developments is vital.
  6. Testing Changes Systematically: Regular testing and refinement of AI prompts ensure that they remain effective over time. This strategy is akin to continuous legal education, where lawyers constantly update their knowledge and skills to maintain professional competence.
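The first four strategies can be sketched together as a single chat-style message list, using the role/content format common to chat-completion APIs. The prompts and the task breakdown below are illustrative, not taken verbatim from OpenAI’s guide:

```python
# Illustrative sketch of strategies 1-4 as a chat-style message list.
# The function name, prompts, and subtasks are hypothetical examples.

def review_contract_messages(clause: str) -> list[dict]:
    """Build messages for a structured AI review of a single contract clause."""
    system = (
        "You are a contracts associate. "                # strategy 1: clear instructions
        "Base your answer only on the clause provided."  # strategy 2: reference text
    )
    # Strategy 3: split the review into simpler subtasks.
    # Strategy 4: ask for step-by-step reasoning before any conclusion.
    user = (
        f"Clause:\n{clause}\n\n"
        "Step 1: Summarize the clause in plain English.\n"
        "Step 2: List each party's obligations.\n"
        "Step 3: Flag any ambiguous terms.\n"
        "Work through each step before giving a final assessment."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = review_contract_messages("Tenant shall maintain the premises in good repair.")
print(msgs[0]["role"], "->", msgs[1]["role"])  # system -> user
```

Strategies 5 and 6 operate around this structure: external tools supply the verified clause text, and systematic testing compares outputs as the prompts are refined.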
Writing clear instructions is the polestar of GPT success. Image by Losey using Midjourney.

The Impact on Legal Practice

Improving prompt engineering skills can make a significant difference in legal practice. By mastering these strategies, legal professionals can:

  • Enhance Legal Research and Drafting: AI can assist in drafting legal documents and researching case law or statutes, but it requires precise prompts to generate useful outputs. Lawyers adept in prompt engineering can leverage AI to produce high-quality drafts and research efficiently. It also requires intelligent use of Retrieval Augmented Generation whereby prompts are run against existing, verified legal databases.
  • Reduce Errors and Misinterpretations: Inaccurate AI responses can lead to legal missteps. Effective prompt engineering minimizes such risks, ensuring that the AI’s outputs are dependable.
  • Stay Current with Legal Developments: The legal landscape is constantly evolving. Prompt engineering skills, especially using external tools and systematic testing, help lawyers keep up-to-date with the latest laws and judicial decisions.
  • Improve Client Services: With AI’s assistance in routine tasks, lawyers can focus more on complex aspects of legal practice, improving overall client service.
  • Ethical Compliance and Risk Management: Understanding AI’s capabilities and limitations through prompt engineering is crucial for ethical legal practice and managing the risks associated with AI use. Never cite a case that only ChatGPT can find!
Everyone could use AI helpers to improve their work. Image by Losey using Midjourney.

Enhancing Legal Expertise with AI

In the legal profession, where the precision of language is paramount, the ability to effectively prompt AI can transform how legal analysis, research, and documentation are conducted. By refining prompts, legal professionals can extract nuanced and specific information from legal databases, case law, and statutes, much more efficiently than by use of traditional research methods alone. This not only saves time but also increases the breadth of resources that can be consulted within tight deadlines. Still, you must always check citations, review cases yourself and verify all work.

Ethical Considerations and AI in Law

The ethical implications of using AI in legal practice cannot be overstated. Misguided or poorly constructed prompts can lead to incorrect legal advice or analysis, raising ethical concerns. We have already seen this in the well-known case Mata v. Avianca (S.D.N.Y. June 22, 2023), where lawyers were sanctioned for citing fake cases. Also see Artigliere, Are We Choking the Golden Goose (EDRM 12/05/23). Retired Judge Ralph Artigliere cautions against over-regulation that might discourage lawyer use. The answer is lawyer training, not stringent regulation. It is imperative that lawyers become proficient in prompt engineering, to better control the errors and better align AI outputs with legal standards and ethical guidelines. This is vital to maintain the integrity of legal advice and uphold the profession’s ethical standards.

Robot judge. Image by Losey using Midjourney.

The Importance of Prompt Engineering to Future Legal Practice

The new GPT AI tools are incredibly powerful, but the quality of their output depends on:

  1. the clarity of the instructions they receive;
  2. sufficient context, including any reference texts provided;
  3. the ability to decompose complex tasks into simpler ones;
  4. having the time to “think”;
  5. the use of external tools when necessary; and,
  6. the systematic testing of changes.

If you learn to use these strategies, you can significantly enhance the effectiveness of your interactions with all GPT models. When strategically crafted prompts go into the AI, gold-standard responses can come out. Conversely, the old computer saying applies: “Garbage In, Garbage Out.” For lawyers, negligent use of AI can lead to a garbage heap of problems, including expensive remedial efforts, lost clients, angry judges (worse than angry birds), maybe even sanctions.

GIGO. Image by Losey using Midjourney.

In these early days of AI, legal ethics and common sense require that you verify all GPT output very carefully. Your trust level should be low and your skepticism high. It may seem like you are chatting with a great savant, but never forget, ChatGPT can be an idiot savant if not fed properly engineered prompts. GPTs are prone to forgetfulness, memory limitations, hallucinations, and outright errors. It may seem like a genius in a box, but it is not. That is one reason prompt engineering is so important: to keep the flattering bots under control. This will no doubt get better in the future, but the attitude should always remain: trust but verify.

Prompt engineering is a critical skill that everyone needs to learn, including legal professionals. Educational programs should make it easier for the profession to move smoothly into an AI future, a future where AI is an integral part of everything we do. We can all be more productive and more intelligent than we are now, and still be safe. Fear not friends, a little bit of prompt engineering knowledge will go a long way to ensure your effective use of AI enhanced computers. They will finally be smart, super-smart, and for that reason much more fun and enjoyable to have around the office.

Make law fun again, bring in the AI bots! Image by Losey using Visual Muse.

You Need to Learn These Prompt Engineering Skills and Not Be Tempted to Just Turn Everything Over to Vendors

In the midst of AI’s rapid evolution, some AI companies are already suggesting that prompt engineering skills will soon be unnecessary. They claim that future software advancements will embed all necessary prompts anyone may need, reducing the user’s role to simply pressing buttons. As tempting as this siren call may be, the promise of future software, often referred to as ‘vaporware,’ is misleading. No software currently exists that can fully automate the nuanced and complex task of prompting effective legal analysis. Lawyers need to embrace the future in a self-reliant manner.

Still, lawyers will continue to need verified legal databases to run certain types of prompts against, a/k/a Retrieval Augmented Generation. Here vendors are expected to have a role to play for many years, including assistance with inbuilt prompts.

Other approaches that delegate all use of AI to vendors raise critical ethical and practical questions. Can lawyers, bound by a duty of competence, ethically delegate their responsibilities of AI use to outside businesses? Does relying on a vendor to filter or conduct AI interactions compromise a lawyer’s duty to their clients?

While AI tools can significantly augment a lawyer’s capabilities, they cannot replace the nuanced understanding and ethical judgment of human lawyers. We have years of legal training in a variety of settings and, this is key, we have human experience of how law works in the “real world.” All that ChatGPTs and other AIs know is the mere world of words and language. There is a lot more to life and law than that!

Transform your practice today by learning new AI PE skills. Image by Losey using Midjourney.

Law is by nature a complex and constantly evolving human undertaking. AI needs the direct guidance and control of skilled legal practitioners to stay on track. Without a deep understanding of prompt engineering, lawyers may be ill-equipped to provide that guidance: unable to direct AI effectively or to critically evaluate its outputs.

Conclusion: Embracing a Responsible AI Future for the Legal Profession

As we embrace an AI-augmented legal landscape, mastery of prompt engineering is not just about efficiency but also responsibility. AI offers immense potential to transform legal practice, but its effectiveness hinges on the quality of our prompts. New educational initiatives are needed to equip the legal community to navigate this new era with confidence, ensuring AI is a boon, not a bane, to the legal profession.

Legal education should help legal professionals to operate the new AI tools themselves. They are too important and powerful to delegate to third parties. With the help of AI, and dedicated educators, everyone, especially wordsmiths like lawyers, can learn prompt engineering. Everyone can learn to use the new smart computers. AI is a great new tool, a powerful thinking tool, that can augment your own intelligence and legal work. It should be carefully embraced, not feared. Education about AI is the way forward.

Embrace AI, but with caution. Oil painting style AI image by Losey using Midjourney.


Ralph Losey Copyright 2024 — All Rights Reserved. See applicable Disclaimer to the course and all other contents of this blog and related websites. Watch the full avatar disclaimer and privacy warning here.


Move Fast and Fix Things Using AI: Conclusion to the Plato and Young Icarus Series


Ralph Losey. Published January 3, 2024.

This is the conclusion to the Plato and Young Icarus series. Part One set out the debate in neoclassical terms between those who would slow down AI and those who would speed it up. Part Two shared the story of the great visionary of AI, Ray Kurzweil. Part Three told the tale of Jensen Huang, the CEO and founder of NVIDIA. Part Four shared the story of Yann LeCun, Turing Award winner, hero of France and chief AI scientist of Facebook. In this conclusion we summarize the series in both neoclassical and modern terms, make the case for not slowing down AI and conclude with a call to action. Fly, Icarus fly!

Contemporary Icarus successfully flies to the light. Depicted in a synthesis of neoclassical and digital styles using Ralph’s Visual Muse GPT.

Summary of the Four Part Plato and Young Icarus Series

Part One. Plato and Young Icarus Were Right: do not heed the frightening shadow talk giving false warnings of superintelligent AI. The inaugural post in the blog series challenges the prevalent skepticism and fear surrounding the development of advanced artificial intelligence. An argument is made for an accelerated pace in AI advancement, drawing inspiration from classical myths and philosophical allegories to frame the debate.

Ralph Losey as Icarus in neoclassical, digital style using his Visual Muse.

The article challenges the cautious approach of slowing down AI development and urges more technological progress. This is done by invoking the myth of Icarus and Daedalus, and reversing its message to symbolize the triumph of youthful innovation over caution. Like Icarus, we should aim for the sun and reach for greater intelligence. We should strive to overcome our human limitations. Superstitious, fear-driven myths, such as that of the original Icarus, should be cast aside in favor of the pursuit of knowledge and progress. This reinterpretation aligns with Plato’s Allegory of the Cave, where escaping the cave signifies breaking free from the constraints of conventional thought and embracing enlightened reason.

Leaving Plato’s Cave for the Sun, futuristic digital style using Midjourney 6.0.

The stance of prominent scientists like Max Tegmark and Geoffrey Hinton, who have expressed reservations about the rapid advancement of AI, is based in part on fear-based speculation. These are shadows in a dark cave. We should focus instead on the very real dangers in the here and now. We need AIs to solve those problems. AI is not an autonomous creature with a potential for malevolence. AI is a tool created by humans to enhance and augment our capabilities. The opening post acknowledges the necessity for regulation in AI development, but opposes last-minute fears and second thoughts. There is no going back now to the darkness of Plato’s cave.

Prisoners hesitating to leave Plato’s Cave out of fear of the unknown. Photorealistic style using Midjourney 6.0.

Part Two. Ray Kurzweil: Google’s prophet of superintelligent AI who will not slow down. Ray Kurzweil, a vanguard in the field of artificial intelligence, has long been an advocate for the positive integration of AI into human society. He predicted in 1999 the emergence of general artificial intelligence by 2029. Kurzweil envisions a future where AI serves as an augmentation to human intelligence rather than a threat. He foresees a world where humans and machines merge, leading to new heights of intelligence and creativity. This optimistic outlook sees AI as an enhancer of human capabilities, not a usurper of jobs or a competitor. Kurzweil’s views extend to the practical aspects of AI development; he notably did not sign the Future of Life Institute’s open letter proposing a pause in AI research. He argues that such a pause is impractical and risks hindering progress in vital fields like medicine, education, and renewable energy. He suggests addressing safety concerns in more specific ways that don’t compromise crucial research.

Ray Kurzweil as Icarus astronaut in neoclassical style using Visual Muse.

Kurzweil’s optimism about AI extends to its impact on employment. In an interview with Lex Fridman, he dispels fears of job loss due to AI, drawing parallels with historical trends in automation. He points out that, historically, technology has enhanced human capabilities and increased the workforce, contrary to predictions of mass unemployment. Kurzweil emphasizes that our future with AI will not be one of replacement but of merger, enhancing our intellectual capacities exponentially. He predicts that we will develop more efficient interfaces with AI, moving beyond traditional tools like keyboards and smartphones to more advanced, direct connections with our brains.

Girl in near future with new AI connectivity devices. Photo style using Midjourney 6.0.

Kurzweil’s message is clear: AI is not an external force but an extension of ourselves, destined to amplify our intelligence and abilities manifold. His vision, backed by his work with Google, is of a future where humans seamlessly integrate AI into their cognition, transforming our interaction with technology and our potential for innovation.

Part Three. Jensen Huang’s Life and Company – NVIDIA: building supercomputers today for tomorrow’s AI, his prediction of AGI by 2028 and his thoughts on AI safety, prosperity and new jobs. In the realm of AI’s impact on society, Nvidia’s CEO Jensen Huang presents a pragmatic and forward-thinking perspective, emphasizing the importance of safety and regulation in AI development, akin to the standards established in aviation and automobile industries. Huang’s views align with the insights of tech visionary Ray Kurzweil, yet are grounded in his own extensive entrepreneurial experience. Addressing the potential dangers of AI, Huang maintains a reassuring stance, likening AI to other technologies like electricity, which, despite inherent risks, have become indispensable in modern life. His approach to AI safety is multifaceted, focusing on areas like robotics, self-driving cars, information integrity, and the protection of artistic rights. Huang advocates for a ‘human in the loop’ approach, particularly in the domain of large language models, to ensure that AI’s learning and evolution are guided and checked by human oversight. This cautious approach is mirrored in Nvidia’s method of developing AI, which involves rigorous data collection, training, testing, and validation before deployment.

Jensen Huang as Icarus in neoclassical digital style using Visual Muse and photoshop.

On the topic of AI and employment, Huang again presents an optimistic view. He predicts that AI will lead to more job creation in the near term, driven by the prosperity and expansion resulting from increased productivity. Huang argues that as companies become more successful and profitable due to AI-enhanced productivity, they will expand into new areas, creating more employment opportunities. This perspective challenges the common fear that AI will lead to widespread job losses. Instead, Huang suggests that AI will enable companies to explore new ideas and ventures, thereby generating more jobs. Jensen’s words are shown in the YouTube video that follows.

Jensen Huang on AI and Automation. His words and voice, Nvidia’s avatar and Ralph’s animation.

Jensen Huang acknowledges that while specific jobs may be lost, they will likely be replaced by other roles requiring AI proficiency. Thus, Huang emphasizes the importance of learning AI and using AI tools to augment one’s productivity. This view aligns with historical evidence showing that human ingenuity has consistently created new industries and opportunities over time. Huang’s message is clear: embracing AI and its tools is not just a means to adapt to a changing job market, but also a pathway to future prosperity and growth.

Part Four. Yann LeCun: Turing Award Winner, Chief AI Scientist of Facebook and Hero of France. Yann LeCun received the Turing Award for his work on AI along with Yoshua Bengio and Geoffrey Hinton. The three are commonly known as the Godfathers of AI. In 2023 a striking divergence of views emerged among them concerning continued AI development. Hinton and Bengio expressed concerns, for the first time, about the pace of AI development and called for a pause or slowdown. LeCun disagrees with them and their many followers. He continues to advocate for the accelerated pursuit of AI on an open-source basis. Yann LeCun strongly believes in the great potential of AI and its positive impact on society.

Yann LeCun as Icarus using Visual Muse.

Yann LeCun challenges the prevalent fear-based narratives surrounding AI, arguing that smarter AI systems inherently carry reduced risks. LeCun asserts that AI, as an extension of human intent and control, cannot pose existential threats as depicted in science fiction. Yann LeCun’s arguments are backed by practical examples from his work as lead AI scientist at Facebook, where, for example, AI has been instrumental in improving functions like hate speech detection.

Yann respectfully criticizes his friends and colleagues in the movement to halt or pause AI development as being driven by unrealistic fears. Yann advocates instead for a balanced, open approach to regulation that recognizes both the potential and the challenges of AI. The focus should be on empowering the ‘good guys’ to innovate faster than their nefarious counterparts.

Conclusion and Call to Action

The “Plato and Young Icarus” series is an appeal to embrace the rapid advancement of artificial intelligence and reject fear and skepticism. The cautionary voices of AI veterans like Geoffrey Hinton and Yoshua Bengio, while valuable, should not overshadow the pressing need for advanced AI tools. We need the help of AI to meet the many pressing challenges that humanity now faces.

Image in graphic minimalism style created using Visual Muse.

It’s imperative that we not only understand AI but also actively contribute to shaping its trajectory. Whatever your background or age – programmer, scientist, technologist, attorney, judge, policymaker, educator, caring parent, or interested citizen – your voice and actions matter. Walk the talk. Lead by fearless example. Connect and enjoy the knowledge and understanding enhancements that AI connectivity can bring.

Happy elders with AI enhancements. Photorealism style created using Midjourney 6.0.

The journey with Plato and Young Icarus concludes with a cautious return to the shadowy cave. It leads to a compassionate call to action: to education, not fights. Let’s work together to enlighten the shadowy cave. Let’s get the message across that AI is a tool for human empowerment, not a harbinger of doom. Perhaps this time Socrates can prevail at trial. Plato, Apology of Socrates (399 B.C.E.) (Socrates was convicted of corrupting the young and “not believing in the gods, but in other new spiritual things.”)

Socrates rearguing his defense to his alleged crime of Asebeia, questioning traditional mythological accounts.

I am inclined to think the dry humor of leaders like Jensen Huang will make a difference. Wit and wisdom are powerful persuaders. There is a good chance that this time, unlike 2,500 years ago, those who question will prevail.

Socrates sways the jury this time with his wit and wisdom.

This time polarization can be avoided by using the new AI tools of reason with compassion and humor. This time compromises can be reached to protect society through inclusive, open, practical approaches to government regulation. We can evolve beyond limited animal intelligence and fear.

People uplifted by AI coming together. Watercolor style using Visual Muse.

We can soar towards a future of enhanced human capabilities and understanding. We can innovate too fast for the would-be tyrants hoarding yesterday’s AI. We can transcend our current limitations and forge a better future. The narrative of Icarus, reimagined in this series, symbolizes this journey – not as a tale of hubris and downfall, but as a testament to human aspiration and the transformative power of technology.

Icarus as a symbol of human aspiration and triumph of technology. Shown in neoclassical style using Visual Muse.

Let this, not disasters of stupidity, be our legacy to future generations: a bold, yet wise pursuit of AI, steering humanity towards an era of unparalleled growth and discovery.

AI enhanced human shown in photorealistic digital style using Midjourney 6.0.

Ralph Losey Copyright 2023 – All Rights Reserved

