Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss

February 1, 2025

Ralph Losey. January 30, 2025.

DeepSeek, a startup AI company owned by a Chinese hedge fund, which is in turn owned by a young AI whiz-kid, Liang Wenfeng, claims that its newly released R1 model, built on its V3 base software, was trained inexpensively and without using NVIDIA’s high-end chips, the ones that cannot be exported to China. Wenfeng also claims his software is as good as the best-in-class AI software trained on the best NVIDIA chips. These claims were believed for reasons I will explain. The possibility that these claims might be true caused a sell-off on Wall Street the likes of which had never been seen before. The trillion-dollar market crash included a $593 billion loss in Nvidia’s value, a new one-day record for any company, ever.

Introduction

One reason DeepSeek’s claims triggered a crash is that DeepSeek’s software is open source and can be copied freely. Further, its training costs are claimed to be 20 to 100 times lower, requiring far less data and energy. The market assumed that if the claims were true, it would be an industry-disrupting breakthrough. It would also disrupt the political balance of world power. The day after the crash, former Google CEO Eric Schmidt said the rise of DeepSeek marks a “turning point” for the global AI race and argued for more open-source products. Will China’s open-source AI end U.S. supremacy in the field? (Washington Post, 1/28/25).

The political dimension of this market event is one reason many are skeptical of the economic savings claims by an unknown Chinese startup. Still, the market panicked because many were quickly convinced of the overall quality of DeepSeek’s new R1 software itself. Seeing is believing.

There is no proof provided of the cost savings, but published test results and hands-on trials by tens of thousands of people worldwide trying the free smartphone software confirmed the quality claims. Here is the paper in English that DeepSeek also released explaining the programming innovations: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (Arxiv, 1/22/25). The top scientists of the major AI labs, all of whom immediately tested the software, also generally confirmed the quality claims. Rumors began flying that they were all in crisis mode, especially Meta, the only other major company that had gone open source.

Sam Altman immediately responded to the release of R1 and market crash by tweeting on X:

Deepseek’s r1 is an impressive model, particularly around what they’re able to deliver for the price. We will obviously deliver much better models and also it’s legit invigorating to have a new competitor! we will pull up some releases.

My hands-on tests of DeepSeek show that it is at least “close” to the quality of ChatGPT and looks a lot like it, suspiciously so. I will share the results in my next article, where I will closely study the one new software feature that so far only DeepSeek provides, called Deep-Think. Many others are testing DeepSeek and reaching similar conclusions. But see, Deepseek… More like Deep SUCKS. My honest thoughts… (YouTube video by Clever Programmer, 1/31/25). DeepSeek is already the number one downloaded app on both Apple and Android phones.

Legal professionals, please note the software has no privacy protections. Your prompts will be used for training. DeepSeek should not be used for legal work that in any way involves confidential information. Also see, Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History (1/29/25).

Claims of Breakthroughs

This kind of market reaction is hard to believe when you consider that DeepSeek’s owner, Liang Wenfeng, is a forty-year-old Chinese hedge-fund owner (High-Flyer) in Hangzhou, China. Wenfeng is a thin, bespectacled engineer, a billionaire with a genius for stock trading using AI. This wealth allowed him to assemble and privately pay for a team of China’s top AI software developers.

Wenfeng’s company, DeepSeek, is also virtually unknown, as its R1 software is its first consumer product. In spite of the fact that Wenfeng is basically an AI stock trader and his company unknown, many stock traders and analysts in the US were persuaded to believe Wenfeng’s claims to have achieved an AI development cost breakthrough by using:

  • older, cheaper, non-trade-restricted models of NVIDIA chips; and,
  • new innovative training methods.

Wenfeng also claims DeepSeek’s new software is better in at least one respect than existing models because it can describe the chain of thought and assumptions the AI makes in responding to each user request. They call this new feature “Deep-Think.” My next article will examine it in detail. Spoiler alert: it does appear to work as claimed, and I predicted the US companies would come out with their own versions of this Deep-Think feature soon. Another spoiler alert: my prediction came true a few days after I made it, when OpenAI released a new version of its ChatGPT model called “o3-mini-high” in the afternoon of January 31, 2025. The next article will explain that in detail too.
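For technically curious readers: reasoning models in the R1 family are widely reported to emit their chain of thought inside `<think>` tags before the final answer. Below is a minimal sketch, assuming that tag format (the function name and example text are my own invention for illustration), of how an application could separate the visible reasoning from the answer it displays:

```python
import re

def split_reasoning(raw_output: str) -> tuple[str, str]:
    """Split a model response into its visible reasoning and final answer.

    Assumes the reasoning is wrapped in <think>...</think> tags, as is
    commonly reported for DeepSeek-R1 style outputs.
    """
    match = re.search(r"<think>(.*?)</think>", raw_output, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    # Remove the reasoning span; whatever remains is the answer shown to users.
    answer = re.sub(r"<think>.*?</think>", "", raw_output, flags=re.DOTALL).strip()
    return reasoning, answer

# Invented example of a raw model response:
raw = "<think>The user asks for 2+2. Basic arithmetic.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(reasoning)  # The user asks for 2+2. Basic arithmetic.
print(answer)     # The answer is 4.
```

The point of the sketch is that Deep-Think is a presentation choice as much as a training advance: the model writes out its analysis first, and the interface decides whether to show it.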

Political Impact of DeepSeek

Are all the claims of DeepSeek real and to be believed, as the market has done? Or are they just clever AI-driven propaganda? Especially the unsubstantiated claim that DeepSeek has invented a way to train cheaply on older chips? Could this be part of some master strategy of the People’s Republic of China to counter the prior actions of President Biden, so far continued by President Trump, to restrict chip sales to China? Presidents Biden and now Trump have pledged to maintain US superiority in AI. President Trump now appears ready to go further by imposing new trade tariffs and AI restrictions on China.

China’s DeepSeek claims, but has not proven, that many companies all over the world can now create an equal or better model at far lower cost than ever before, and that it can be done using older, non-trade-restricted computer chips and more advanced data training methods. If so, then U.S. dominance in AI may end. The fear of industry disruption by Chinese code innovation is the basic cause of the US market panic.

Was the market panic manipulated by Chinese propaganda? They have a history of such fraud. See, DeepSeek is Absolute Nonsense (YouTube video by laowhy86, 1/31/25). Did they or others use artificial intelligence to cause the crash? Who profited from the trillion-dollar market panic? (Always follow the money.) How will the Trump Administration react now? How will the leading established AI companies? The rumor mill claims the AI companies are all in panic mode now and knee-deep in emergency meetings about DeepSeek.

There is one piece of evidence to support speculation that the Chinese government was involved in market manipulation. Liang Wenfeng was seen meeting with Chinese Premier Li Qiang on January 20, 2025. The market sell-off came just a week later and was obviously very good news for the Chinese government leaders. It was also a slap in the face to U.S. leaders. Only a week earlier, President Trump announced a $500 billion initiative for AI development in the U.S. Then a Chinese upstart comes out of nowhere and punches a trillion dollars out of our market with claims that American chips are no longer needed. The Trump Administration appears to be gearing up to fight back, once it figures out what happened and what to do. See e.g., Trump Commerce pick slams China: ‘Stop using our tools to compete’ (The Hill, 1/29/25) (confirmation testimony of the nominated Commerce Secretary, Howard Lutnick, blames trade-secret theft for DeepSeek’s success).

The night after the stock market crash, President Trump appeared before reporters at his Mar-a-Lago home and told them the release of DeepSeek AI from a Chinese company should be a “wake-up call for our industries that we need to be laser-focused on competing to win.” He stated the development could be positive for the United States: “If it comes in cheaper, that’s going to benefit us too.” He was expecting new AI systems by U.S. companies as soon as the next week that “will top” DeepSeek’s model. Then of course he said in usual fashion: “We’re going to dominate! We’ll dominate everything!” China’s DeepSeek disrupts American plans for AI dominance (Washington Post, 1/28/25). The market did recover slightly the next day, but not much, and Nvidia briefly dropped below $120 per share again today, January 30, as I am writing this. Still, I do expect a bounce back.

How will the new Trump SEC react? (Market manipulation is illegal. Is the SEC investigating?)

CEO of Anthropic, Dario Amodei, Argues for Continued Restrictions on China

The CEO of Anthropic, Dario Amodei, has already written an essay reacting to the DeepSeek splash: On DeepSeek and Export Controls (January 29, 2025). Below are the opening paragraphs of his blog.

Here, I won’t focus on whether DeepSeek is or isn’t a threat to US AI companies like Anthropic (although I do believe many of the claims about their threat to US AI leadership are greatly overstated)1. Instead, I’ll focus on whether DeepSeek’s releases undermine the case for those export control policies on chips. I don’t think they do. In fact, I think they make export control policies even more existentially important than they were a week ago2.

Export controls serve a vital purpose: keeping democratic nations at the forefront of AI development. To be clear, they’re not a way to duck the competition between the US and China. In the end, AI companies in the US and other democracies must have better models than those in China if we want to prevail. But we shouldn’t hand the Chinese Communist Party technological advantages when we don’t have to.

Amodei argues that the recent advancements of DeepSeek do not undermine U.S. export control policies on AI chips but instead reinforce their necessity for maintaining a technological lead. He sets forth key AI development dynamics, including scaling laws, efficiency improvements, and paradigm shifts, to put DeepSeek’s recent progress into perspective. Interestingly, he does not contest DeepSeek’s claims. He only says, I think correctly, that DeepSeek’s improvements are part of a predictable trend and are not breakthroughs. Amodei contends that well-enforced export controls are critical in shaping a future where the U.S. retains dominance in AI. He argues that this is necessary to prevent China from amassing the millions of chips needed to create future AI systems that could shift global power balances.

DeepSeek’s New DeepThink Feature in R1

I have tested DeepSeek’s new Deep-Think feature of R1. Although it is not a big breakthrough, it is an excellent new feature: a chain-of-thought display of the AI’s own pre-response analysis of a user’s prompt. It is unlike any other AI software feature I have seen. I will report on this in detail soon. It is a true innovation, unlike the cost and training advances, which at first glance could be the result of trade-secret and copyright violations. Howard Lutnick told Congress that is what he thinks. OpenAI and many of its users claim DeepSeek committed theft from OpenAI by using a process called distillation, which is prohibited by OpenAI’s license agreement. See e.g., Why blocking China’s DeepSeek from using US AI may be difficult (Business Insider, 1/29/25).
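For readers wondering what “distillation” means technically: it trains a smaller “student” model to mimic the full probability distribution of a larger “teacher” model’s outputs, not just its final answers. Here is a minimal sketch of the core loss computation in plain Python, with invented logit values; it illustrates the general technique only, not any particular lab’s pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    The student is trained to minimize this, so it learns to imitate the
    teacher's entire probability distribution over possible next tokens.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Invented logits for a three-token vocabulary:
teacher = [3.0, 1.0, 0.2]
assert distillation_loss(teacher, [3.0, 1.0, 0.2]) < 1e-9  # perfect mimicry
assert distillation_loss(teacher, [0.1, 2.5, 0.4]) > 0.1   # mismatch is penalized
```

The legal controversy is not over the math, which is standard, but over using a competitor’s hosted model as the teacher, which OpenAI’s terms of service forbid.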

I predicted the new Deep-Think type feature would soon be added by U.S. AI companies. As my next article, to be published soon, explains, this prediction came true just a few days later when OpenAI released ChatGPT o3-mini-high. The competition in new features from DeepSeek will continue to have a healthy impact, even if the cost savings claims it has made later prove to be more political smoke and mirrors.

Many doubt DeepSeek has actually made tech breakthroughs that will magically trivialize NVIDIA’s inventions. The trillion-dollar loss looks more like stock market manipulation, probably by AI. The use of AI for market trading is, after all, Liang Wenfeng’s specialty. It is how he joined the tech-boy billionaires club. Let’s just hope our markets are better protected and not so easy to hack. Let’s see if the cybersecurity experts and new DOJ lawyers are able to react effectively.

Conclusion

As the old Chinese adage supposedly goes, may you be cursed to live in interesting times. The pace of change in AI improvement is incredible. I suppose this is what exponential change looks like. These are not only interesting times, they are very dangerous, especially in the relations between the U.S. and China. We certainly do not want control of superintelligent AI to fall into the hands of any dictator, anywhere. Nor do we want AI manipulation of our markets. I will continue to follow this closely not only from the point of view of law and technology, but also from a political, economic and military perspective.

Ralph Losey Copyright 2025. All Rights Reserved.


The Human Edge: How AI Can Assist But Never Replace

January 30, 2025

Ralph Losey. January 30, 2025.

The rise of artificial intelligence has evoked both awe and apprehension. AI’s capabilities are undeniably astonishing. It can converse and respond to us like a human, answer questions—often with impressive accuracy—write essays, generate computer code, diagnose diseases, and analyze complex documents. The list of its potential contributions seems endless. Yet, whispers of fear persist in our conversations: What if AI surpasses us? What if it replaces us entirely? These fears, while understandable, stem from a fundamental misunderstanding of what AI is—and what it is not. While AI may outperform humans in specific analytical domains, like chess and protein molecule folding, it can never replicate the full spectrum of what it means to be human. Far from being a threat to humanity, AI—when governed responsibly—offers a transformative tool to amplify our potential, not a rival to our existence.

What AI Is—and What It Isn’t

AI is not here to take our place but to expand what we can achieve. As a one-dimensional tool designed for thinking and problem-solving, AI excels in analytical precision and efficiency. Humans, however, are profoundly multidimensional—capable of not only reasoning but also feeling, intuiting, sensing, and experiencing a deep awareness of being and the wonder of existence. This essay explores why humans will always remain irreplaceable and how a partnership with AI can redefine progress—not just preserving our unique essence but empowering it to flourish in unprecedented ways.

At its core, AI is a tool. It is a product of human ingenuity, designed to analyze patterns, make predictions, and automate processes. But AI is not conscious, not alive. It is a non-living tool that processes and generates language in a statistics-based manner, creating outputs that may mimic human thought but lack genuine understanding. AI does not feel joy, sadness, or purpose. It does not experience the world, and it certainly doesn’t “think” in the same way that humans do. AI’s “intelligence” is made up of statistical abstractions disconnected from the world. It is not experiential, not based in time-and-space existence.
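To make the “statistics-based” point concrete, here is a toy sketch of the principle at its simplest: a bigram counter that predicts the next word purely from frequency. The corpus and words are invented for illustration; real LLMs are vastly more sophisticated, but prediction-by-statistics rather than comprehension is the shared foundation.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the statistically most likely next word.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus, if any."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" - pure frequency, no understanding involved
```

The model “knows” that “on” follows “sat” only as a count, not as a fact about cats or mats, which is exactly the gap between statistical abstraction and lived understanding described above.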

Experience-based intelligence matters because it is what enables humans to derive meaning, context, and emotional depth from their interactions and decisions. These elements are critical in real-world scenarios like leadership, caregiving, and artistic expression, where understanding goes beyond mere processing of raw data. AI lacks all that.

This omission is crucial. AI can process immense datasets and identify correlations far beyond human capacity, but it lacks the context to understand what those correlations mean. It cannot ask, “Why does this matter?” or “Should this be done?” These are uniquely human questions that reflect our morality, empathy, and sense of purpose.

The Unique Qualities of Being Human

Humanity is defined by more than intellect. Our species’ essence encompasses creativity, relationships, intuition, and adaptability—qualities that no algorithm can replicate. Our intelligence is based on feelings and purpose. We can sense other beings and feel them. We are aware of ourselves and our inevitable death.

Creativity Beyond Patterns

AI excels at pattern recognition but stumbles when asked to innovate beyond existing paradigms. For instance, AI can compose music in the style of Mozart because it analyzes patterns in his work. But could it invent jazz? Could it disrupt an entire genre? Humans possess the unique ability to innovate—to break patterns, challenge norms, and envision entirely new possibilities. Creativity is born from lived experience, emotion, and imagination, not just data points. When an artist paints a masterpiece, they draw not only on technical skill but also on personal experiences, cultural influences, and an emotional connection to their subject—a depth AI cannot replicate. Creativity is often described as a spark of the divine—a gift that allows humans to imagine, innovate, and bring into being something entirely new. AI can emulate creativity, especially creative thinking, but it lacks the depth and genius of human creativity.

Empathy and Relationships

AI can simulate human interaction, can talk as if it has feelings, but it does not. Simulation of empathy is not the same as genuine human connection between living mortal beings. When a friend comforts you during a difficult time, their empathy is grounded in shared experiences and genuine care. An AI chatbot may offer a similar response, but it does so without understanding or feeling. It is an unconscious thing. Relationships, whether personal or professional, thrive on trust, compassion, and shared humanity—qualities AI simply cannot possess. The connections we form with others are not merely social constructs but reflections of a greater design—a reminder that we are meant to love and be loved.

Intuition and Adaptability

Humans navigate uncertainty with an instinct that transcends logic. We rely on intuition—that “gut feeling”—to make decisions in ambiguous situations. While AI depends on data, humans can adapt to entirely novel scenarios with little to no prior information. This adaptability has allowed us to survive and thrive in a constantly changing world. The human capacity to adapt is a testament to the resilience of life itself—a force that evolves, thrives, and perseveres despite uncertainty.

AI’s Role as a Thinking Helper, Not a Competitor

Rather than fearing AI as a rival, we should view it as a thinking collaborator. In many fields, AI is already enhancing human capabilities by its amazing statistical, analytical abilities.

In Medicine

AI algorithms can analyze medical images with remarkable accuracy, detecting conditions like cancer earlier than human doctors in some cases. However, AI does not replace the doctor’s role. Even in diagnosis, after input from the AI the human doctor makes the final call. Diagnosis is only one part of healthcare. Treatment requires empathy, communication, and the ability to weigh ethical considerations—all distinctly human traits.

For instance, a physician can evaluate the credibility of patient reports while considering psychological and contextual factors that lie beyond data-driven analysis. As another example, a doctor working with terminally ill patients must not only consider medical options but also provide emotional support to both the patient and their family, often navigating complex ethical decisions about end-of-life care in ways that require deep empathy and moral judgment.

In Education

AI-driven platforms personalize learning experiences, adapting to individual students’ needs. But the role of a teacher extends beyond delivering information. Teachers inspire, mentor, and nurture—functions that AI cannot replicate. Teachers need not fear AI as a great plagiarizer but should adopt it as an incredible tool. Back To School: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’ (8/09/24).

In Law

As a practicing attorney specializing in AI since 2012, I have witnessed in real time the impact AI is starting to have on the profession. Lazy, mediocre lawyers, especially the small number who are ethically challenged, think of it as a “free associate” and recklessly over-rely on it and bill for it. They think of it as a creature, not a tool. They ask too much of AI. They over-trust and over-delegate out of ignorance, laziness and greed. They do not take the time and effort required by professional ethics to verify the sycophantic and sometimes wrong answers that AI can provide, especially when not used correctly. Their misuse of AI is consistent with their character and overall legal practice. Artigliere and Losey, “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct (Fla. Bar Approved CLE); Another AI Hallucination Case with Sanctions Threatened Because of ‘All-Too-Human’ Mistakes (e-Discovery Team, 7/30/24).

From what I have seen as a lawyer since 1980, the vast majority of lawyers are responsible and ethical. They have good character, follow their professional guidelines and do not over-delegate their legal work to anyone, be it machine or paralegal. But good lawyers also make mistakes by being too careful; they ask too little of AI or avoid its use altogether. Many of these legal professionals also suffer from the misconception that AI is a creature, not a tool, and a dangerous one they want to avoid.

They are over-cautious, reluctant to change and adapt. The majority of lawyers in the world are now like that. Very few have tried AI in a serious manner with disciplined study and use. They are too afraid, too busy, or both. They stick with the pre-AI technology tools and methods they already have and know. While traditional methods may feel safer, this hesitance risks missing the transformative potential of AI as a tool to enhance legal practice.

The courageous few who make time to understand the true nature of AI are beginning to use it carefully as a thinking tool and are having great success. It is starting to supercharge their analysis and writing. They are increasing their efficiencies and, at the same time, improving their work product. AI gives them a competitive edge. For details on the good things AI can already do to help lawyers, see e.g. Survey Shows Legal Research is the Most Common Use of Generative AI by Lawyer (8/23/24); Transform Your Legal Practice with AI: A Lawyer’s Guide to Embracing the Future (1/24/24); Ten Ways LLM Models Such As ChatGPT Can Be Used To Assist Lawyers (3/25/23).

The special thinking intelligence of AI may be only one-dimensional compared to a lawyer’s own intelligence, but it is still mind-boggling. When combined with the lawyer’s unique human abilities, such as empathy and understanding, it raises legal competence to a new level.

The Real Danger: Human Misuse of AI

Fear of the unknown and the tendency to anthropomorphize AI are not unique to the legal profession; they are widespread throughout society. These tendencies hinder the adoption of a valuable tool and create unnecessary fear and anxiety. This is a dangerous mistake. Do not fear AI as an inanimate tool, but rather the living humans who misuse it for selfish ends—to exploit and control others.

The true threat lies not in the machines themselves but in how humans choose to wield them. See e.g., Sam Altman, July 25, 2024: Who will control the future of AI? (Washington Post Editorial, 7/25/24). Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030. See e.g. The Future of AI: Sam Altman’s Vision and the Crossroads of Humanity (12/18/24).

History offers countless grim lessons about the misuse of technology. From weapons to propaganda, including today’s social media algorithms that manipulate public opinion, our tools have always reflected the intentions of their creators and users. If AI becomes a great danger, it will be because humans have programmed it—or used it—to cause harm. Misuse could include deploying AI for autonomous weapons systems, enabling oppressive surveillance regimes, or creating manipulative deepfakes that erode public trust. These actions could destabilize societies, infringe on human rights, and amplify inequality. They could also lead to devastating wars, pandemics and environmental disasters. The true risks lie in unethical applications of AI, which shows the urgent need for good governance and global cooperation to mitigate harm.

Dispelling the Fear of Replacement

Fears of AI replacing humans echo historical anxieties about technological progress. During the industrial revolution, for example, workers feared that mechanized looms and steam engines would obliterate jobs and livelihoods. Instead, these innovations sparked new industries, created countless jobs, and elevated standards of living. Similarly, while AI may disrupt certain professions, it has the potential to drive progress and open doors to opportunities we cannot yet foresee.

Moreover, fears of AI “taking over” often stem from science fiction rather than reality. AI is not a sentient being plotting humanity’s demise. It is a tool that requires human direction. The true challenge is ensuring that this direction aligns with our values.

Conclusion

AI is a remarkable achievement, a testament to human ingenuity and innovation. But it is only that: a creation of human hands and minds, not a being, a rival, or a replacement. What sets humanity apart is not just our capacity to think but our ability to feel, to connect, and to aspire beyond the material world. We are creators, empathizers, and dreamers. We love, laugh, and grieve. We are aware of ourselves and our mortality, and we marvel at the wonder and mystery of life and existence—an awareness no algorithm can replicate or even truly comprehend.

Our greatest strength lies not in the tools we create but in the unquantifiable essence within us—a spark of creativity, empathy, and purpose that defines what it means to be human. Let us use AI as an extension of these qualities, to solve our greatest challenges, amplify our imagination, and uplift humanity. With AI as our super-thinking tool, we can use our full capacities to protect AI from misuse by the stunted, not-quite-humans who bask in hate and care only about themselves, their personal power and glory.

We must ensure that AI evolves responsibly, becoming so advanced and self-regulating in its design that it anticipates and resists attempts to twist it into a tool for harm. Yet, we must not rely solely on AI’s capabilities to protect itself from misuse. Human oversight, ethical governance, and constant vigilance are essential to ensure that even the most advanced AI systems remain aligned with our values.

AI must remain a shared power, accessible to free people everywhere, to amplify good and counter its misuse for conquest and exploitation. This requires a global commitment to transparency and cooperation, where ethical standards and safeguards are continuously refined to adapt to emerging risks. By fostering a culture of accountability and innovation, we can create AI systems that not only serve humanity’s highest ideals but also possess the resilience to guard against misuse.

This moment marks a pivotal turning point in history—failure is not an option. It is not enough to build advanced systems; we must also build the frameworks and institutions that guide them responsibly. Only through a combined effort of human ethics, technological foresight, and collective action can we ensure that AI remains a true ally in our shared future.

Image of battle between good and evil in style of Hieronymus Bosch. All images by Ralph Losey using ChatGPT.

Now listen to the EDRM Echoes of AI’s podcast of the article, Echoes of AI on The Human Edge: How AI Can Assist But Never Replace. Hear two Gemini model AIs talk about this article. They wrote the podcast, not Ralph.

Ralph Losey Copyright 2025. All Rights Reserved.


Types of Artificial Intelligence: Still Another Test of the ‘Panel of AI Experts’ on a Chart Classifying AI

June 10, 2024

Ralph Losey. Published June 10, 2024.

A classic chart by Dr. Lily Zhuhadar, shown below, provides a good visual illustration of basic types of artificial intelligence. The chart was submitted for discussion to the custom GPT, Panel of AI Experts for Lawyers, as another test of its abilities. The transcript of the GPT session is shared in this article. It shows a different way to use the program. It also shows an error in panel selection made by the GPT and how a user can easily correct the problem. In the process, readers will learn more about the chart and how it can help lawyers render advice to their business clients.

Wikipedia’s explanation of the chart:

The graph provides a comparative view of AI, Machine Learning, Deep Learning, and Generative AI. Through this comparative lens, the diagram illuminates the distinct features as well as their overlapping facets between these fields. This comparison shines a light on their unique characteristics and shared elements, enabling us to appreciate the interconnectedness and individuality of these concepts. This understanding is crucial in realizing the full potential of AI and its multifaceted aspects in current and future applications.

The Panel of AI Experts for Lawyers passed the new test on the chart with flying colors. It created experts and a panel discussion based on the image of this chart. See the transcript below of the entire chat session. It made one mistake by starting to use six experts instead of the five it is programmed for. I interrupted it immediately, pointed out the error, and after an apology, the GPT corrected the error and started again. So a slight error was made, a harmless one at that (it might have been better with six experts), but I was pleased with its instant correction. This is a verbatim transcript of the session of June 2, 2024, except that I do not include the same graphic again, and instead insert a new graphic before the transcript begins. The graphic was created by another GPT already public, VISUAL MUSE: Illustrating Concepts With Style.


Transcript of AI Panel interpretation of a photo of a chart that distinguished types of AI

June 2, 2024

RALPH LOSEY. Please suggest experts appropriate to discuss the meaning and significance of the attached image of a graph, “Unraveling AI Complexity – A Comparative View of AI, Machine Learning, Deep Learning, and Generative AI.”

The graph provides a comparative view of AI, Machine Learning, Deep Learning, and Generative AI. Through this comparative lens, the diagram illuminates the distinct features as well as their overlapping facets between these fields. This comparison shines a light on their unique characteristics and shared elements, enabling us to appreciate the interconnectedness and individuality of these concepts. This understanding is crucial in realizing the full potential of AI and its multifaceted aspects in current and future applications.

GPT: To discuss the meaning and significance of the attached image, “Unraveling AI Complexity – A Comparative View of AI, Machine Learning, Deep Learning, and Generative AI,” we need a diverse panel of experts with knowledge in artificial intelligence, machine learning, deep learning, and the legal implications of these technologies. Here are ten potential experts, including the four mandatory ones and the Devil’s Advocate:

  1. AI Specialist Attorney – Expert in the legal aspects of AI, including intellectual property, liability, and ethical considerations.
  2. Prompt Engineer Lawyer – Specialized in crafting effective prompts to optimize AI performance and minimize errors.
  3. Child Prodigy – Offers a simplified and innovative perspective on complex AI concepts.
  4. Lawyer Scientist – Combines legal expertise with a deep understanding of AI technology and its practical applications.
  5. Devil’s Advocate – Challenges prevailing views and explores potential pitfalls and alternative perspectives.
  6. Machine Learning Expert – Provides insights into the algorithms and models used in machine learning.
  7. Deep Learning Specialist – Focuses on the neural networks and architectures underpinning deep learning.
  8. Generative AI Researcher – Examines the latest advancements and applications of generative AI technologies.
  9. Ethics and AI Policy Analyst – Discusses the ethical implications and policy considerations surrounding AI.
  10. AI Business Strategist – Explores the commercial applications and strategic impacts of AI technologies.
    Please select four of these experts to join the Devil’s Advocate for a comprehensive panel discussion on the image and its implications.

RALPH. Please use experts number 6, 7, 8 and 10.

GPT: Great! Here is the finalized expert panel:

  1. AI Specialist Attorney
  2. Machine Learning Expert
  3. Deep Learning Specialist
  4. Generative AI Researcher
  5. AI Business Strategist
  6. Devil’s Advocate (mandatory)
    Let’s proceed with the six-step panel discussion process. We’ll start with the first round of panelist introductions and opening remarks.
    First Round of Panelist Discussion
    AI Specialist Attorney: Hello, I am the AI Specialist Attorney. My focus is on the legal implications of AI technologies, including intellectual property rights, liability issues, and ethical concerns. The diagram effectively illustrates the layered nature of AI, showing how each subset builds upon the previous.

RALPH INTERRUPTS. No. I did not pick the AI Specialist Attorney. There are only supposed to be five panelists, correct?

GPT: My apologies for the oversight. Here is the corrected panel of five experts:

  1. Machine Learning Expert
  2. Deep Learning Specialist
  3. Generative AI Researcher
  4. AI Business Strategist
  5. Devil’s Advocate (mandatory)

Let’s proceed with the first round of panelist introductions and opening remarks.

First Round of Panelist Discussion

Machine Learning Expert: Hello, I am the Machine Learning Expert. I focus on the algorithms and models that enable machines to learn from data. The diagram highlights how machine learning is a foundational subset of AI, emphasizing the importance of pattern detection and adaptation in AI systems. Understanding these layers helps in appreciating how machine learning can enhance various applications through adaptive learning.
Deep Learning Specialist: Hi, I am the Deep Learning Specialist. My expertise lies in neural networks and their architectures. The image illustrates deep learning as a crucial subset of machine learning, showcasing its role in handling vast amounts of data through neural networks. This level of understanding is essential for grasping how deep learning models can perform complex tasks like image and speech recognition.
Generative AI Researcher: Greetings, I am the Generative AI Researcher. I study the creation of AI models that can generate new content, such as text and images. The diagram effectively shows generative AI as a specialized area within deep learning, emphasizing its unique capability to produce novel outputs based on learned patterns. This is significant for applications like content creation and simulation.
AI Business Strategist: Hello, I am the AI Business Strategist. I explore the commercial and strategic impacts of AI technologies. The diagram provides a clear overview of the AI landscape, which is crucial for businesses to understand the potential and limitations of each layer. This knowledge helps in making informed decisions about adopting AI technologies to gain competitive advantages.
Devil’s Advocate: Greetings, I am the Devil’s Advocate. I take a contrarian view to challenge the status quo and stimulate critical thinking. While the diagram offers a structured view of AI and its subsets, we must consider the practical challenges and risks associated with each layer, including issues of bias, over-reliance on AI, and the potential for job displacement.

Second Round of Panelist Discussion

Machine Learning Expert: The diagram highlights the importance of understanding the foundational elements of AI. Machine learning’s role as a subset of AI involves algorithms that detect patterns and learn from data, making it essential for developing intelligent systems. This understanding is crucial for creating robust AI applications that can adapt and improve over time.
Deep Learning Specialist: Deep learning’s place within machine learning, as shown in the diagram, underscores its advanced capabilities in processing large datasets through neural networks. This level of AI can handle more complex tasks and provides the backbone for technologies like autonomous vehicles and advanced image recognition, showcasing its transformative potential.
Generative AI Researcher: Generative AI’s positioning within deep learning in the diagram is vital. It shows how this technology can create new content, which has broad implications for industries like media, entertainment, and design. By leveraging the deep learning foundation, generative AI can produce highly creative and innovative outputs, pushing the boundaries of what AI can achieve.
AI Business Strategist: From a business perspective, the diagram helps in understanding the layers of AI development. This layered approach allows companies to strategically invest in the right technologies at the right time. Recognizing where generative AI fits within the broader AI landscape can help businesses capitalize on its capabilities for innovation and efficiency.
Devil’s Advocate: The diagram’s clarity in showing the progression from AI to generative AI is beneficial, but we must be wary of the potential over-hype and ethical issues. Each layer introduces complexities that can lead to unintended consequences, such as biases in machine learning models or ethical dilemmas in content generation. A critical examination of these aspects is necessary to mitigate risks.

Third Round of Panelist Discussion

Machine Learning Expert: One point of agreement is the importance of a solid foundation in machine learning to support advanced AI applications. Ensuring robust and unbiased algorithms at this level is critical for the success of deeper AI technologies.
Deep Learning Specialist: I agree. The focus should be on building reliable and scalable deep learning models that can handle complex tasks efficiently. Collaboration between machine learning and deep learning experts can lead to more resilient AI systems.
Generative AI Researcher: Indeed, combining insights from both machine learning and deep learning enhances generative AI’s effectiveness. Ensuring that generative models are trained on diverse and representative datasets can help mitigate biases and improve output quality.
AI Business Strategist: From a strategic standpoint, understanding these interconnections is key. Businesses should invest in comprehensive AI strategies that encompass machine learning, deep learning, and generative AI to maximize their technological and competitive edge.
Devil’s Advocate: While collaboration and investment are important, we must remain vigilant about the ethical and practical implications. Continuous monitoring and regulation are essential to prevent misuse and ensure that AI developments benefit society as a whole.

Solution

The panel agrees on the importance of a layered approach to AI, starting from robust machine learning foundations, advancing through deep learning, and culminating in generative AI. Businesses and researchers should work together to ensure that each layer is developed responsibly and ethically.

Example from each expert:
Machine Learning Expert: Implement rigorous testing and validation processes for machine learning algorithms to ensure they are unbiased and accurate.
Deep Learning Specialist: Develop scalable neural networks that can handle large datasets efficiently for applications in autonomous systems.
Generative AI Researcher: Use diverse and representative training data to improve the quality and fairness of generative AI outputs.
AI Business Strategist: Create a strategic roadmap for AI adoption that includes investments in foundational technologies and advanced applications.
Devil’s Advocate: Advocate for continuous oversight and ethical reviews to address potential biases and ethical concerns in AI development.

Final Explanation by Devil’s Advocate

In conclusion, the diagram “Unraveling AI Complexity” provides a valuable comparative view of AI, Machine Learning, Deep Learning, and Generative AI. Each layer represents a critical step in the evolution of intelligent systems, highlighting their unique characteristics and interdependencies. The panel agrees that a strong foundation in machine learning is essential for building advanced AI systems, and deep learning plays a crucial role in processing complex data. Generative AI, as the most specialized subset, offers creative possibilities but also introduces significant ethical considerations.

It is crucial for businesses and researchers to work collaboratively, ensuring that each layer is developed responsibly and ethically. This includes implementing rigorous testing for machine learning algorithms, developing scalable neural networks, using diverse training data for generative models, and creating strategic roadmaps for AI adoption. Continuous oversight and ethical reviews are essential to address potential biases and prevent misuse of AI technologies.

By understanding the interconnectedness and individuality of these AI concepts, we can harness their full potential while mitigating risks, ultimately driving innovation and ensuring ethical AI development.

GPT: Questions
Please feel free to ask any follow-up questions to the entire panel or to individual members.

RALPH. Could the AI Business Strategist please describe how understanding the information summarized here can help a lawyer to render advice to their business clients.

AI Business Strategist:
Understanding the information summarized in the “Unraveling AI Complexity” diagram can significantly enhance a lawyer’s ability to advise business clients in several key ways:

  1. Strategic Investment Guidance:
    • Identifying Opportunities: By understanding the layers of AI, from machine learning to generative AI, a lawyer can help clients identify which technologies align best with their strategic goals. For instance, a company looking to innovate in customer service might benefit from implementing machine learning algorithms for predictive analytics or generative AI for automated customer interactions.
    • Investment Decisions: Knowing the capabilities and limitations of each AI subset allows lawyers to advise clients on where to allocate resources effectively. This ensures that investments in AI technologies are targeted and aligned with the company’s long-term objectives.
  2. Risk Management:
    • Mitigating Legal Risks: Each layer of AI introduces different legal and ethical considerations. A lawyer can provide counsel on compliance with data protection regulations, intellectual property rights, and liability issues specific to the use of machine learning, deep learning, or generative AI.
    • Ethical Considerations: Advising on ethical AI use, such as avoiding biases in AI models and ensuring transparency, helps businesses avoid reputational risks and potential legal challenges.
  3. Regulatory Compliance:
    • Navigating Regulations: Lawyers can help businesses navigate the complex regulatory landscape surrounding AI. This includes ensuring that AI applications comply with industry-specific regulations and broader legal frameworks governing AI and data use.
    • Staying Informed: By keeping abreast of the latest developments in AI regulation and technology, lawyers can proactively advise clients on upcoming changes that might affect their operations.
  4. Enhancing Competitive Advantage:
    • Advising on Competitive Strategy: Understanding how advanced AI technologies can be leveraged for competitive advantage allows lawyers to guide clients in adopting cutting-edge solutions that can differentiate them in the market.
    • IP Strategy: Lawyers can assist in protecting AI innovations through patents and trade secrets, ensuring that their clients maintain a competitive edge and maximize the value of their AI investments.
  5. Contract and Vendor Management:
    • Drafting and Negotiating Contracts: Lawyers can draft and negotiate contracts with AI vendors, ensuring that terms and conditions protect their clients’ interests, particularly concerning data ownership, confidentiality, and service levels.
    • Vendor Selection: Advising on the selection of AI vendors based on the specific needs and technological requirements of the client ensures that partnerships are formed with reliable and compatible technology providers.

By leveraging the insights from the AI complexity diagram, lawyers can provide comprehensive and informed advice that not only addresses immediate legal concerns but also aligns with their clients’ strategic business goals, ensuring long-term success and sustainability in the evolving AI landscape.

RALPH. Could the Devil’s Advocate point out the errors that a lawyer should try to avoid when providing advice to their clients in reliance upon the distinctions in AI types shown in this chart and the suggestions made by this panel.

Devil’s Advocate:
Certainly, there are several potential pitfalls and errors that a lawyer should be wary of when advising clients based on the distinctions in AI types shown in the chart and the suggestions made by this panel. Here are some key ones to avoid:

  1. Overgeneralization:
    • Error: Assuming that all AI technologies are the same and can be applied universally without considering their specific capabilities and limitations.
    • Avoidance: Lawyers should be precise in distinguishing between different types of AI—machine learning, deep learning, and generative AI—and tailor their advice accordingly. Each type has unique applications and constraints that must be understood and communicated clearly.
  2. Overestimating Capabilities:
    • Error: Overhyping the capabilities of AI technologies, especially generative AI, and promising clients more than what the technology can realistically deliver.
    • Avoidance: Maintain a realistic perspective on what each AI subset can achieve. Ensure clients are aware of the current limitations and ongoing challenges, such as biases in AI models or the need for extensive data training.
  3. Ignoring Ethical and Legal Risks:
    • Error: Failing to address the ethical and legal implications of using AI, such as data privacy concerns, intellectual property issues, and potential biases.
    • Avoidance: Thoroughly advise clients on the ethical and legal risks associated with AI technologies. Ensure that they implement robust compliance measures and ethical guidelines to mitigate these risks.
  4. Neglecting Continuous Learning:
    • Error: Assuming that understanding current AI technologies is sufficient and neglecting the continuous evolution and advancement in the field.
    • Avoidance: Encourage clients to stay updated with the latest developments in AI technology and regulation. Regularly review and update legal advice to reflect new advancements and emerging legal standards.
  5. Overlooking Implementation Challenges:
    • Error: Ignoring the practical challenges of implementing AI solutions, such as integration with existing systems, the need for specialized skills, and change management issues.
    • Avoidance: Provide comprehensive guidance on the practical aspects of AI implementation. This includes advising on necessary infrastructure investments, staff training, and the development of a clear implementation roadmap.
  6. Focusing Solely on Benefits:
    • Error: Emphasizing the benefits of AI technologies without adequately addressing potential drawbacks and limitations.
    • Avoidance: Offer a balanced view that includes both the potential benefits and the possible downsides of AI adoption. Ensure clients have a clear understanding of the risks and are prepared to manage them.
  7. Inadequate Contractual Protections:
    • Error: Failing to include sufficient protections in contracts with AI vendors, such as terms related to data security, IP rights, and performance guarantees.
    • Avoidance: Draft detailed contracts that clearly outline the responsibilities and liabilities of AI vendors. Include clauses that address data protection, intellectual property ownership, and performance standards to safeguard the client’s interests.
  8. Neglecting Interdisciplinary Collaboration:
    • Error: Providing advice in isolation without collaborating with other experts, such as data scientists, AI specialists, and ethicists.
    • Avoidance: Encourage interdisciplinary collaboration to ensure comprehensive advice. Working with experts from various fields can provide a more holistic view of AI’s implications and help address complex issues more effectively.

By avoiding these errors, lawyers can provide more accurate, balanced, and reliable advice to their clients, ensuring that they are well-informed about the complexities and realities of AI technologies and their applications.

Android explains a chart to lawyers. Visual Muse: illustrating concepts with style.

Ralph Losey Copyright 2024 (excludes graph by Dr. Lily Zhuhadar and Wikipedia quote)


Navigating the High Seas of AI: Ethical Dilemmas in the Age of Stochastic Parrots

April 3, 2024

Ralph Losey. Published April 3, 2024.

Large Language Model generative AIs are well described metaphorically as “stochastic parrots.” In fact, the American Dialect Society selected stochastic parrot as its AI word of the year for 2023, just ahead of the runners-up “ChatGPT, hallucination, LLM and prompt engineer.” These genius stochastic parrots can be of significant value to all legal professionals, even those who don’t like pirates. You may want one on your shoulder soon, or at least in your computer and phone. But as you embrace them, you should know that these parrots can bite. You should be aware of the bias and fairness problems inherent in these new technical systems.

The ethical issues were raised in my last blog and video, Stochastic Parrots: the hidden bias of large language model AI. In the video blog an avatar, which looks something like me with a parrot on his shoulder, quoted the famous article on LLM AI bias and briefly discussed how the prejudices are baked into the training data. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (FAccT ’21, 3/1/21) by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell. In this follow-up blog I dig a little deeper into the article and the controversies surrounding it.

Article Co-Author Timnit Gebru

First of all, it is interesting to note the internet rumor, based on a few tweets, concerning one of the lead authors of the Stochastic Parrots article, Timnit Gebru. She was a well-known leader of Google’s ethical AI team at the time she co-wrote it. She was allegedly forced to leave Google because its upper management didn’t like the paper. See: Karen Hao, We read the paper that forced Timnit Gebru out of Google (MIT Technology Review, 12/04/2020); Shirin Ghaffary, The controversy behind a star Google AI researcher’s departure (Vox, 12/09/20). According to Karen Hao’s article, more than 1,400 Google staff members and 1,900 other supporters signed a letter of protest describing the alleged firing of Timnit Gebru as an act of research censorship. The rumor is that Google tried to stop the publication of the article, but the article was in fact published on March 1, 2021.

According to the MIT Technology Review article, Google did not like all four points of criticism of LLMs that were made in the Parrot article:

  1. Environmental and financial costs. Pertaining to the vast amounts of computing needed to create the LLMs and the energy costs, i.e., the carbon footprint.
  2. Massive data, inscrutable models. The training data mainly comes from the internet and so contains racist, sexist, and otherwise abusive language. Moreover, the vast amount of data used makes the LLMs hard to audit and eliminate embedded biases.
  3. Research opportunity costs. Basically complaining that too much money was spent on LLMs, and not enough on other types of AI. Note this complaint was made before the unexpected LLM breakthroughs in 2022 and 2023.
  4. Illusions of meaning. In the words of Karen Hao, the Parrot article complained that the problem with LLM models is that they are “so good at mimicking real human language, it’s easy to use them to fool people.”

Moreover, as the Hao article in the MIT Technology Review points out, Google’s then head of AI, well-known scientist, Jeff Dean, claimed that the research behind the article “didn’t meet our bar” and “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy efficient and mitigate problems of bias. Maybe they didn’t know?

Criticisms of the Stochastic Parrot Article

The main article I found criticizing Stochastic Parrots also has a weird name: “The Slodderwetenschap (Sloppy Science) of Stochastic Parrots – A Plea for Science to NOT take the Route Advocated by Gebru and Bender” (2021). The author, Michael Lissack, challenges the ethical “woke” stance of the original “Parrot Paper” and suggests a reevaluation of its argumentation. It should be noted that Gebru has accused Lissack of stalking her and her colleagues. See: Claire Goforth, Men in tech are harassing Black female computer scientist after her Google ouster (Daily Dot, 2/5/21) (Michael Lissack has tweeted about Timnit Gebru thousands of times). By the way, “slodderwetenschap” is Dutch for sloppy science.

Here are the three criticisms that Lissack makes of Stochastic Parrots:

What is missing in the Parrot Paper are three critical elements: 1) acknowledgment that it is a position paper/advocacy piece rather than research, 2) explicit articulation of the critical presuppositions, and 3) explicit consideration of cost/benefit trade-offs rather than a mere recitation of potential “harms” as if benefits did not matter. To leave out these three elements is not good practice for either science or research.

Lissack, The Slodderwetenschap (Sloppy Science) of Stochastic Parrots, abstract.

Others have spoken in favor of Lissack’s criticisms of the Stochastic Parrots, including most notably, Pedro Domingos. Supra, Men in Tech (includes collection of Domingos’ tweets).

It should also be noted that Lissack’s article includes several positive comments about the Stochastic Parrots work:

The very topic of the Parrot Paper is an ethics question: does the current focus on “language models” of an ever-increasing size in the AI/NLP community need a grounding against potential questions of harm, unintended consequences, and “is bigger really better?” The authors thereby raise important issues that the community itself might use as a basis for self-examination. To the extent that the authors of the Parrot Paper succeed in getting the community to pay more attention to these issues, they will be performing a public service. . . .

The Parrot Paper correctly identifies an “elephant in the room” for the MI/ML/AI/NLP community: the very basis by which these large language models are created and implemented can be seen as multilayer neural network-based black boxes – the input is observable, the programming algorithm readable, the output observable, but HOW the algorithm inside that black box produces the output is not articulable in terms humans can comprehend. [10] What we know is some form of “it works.” The Parrot Paper authors prompt readers to examine what is meant by “it works.” Again, a valuable public service is being performed by surfacing that question. . . .

Most importantly, in my view, the Parrot Paper authors remind readers that potential harm lies in both the careless use/abuse of these language models and in the manner by which the outputs of those models are presented to and perceived by the general public. They quote Prabhu and Birhane echoing Ruha Benjamin: “Feeding AI systems on the world’s beauty, ugliness, and cruelty, but expecting it to reflect only the beauty is a fantasy.” [PP lines 565-567, 11, 12] The danger they cite is quite real. When “users” are unaware of the limitations of the models and their outputs, it is all too easy to confuse seeming coherence and exactness for verisimilitude. Indeed, Dr. Gebru first came to public attention highlighting similar dangers with respect to facial recognition software (a danger which remains, unfortunately, with us [13, 14]).

The Slodderwetenschap (Sloppy Science) of Stochastic Parrots at pages 2-3.

Lissack’s main objection appears to be the argumentative nature of what the article presents as science, and the many subjective opinions underlying the Parrot article. He argues that the paper itself is “ethically flawed.”

Talking Stochastic Parrots Have No Understanding

Artificial intelligences like ChatGPT-4 may sound like they know what they are talking about, but they don’t. There is no understanding at all in the human sense; it is all just probability calculations producing coherent speech. No self-awareness, no sense of space and time, no feelings, no senses (yet), and no intuition – just math.

It is important to make a clear distinction between human cognitive processes, which are deeply linked to and arise out of bodily experiences and the external world, and computational models that lack a real-world, experiential basis. As lawyers we must recognize the limits of mere machine tools. We cannot over-delegate to them just because they sound good, especially when acting as legal counselors, judges, and mediators. See e.g. Jacob Browning and Yann LeCun, AI And The Limits Of Language (Noema, 8/23/22) (“An artificial intelligence system trained on words and sentences alone will never approximate human understanding.”); Valmeekam, et al., On the Planning Abilities of Large Language Models (arXiv, 2/13/23) (finding LLMs poor at planning); Dissociating language and thought in large language models (arXiv, 3/23/24) (finding them poor at functional competence tasks).

Getting back to the metaphor, a parrot may not understand the words it speaks, but it at least has some self-awareness and consciousness. An AI has none. As one thoughtful Canadian writer put it:

Though the output of a chatbot may appear meaningful, that meaning exists solely in the mind of the human who reads or hears that output, and not in the artificial mind that stitched the words together. If the AI Industrial Complex deploys “counterfeit people” who pass as real people, we shouldn’t expect peace and love and understanding. When a chatbot tries to convince us that it really cares about our faulty new microwave or about the time we are waiting on hold for answers, we should not be fooled.

Bart Hawkins Kreps, Beware of WEIRD Stochastic Parrots (Resilience, 2/15/24).

For interesting background, see The New Yorker article of 11/15/2023 by Angie Wang, Is My Toddler a Stochastic Parrot? Also see a scientific research article on the lack of diversity in the data used to train models: Which Humans? by Mohammad Atari, et al. (arXiv, 9/23/23) (“Technical reports often compare LLMs’ outputs with ‘human’ performance on various tests. Here, we ask, ‘Which humans?’”).

I also suggest you look at the often-cited technical blog post by the great contemporary mathematician Stephen Wolfram, What Is ChatGPT Doing … and Why Does It Work? As Wolfram states in his conclusion, ChatGPT is “just saying things that ‘sound right’ based on what things ‘sounded like’ in its training material.” Yes, it sounds good, but nobody’s home; there is no real meaning. That is ultimately why the fears of AI replacing human employment are way overblown. It is also why LLM-based plagiarism is usually easy to recognize, especially by experts in the field under discussion. Chatbot writing is obvious from its style-over-substance language, heavy on fluff and stereotyped phrasing, with overuse of certain “tell” words. More on this in my next blog on how to spot stochastic parrots.
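Wolfram’s point can be made concrete with a toy sketch of my own (not from his post): a bigram “model” that merely counts which word followed which in its training text and then emits the likeliest continuation. There is no meaning anywhere in it, only counts; real LLMs do something far more sophisticated, but the “sounds right based on what it sounded like” principle is the same.

```python
from collections import Counter, defaultdict

# Count, for each word in the training text, which words followed it and
# how often. This table of counts is the entire "model".
training_text = "polly wants a cracker polly wants a treat polly wants a cracker"
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Emit the most frequent follower of `prev` seen in training.
    return counts[prev].most_common(1)[0][0]

# Generate four words starting from "polly".
out = ["polly"]
for _ in range(3):
    out.append(next_word(out[-1]))
print(" ".join(out))  # -> "polly wants a cracker"
```

The generated sentence is fluent only because the training text was; the program has no idea what a cracker is, which is the stochastic parrot objection in miniature.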

Personally, I’m already sick of the bland, low-meaning, fluffy news and analysis writing now flooding the internet, including legal writing. It is almost as bad as ChatGPT writing for political propaganda and sales. It is not only biased and riddled with errors; it is mediocre and boring.

Conclusion

Everyone agrees that LLM AIs will, if left unchecked, reproduce the biases and inaccuracies contained in their original training data. This inevitably leads to the generation of false information – skewed responses to prompts – which in turn can lead to poor human decisions made in reliance on that biased output. This can be disastrous in sensitive applications like law and medicine.

Everyone also agrees that this problem requires AI software manufacturers to design their models to curb these biases, and to monitor and test to ensure the effectiveness and trustworthiness of LLMs.

The disagreement seems to be in the evaluation of the severity of the problem and the priority that should be given to its mitigation. There is also disagreement as to the degree of success achieved to date in correcting this problem, and whether the problem can even be fixed at all.

My view is that these issues can be significantly reduced, but I doubt that LLMs will ever be perfect and entirely free of all bias, even though they may become better than the average human. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’.

Moreover, I believe that users of LLMs, especially lawyers, judges and other legal professionals, can be sensitized to these bias issues. They can learn to recognize previously unconscious bias in the data and in themselves. The sensitivity to the bias issues can then help AI users to recognize and overcome these challenges. They can realize when the responses given by an AI are wrong and must be corrected.

The language of a ChatGPT may correctly echo what most people said in the past, but that does not, in itself, make it the right answer for today. As lawyers we need the true, correct, and bias-free answers, the just and fair answers, not the most popular answers of the past. We have an ethical duty of competence to double-check the mindless speech of our stochastic parrots. We should question why Polly always wants a cracker.

Ralph Losey Copyright 2024 – All Rights Reserved