2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law

December 31, 2025

Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI and remain in control of our minds, our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared by past legal training and skeptical attitude for this new type of human training. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can enhance by total entanglement with AI and not lose control or separate identity. Click here or the image to see video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That put the human legal minds in control and stops the hallucinations in what I called “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? The AI abilities are improving quickly but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While the quantum future dangers gestated, AI suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed and the market quickly rebounded and the political, military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examine the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the Peoples Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2024 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.

IV. Bench Testing the AI models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even AI companies themselves. So I wrote seven articles in 2025 concerning benchmark type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o.  March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The test concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, there have been many models released since then May 2025 and so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.

Aside for legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in my June and July as part of my release the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One and Part Two and Part Three .Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge

This epiphany lead to investigation of the use of advanced mathematics with AI’s help to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical we utilized topology to do what linear logic could not: mathematically exonerate the innocent parties. The topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.

My “Human Edge” article buttressed the need for keeping a human in control. I wrote this in January 2025 and it remains a persona favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool My ‘Human Edge’ article buttressed the need for keeping a human in control… AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace.” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


AIs Debate and Discuss My Last Article – “Cross-Examine Your AI” – and then a Podcast, a Slide Deck, Infographic and a Video. GIFTS FOR YOU!

December 22, 2025

Ralph Losey, December 22, 2025

Google AI Adds to My Last Article

I used Google’s NotebookLM to analyze my last article, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. I started with the debate feature, where two AIs have a respectful argument about whatever source material you provide, here my article. The debate turned out very well (see below). The two debating AI personas made some very interesting points. The analysis was good and hallucination free.

Then just a few prompts and a half-hour later, Google’s NotebookLM had made a Podcast, a Slide Deck, a Video and a terrific Infographic. NotebookLM can also make expanding mind-maps, reports, quizzes, and even study flash-cards, all based on the source material. So easy, it seems only right that I make them available to readers to use, if they wish, in their own teaching efforts for whatever legal related group they are in. So please take this blog as a small give-away.

A humanoid robot dressed in a Santa outfit, holding a stack of colorful wrapped gifts in front of a decorated Christmas tree and fireplace.
Image by Losey using Google’s ‘Nano Banana Pro’ – Click here for short animation on YouTube.

AI Debate

The back-and-forth argument in this NotebookLM creation lasts 16 minutes, makes you think, and may even help you to talk about these ideas with your colleagues.

A podcast promotional image featuring two individuals debating the importance of cross-examination in controlling AI hallucinations, with the title 'Echoes of AI' displayed prominently.
Click here to listen to the debate

AI Podcast

I also liked the podcast created by NotebookLM with direction and verification on my part. The AI write the words, no time. It seems accurate to me and certainly has no hallucinations. Again, it is a fun listen and comes in at only only 12.5 minutes. These AIs are good at both analysis and persuasion.

Illustration for the podcast 'Echoes of AI' featuring two AI podcasters, with a digital background and details about the episode's topic and host.
Click here to hear the podcast

AI Slide Deck

If that were not enough, NotebookLM AI also made a 14-slide deck to present the article. The only problem is that it generated a PDF file, not a powerpoint format. Proprietary issues. Still, pretty good content. See below.

AI Video

They also made a video, see below and click here for the same video on YouTube. It is just under seven minutes and has been verified and approved, except for its discussion of the Park v. Kim, case, which it misunderstood and yes, hallucinated the holding at 1:38-1:44. The Google NotebookLM AI said that the appeal was dismissed due to AI fabricated cases, whereas, in fact, the appeal upheld the lower court’s dismissal because of AI fabricated cases filed in the lower court.

Rereading the article it is easy to see how Google’s AI made that mistake. Oh, and to prove how carefully I checked the work, the AI misspelled “cross-examined” at 6:48 in the video: it only used one “s” i.w. – cros-examined (horrors). If I missed anything else, please let me know. I’m only human.

Except for that error, the movie was excellent, with great graphics and dialogue. I especially liked this illustration of the falling house of cards to show the fragility of AI’s reasoning when it fabricates. I wish I had thought of that image.

Illustration contrasting a collapsing house of cards on the left, symbolizing fragility, with a solid castle on the right, representing stability.
Screenshot of one of the images in the video at 4:49

Even though the video was better than I could have created, and took the NotebookLM AI only a minute to create, the mistakes in the video show that we humans still have a role to play. Plus, do not forget, the AI was illustrating and explaining my idea, my article; although admittedly another AI, ChatGPT-5.2, helped me to write the article. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations.

My conclusion, go ahead and work with them, supervise carefull, and fix their mistakes. If you follow that kind of skeptical hybrid method, they can be good helpers. The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (e-Discovery Team, 12/01/25).

Here is the video:

Click here to watch the video on YouTube

Invitation to use these teaching materials.

Anyone is welcome to download and use the slide deck, the article itself, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations, the audio podcast, the debate, the infographic and the video to help them make a presentation on the use of AI. The permission is limited to educational or edutainment use only. Please do not change the article or audio content. But, as to the fourteen slides, feel free to change them as needed. They seem too wordy to me, but I like the images. If you use the video, serve popcorn; that way you can get folks to show-up. It might be fun to challenge your colleagues to detect the small hallucination the video contains. Even if they have read my article, I bet many will still not detect the small error.

Here is the infographic.

An infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations,' illustrating a professional protocol for legal professionals to verify AI-generated content and avoid liability. It includes sections on the issues of unchecked AI, a documented global issue, and a three-phase protocol: Prepare, Interrogate, and Verify.
Infographic by NotebookLM of my article. Click here to download the full size image.

Ralph Losey Copyright 2025 — All Rights Reserved, except as expressly noted.


Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations

December 17, 2025

Ralph Losey, December 17, 2025

I. Introduction: The Untested Expert in Your Office

AI walks into your office like a consulting expert who works fast, inexpensively, and speaks with knowing confidence. And, like any untested expert, is capable of being spectacularly wrong. Still, try AI out, just be sure to cross-examine it before using the work-product. This article will show you how.

A friendly-looking robot with a white exterior and glowing blue eyes, set against a wooden background. The robot has a broad smile and a tagline that reads, 'AI is only too happy to please.'
Want AI to do legal research? Find a great case on point? Beware: any ‘Uncrossed AI’ might happily make one up for you. [All images in this article by Ralph Losey using AI tools.]

Lawyers are discovering AI hallucinations the hard way. Courts are sanctioning attorneys who accept AI’s answers at face value and paste them into briefs without a single skeptical question. In the first, Mata v. Avianca, Inc., a lawyer submitted a brief filled with invented cases that looked plausible but did not exist. The judge did not blame the machine. The judge blamed the lawyer. In Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024), the Second Circuit again confronted AI-generated citations that dissolved under scrutiny. Case dismissed. French legal scholar Damien Charlotin has catalogued almost seven hundred similar decisions worldwide in his AI Hallucination Cases project. The pattern is the same: the lawyer treated AI’s private, untested opinion as if it were ready for court. It wasn’t. It never is.

A holographic figure resembling a consultant sits at a table with two lawyers, one male and one female, who appear to be observing the figure's briefcase labeled 'ANSWERS.' Books are placed on the table.
Never accept research or opinions before you skeptically cross-examine the AI.

The solution is not fear or avoidance. It is preparation. Think of AI the way you think of an expert you are preparing to testify. You probe their reasoning. You make sure they are not simply trying to agree with you. You examine their assumptions. You confirm that every conclusion has a basis you can defend. When you apply that same discipline to AI — simple, structured, lawyerly questioning — the hallucinations fall away and the real value emerges.

This article is not about trials. It is about applying cross-examination instincts in the office to control a powerful, fast-talking, low-budget consulting expert who lives in your laptop.

Click here to see video on YouTube of Losey’s encounters with unprepared AIs.

II. AI as Consulting Expert and Testifying Expert: A Hybrid Metaphor That Works

Experienced litigators understand the difference between a consulting expert and a testifying expert. A consulting expert works in private. You explore theories. You stress-test ideas. The expert can make mistakes, change positions, or tell you that your theory is weak. None of it harms the case because none of it leaves the room. It is not discoverable.

Once you convert that same person into a testifying expert, everything changes. Their methodology must be clear. Their assumptions must be sound. Their sources must be disclosed. Their opinions must withstand cross-examination. Their credibility must be earned. Discovery of them is open subject to minor restraints.

AI Should always start as a secret consulting expert. It answers privately, often brilliantly, sometimes sloppily, and occasionally with complete fabrications. But the moment you rely on its words in a brief, a declaration, a demand letter, a discovery response, or a client advisory, you have promoted that consulting expert to a testifying one. Judges and opposing counsel will evaluate its work that way — even if you didn’t.

This hybrid metaphor — part expert preparation, part cross-examination — is the most accurate way to understand AI in legal practice. It gives you a familiar, legally sound framework for interrogating AI before staking your reputation on its output.

A lawyer seated at a desk reading documents, with a holographic figure representing AI or an expert consultant displayed next to him.
Working with AI and carefully examining its early drafts.

III. Why Lawyers Fear AI Today: The Hallucination Problem Is Real, but Preventable

AI hallucinations sound exotic, but they are neither mysterious nor unpredictable. They arise from familiar causes:

Anyone who has ever supervised an over-confident junior associate will recognize these patterns or response. Ask vague questions and reward polished answers, and you will get polished answers whether they are correct or not.

The problem is not that AI hallucinates. The problem is that lawyers forget to interrogate the hallucination before adopting it.

Never rely on an AI that has not been cross examined.

Both lawyer and judicial frustration is mounting. Charlotin’s global hallucination database reads like a catalogue of avoidable errors. Lawyers cite nonexistent cases, rely on invented quotations, or submit timelines that collapse the moment a judge asks a basic question. Courts have stopped treating these problems as innocent misunderstandings about new technology. Increasingly, they see them as failures of competence and diligence.

The encouraging news is that hallucinations collapse under even moderate questioning. AI improvises confidently in silence. It becomes accurate under pressure.

That pressure is supplied by cross-examination.

A female business professional discussing strategies with a humanoid robot in a modern office setting, displaying the text 'PREPARE INTERROGATE VERIFY' on a screen in the background.
Team approach to AI prep works well, including other AIs.

IV. Five Cross-Examination Techniques for AI

The techniques below are adapted from how lawyers question both their own experts and adverse ones. They require no technical training. They rely entirely on skills lawyers already use: asking clear questions, demanding reasoning, exposing assumptions, and verifying claims.

The five techniques are:

  1. Ask for the basis of the opinion.
  2. Probe uncertainty and limits.
  3. Present the opposing argument.
  4. Test internal consistency.
  5. Build a verification pathway.

Each can be implemented through simple, repeatable prompts.

A woman in a business suit stands confidently in a courtroom-like setting, pointing with one finger while holding a tablet. Next to her is a humanoid robot. A large sign in the background displays the words 'BASIS', 'UNCERTAINTY', 'OPPOSING', 'CONSISTENCY', and 'VERIFY'. Sky-high view of city buildings is visible through the window.
Click to see YouTube video of this associate’s presentation to partners of the AI cross-exam.

1. Ask for the Basis of the Opinion

AI developers use the word “mechanism.” Lawyers use reasoning, methodology, procedure, or logic. Whatever the label, you need to know how the model reached its conclusion.

Instead of asking, “What’s the law on negligent misrepresentation in Florida?” ask:

“Walk me through your reasoning step by step. List the elements, the leading cases, and the authorities you are relying on. For each step, explain why the case applies.”

This produces a reasoning ladder rather than a polished paragraph. You can inspect the rungs and see where the structure holds or collapses.

Ask AI explicitly to:

  • identify each reasoning step
  • list assumptions about facts or law
  • cite authorities for each step
  • rate confidence in each part of the analysis

If the reasoning chain buckles, the hallucination reveals itself.

A lawyer in a suit examining a transparent, futuristic humanoid robot's head with a flashlight in a library setting.
Click here for short YouTube video animation about reasoning cross.

2. Probe Uncertainty and Limits

AI tries to be helpful and agreeable. It will give you certainty, even though it is fake. The original AI training data from the Internet never said, “I don’t know the answer.” So now you have to train your AI in prompts and project instructions to admit it does not know. You must demand honesty. You must demand truth over agreement with your own thoughts and desires. Repeatedly specify to AI in instructions to admit when it does not know the answer, or is uncertain. Get it to explain to you what is does not know; to explain what it cannot provide citations to support. Get it to reveal the unknowns.

A friendly robot with a smile sitting at a desk with a computer keyboard, in front of two screens displaying error messages '404 ANSWER NOT FOUND' and 'ANSWERS NOT FOUND.' The robot appears to be ready to improvise.
Most AIs do not like to admit they don’t know. Do you?

Ask your AI:

  • “What do you not know that might affect this conclusion?”
  • “What facts would change your analysis?”
  • “Which part of your reasoning is weakest?”
  • “Which assumptions are unstated or speculative?”

Good human experts do this instinctively. They mark the edges of their expertise. AI will also do it, but only when asked.

A man in a suit stands in a courtroom, holding a tablet and speaking confidently, with a holographic display of connected data points in the background.
Click here for YouTube animation of AI cross of its unknowns.

3. Present the Opposing Argument

If you only ask, “Why am I right?” AI will gladly tell you why you are right. Sycophantism is one of its worst habits.

Counteract that by assigning it the opposing role:

  • “Give me the strongest argument against your conclusion.”
  • “How would opposing counsel attack this reasoning?”
  • “What weaknesses in my theory would they highlight?”

This is the same preparation you would do with a human expert before deposition: expose vulnerabilities privately so they do not explode publicly.

A lawyer in a formal suit stands in a courtroom, examining a holographic chessboard with blue and orange outlines representing opposing arguments.
Quality control by counter-arguments. Click here for short YouTube animation.

4. Test Internal Consistency

Hallucinations are brittle. Real reasoning is sturdy.

You expose the difference by asking the model to repeat or restructure its own analysis.

  • “Restate your answer using a different structure.”
  • Summarize your prior answer in three bullet points and identify inconsistencies.”
  • “Explain your earlier analysis focusing only on law; now do the same focusing only on facts.”

If the second answer contradicts the first, you know the foundation is weak.

This is impeachment in the office, not in the courtroom.

A digitally created robot face divided in half, with one side featuring cool metallic tones and glowing blue elements, and the other side displaying warmer hues with a glowing red effect.
Click here for YouTube animation on contradictions.

5. Build a Verification Pathway

Hallucinations survive only when no one checks the sources.

Verification destroys them.

Always:

  • read every case AI cites and make sure the court cited actually issued the opinion (of course, also check case history to verify it is still good law)
  • confirm that the quotations appear in the opinion (sometime small errors creep in)
  • check jurisdiction, posture, and relevance (normal lawyer or paralegal analysis)
  • verify every critical factual claim and legal conclusion

This is not “extra work” created by AI. It is the same work lawyers owe courts and clients. The difference is simply that AI can produce polished nonsense faster than a junior associate. Overall, after you learn the AI testing skills, the time and money saved will be significant. This associate practically works for free with no breaks for sleep, much less food or coffee.

Your job is to slow it down. Turn it off while you check its work.

An older man in a suit sits at a table, writing notes on a document, while a humanoid robot with blue eyes sits beside him in a professional setting.
Always carefully check the work of your AIs.

V. How Cross-Examination Dramatically Reduces Hallucinations

Cross-examination is not merely a metaphor here. It is the mechanism — in the lawyer’s meaning of the word — that exposes fabrication and reveals truth.

Consider three realistic hypotheticals.

1. E-Discovery Misfire

AI says a custodian likely has “no relevant emails” based on role assumptions.

You ask: “List the assumptions you relied on.”

It admits it is basing its view on a generic corporate structure.

You know this company uses engineers in customer-facing negotiations.

Hallucination avoided.

2. Employment Retaliation Timeline

AI produces a clean timeline that looks authoritative.

You ask: “Which dates are certain and which were inferred?”

AI discloses that it guessed the order of two meetings because the record was ambiguous.

You go back to the documents.

Hallucination avoided.

3. Contract Interpretation

AI asserts that Paragraph 14 controls termination rights.

You ask: “Show me the exact language you relied on and identify any amendments that affect it.”

It re-reads the contract and reverses itself.

Hallucination avoided.

The common thread: pressure reveals quality.

Without pressure, hallucinations pass for analysis.

A businessman in a suit points at a digital display showing a timeline with events and an inconsistency highlighted in red, seated next to a humanoid robot on a table with a laptop.
Work closely with your AI to improve and verify its output.

VI. Why Litigators Have a Natural Advantage — And How Everyone Else Can Learn

Litigators instinctively challenge statements. They distrust unearned confidence. They ask what assumptions lie beneath a conclusion. They know how experts wilt when they cannot defend their methodology.

But adversarial reasoning is not limited to courtrooms. Transactional lawyers use it in negotiations. In-house lawyers use it in risk assessments. Judges use it in weighing credibility. Paralegals and case managers use it in preparing witnesses and assembling factual narratives.

Anyone in the legal profession can practice:

  • asking short, precise questions
  • demanding reasoning, not just conclusions
  • exploring alternative explanations
  • surfacing uncertainty
  • checking for consistency

Cross-examining AI is not a trial skill. It is a thinking skill — one shared across the profession.

A business meeting in an office featuring a woman in a suit presenting to a robot resembling Iron Man, while a man in a suit sits at a laptop, with a display showing academic citations and data in the background.
Thinking like a lawyer is a prerequisite for AI training; be skeptical and objective.

VII. The Lawyer’s Advantage Over AI

AI is inexpensive, fast, tireless, and deeply cross-disciplinary. It can outline arguments, summarize thousands of pages, and identify patterns across cases at a speed humans cannot match. It never complains about deadlines and never asks for a retainer.

Human experts outperform AI when judgment, nuance, emotional intelligence, or domain mastery are decisive. But those experts are not available for every issue in every matter.

AI provides breadth. Lawyers provide judgment.

AI provides speed. Lawyers provide skepticism.

AI provides possibilities. Lawyers decide what is real.

Properly interrogated, AI becomes a force multiplier for the profession.

Uninterrogated, it becomes a liability.

A professional meeting room with a diverse group of lawyers and a robot figure. The human leader gestures confidently while presenting. A screen behind them displays phrases like 'Challenge assumptions,' 'Expose weak logic,' and 'Ask better questions.'
Good lawyers challenge and refine their AI output.

VIII. Courts Expect Verification — And They Are Right

Judges are not asking lawyers to become engineers or to audit model weights. They are asking lawyers to verify their work.

In hallucination sanction cases, courts ask basic questions:

  • Did you read the cases before citing them?
  • Did you confirm that the case exists in any reporter?
  • Did you verify the quotations?
  • Did you investigate after concerns were raised?

When the answer is no, blame falls on the lawyer, not on the software.

Verification is the heart of legal practice.

It just takes a few minutes to spot and correct the hallucinated cases. The AI needs your help.

IX. Practical Protocol: How to Cross-Examine Your AI Before You Rely on It

A reliable process helps prevent mistakes. Here is a simple, repeatable, three-phase protocol.

Phase 1: Prepare

  1. Clarify the task.

Ask narrow, jurisdiction-specific, time-anchored questions.

  1. Provide context.

Give procedural posture, factual background, and applicable law.

  1. Request reasoning and sources up front.

Tell AI you will be reviewing the foundation.

Phase 2: Interrogate

  1. Ask for step-by-step reasoning.
  2. Probe what the model does not know.
  3. Have it argue the opposite side.
  4. Ask for the analysis again, in a different structure.

This phase mimics preparing your own expert — in private.

Phase 3: Verify

  1. Check every case in a trusted database.
  2. Confirm factual claims against your own record.
  3. Decide consciously which parts to adopt, revise, or discard.

Do all this and if a judge or client later asks, “What did you do to verify this?” – you have a real answer.

Business meeting involving a lawyer presenting to a man and a humanoid robot, with a digital presentation on a screen that includes flowchart-style prompts.
It takes some training and experience, but keeping your AI under control is really not that hard.

X. The Positive Side: AI Becomes Powerful After Cross-Examination

Once you adopt this posture, AI becomes far less dangerous and far more valuable.

When you know you can expose hallucinations with a few well-crafted questions, you stop fearing the tool. You start seeing it as an idea generator, a drafting assistant, a logic checker, and even a sparring partner. It shows you the shape of opposing arguments. It reveals where your theory is vulnerable. It highlights ambiguities you had overlooked.

Cross-examination does not weaken AI.

It strengthens the partnership between human lawyer and machine.

A lawyer and a humanoid robot stand together in a courtroom, representing a blend of human expertise and artificial intelligence in legal practice.
Click here for video animation on YouTube.

XI. Conclusion: The Return of the Lawyer

Cross-examining your AI is not a theatrical performance. It is the methodical preparation that seasoned litigators use whenever they evaluate expert opinions. When you ask AI for its basis, test alternative explanations, probe uncertainty, check consistency, and verify its claims, you transform raw guesses into analysis that can withstand scrutiny.

Two professionals interacting with a futuristic robot in an office setting, analyzing a digital display that highlights the concept of 'Inference Gap Needs Judgment' amidst various data points and inferences.
Complex assignments always take more time but the improved quality AI can bring is well worth it.

Courts are no longer forgiving lawyers who fall for a sycophantic AI and skip this step. But they respect lawyers who demonstrate skeptical, adversarial reasoning — the kind that prevents hallucinations, avoids sanctions, and earns judicial confidence. More importantly, this discipline unlocks AI’s real advantages: speed, breadth, creativity, and cross-disciplinary insight.

The cure for hallucinations is not technical.

It is skeptical, adversarial reasoning.

Cross-examine first. Rely second.

That is how AI becomes a trustworthy partner in modern practice.

See the animation of our goodbye summary on the YouTube video. Click here.

Ralph Losey Copyright 2025 — All Rights Reserved


The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%

December 1, 2025

Ralph Losey, December 1, 2025 (25 minute read)

For years, technologists have promised that fully autonomous AI Agents were just around the corner, always one release away, always about to replace entire categories of work. Then Stanford and Carnegie Mellon opened the box and observed the Agents directly. Like Schrödinger’s cat, the dream of flawless autonomy did not survive the measurement.

An artistic representation of a robot emerging from an open box, with digital particles dispersing away from it, symbolizing the concept of AI and technology.
Observation reveals fragile AI Agents. All images in this article are by Ralph Losey using various AI tools.

What did survive was something far more practical: hybrid human–AI teaming, which outperformed autonomous Agents by a decisive 68.7%. If you care about accuracy, ethics, or your professional license, this is the part of the AI story you need to understand.

A digital graphic showing a bar chart representing 68.7% performance improvement, set against a blue background with circuit-like patterns.
Humans can work much better if augmented by AI Agents but the Agents alone fail fast.

1. Introduction to the New Study by Carnegie Mellon and Stanford

The Mellon/Stanford report is important to anyone trying to integrate AI into workflows. Wang, Shao, Shaikh, Fried, Neubig, Yang, How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations (arXiv, 11/06/25, v.2) (“Mellon/Stanford Study” or just “Study”).

Just to be clear what we mean here by AI Agent, Wikipedia provides a generally accepted defination of an Agent as “an entity that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through machine learning or by acquiring knowledge.”

So, you see most everyone thinks of AI Agents and autonomy as synonymous. The Study bursts that bubble. It shows that Agents today need a fair amount of human guidance to be effective and fail too often, and too fast without it.

A split-image illustration contrasting the 'fantasy' of a futuristic, human-like robot on the left with the 'reality' of a more cartoonish robot struggling with an error on the right. The left side features a sleek, metallic robot, while the right side depicts a confused robot holding a document with an error message, emphasizing the challenges faced by AI.
This is the real world of AI Agents that we live in today.

The Study Introduction (citations omitted) begins this way:

AI agents are increasingly developed to perform tasks traditionally carried out by human workers as reflected in the growing competence of computer-use agents in work-related tasks such as software engineering and writing. Nonetheless, they still face challenges in many scenarios such as basic administrative or open-ended design tasks, sometimes creating a gap between expectations and reality in agent capabilities to perform real-world work.

To further improve agents’ utility at such tasks, we argue that it is necessary to look beyond their end-task outcome evaluation as measured in existing studies and investigate how agents currently perform human work — understanding their underlying workflows to gain deeper insights into their work process, especially how it aligns or diverges from human workers, to reveal the distinct strengths and limitations between them. Therefore, such an analysis should not benchmark agents in isolation, but rather be grounded in comparative studies of human and agent workflows.

A group of professionals and humanoid robots collaborating at a modern workspace, discussing data displayed on screens.
Studying AI and Human workflows to evaluate AI Agent performance.

2. More Detail on the Study: What the researchers did and found

Scope & setup. The Carnegie/Stanford team compared the work of 48 qualified human professionals with four AI agent frameworks. The software included stand-alone ChatGPT-based agents (version four series) and software code-writing agent platforms like OpenHands, also using ChatGPT version four series levels. These programs were “wraps”—software layers built on top of a third-party generative AI engine. A wrap adds specialized tools, interfaces, and guardrails while relying on the underlying model for generative AI capabilities. In the legal world, this is similar to how Westlaw and Lexis offer AI assistants powered by ChatGPT under the hood, but wrapped inside their own proprietary databases, interfaces, and safety systems.

The Study used 16 realistic tasks that required multiple coordinated steps, tools, and decisions—what the researchers call long-horizon tasks. They require multiple prompts requiring a series of steps, such as preparing a quarterly finance report, analyzing stock-prediction data, or designing a company landing page. The fully automated Agent tried to do most everything by writing code whereas the humans used multiple tools to do so, including AI and tools that included AI. This was a kind of hybrid or augmented method that did not attempt to closely incorporate the Agents into the work flow.

To observe how work was actually performed, the authors built what they called a workflow-induction toolkit. Think of it as a translation engine: it converts the raw interaction data of computer use (clicks, keystrokes, file navigation, tool usage) into readable, step-by-step workflows. The workflows reveal the underlying process, not just the final product. The 16 tasks are supposed to collectively represent 287 computer-using U.S. occupations and roughly 71.9% of the daily activities within them. For lawyers and others outside of these occupations the relevance comes from the overlap in task structure, not subject matter.

  • The engineering and design tasks don’t map directly to legal work but are useful for observing where agents tend to fail on open-ended or visually dependent steps. 
  • The structured writing tasks are similar to legal drafting (e.g., memos, policies, summaries); although it is imprtant to note that the writing tasks in the Study were not persuasion or adversarial, oriented.
  • The data-analysis tasks parallel evidence evaluation, damages models, timeline building, and spreadsheet-based work that litigators do every day.
  • The administrative/computational tasks resemble the work of preparing exhibits, reconciling data, or generating chronologies.
Infographic contrasting structured tasks and human judgment in AI workflows, showcasing templates, definitions, and cross-references on one side, and tone, narrative, and emotive emphasis on the other.
Agents were fast but made too many mistakes to be useful in anything but very structured tasks. Human judgment rules.

3. Key Findings of the Study.

1. Human-led Hybrid Agent workflows are much more efficient and accurate than AI Agents working alone. When AI is integrated into existing human workflows (the Hybrid approach, aka Augmented approach) there is “minimal disruption and 24.3% efficiency improvement.” Id. As shown in Figure 7 of the Study: “Humans complete work with higher quality (a), while agents possess a huge advantage in efficiency (b). Teaming human and agent workers based on their respective advantages ensures task accuracy and improves efficiency by 68.7%. Id. at pg. 14. This key finding is the essence of the hybrid methodology, which the Study refers to as AI Augmentation as opposed to AI automation. The next section will unpack what that means for law, medical and other high-stakes work

2. Humans outperform AI agents on quality by a wide margin. When Agents’ work by themselves without humans their success rates are 32.5%–49.5% lower that when humans work alone without Agents (but still using software that includes AI). In the fully automated approach, the AI will fail in a number of ways, including fabrication (hallucinating) plausible data and misusing tools (for example, switching to web search instead of opening the files you give them).

3. AI Agents fail primarily because they default to writing code. Agents “write programs to solve essentially all tasks, even when equipped with and trained for UI interactions. . . . This is in stark contrast to the visual-oriented human workflows.” Mellon/Stanford Study, Id. at pg. 2. This programmatic bias drives both their speed and their multiple failures. Agents today are only reliable when the task itself is programmable and code-friendly. Of course, the AI still makes mistakes with code too, so humans still need to step in to verify or debug.

4. When AI agents do not fail, they are extremely efficient. For tasks completed by both groups (both solo Agents and solo human/AI), agents on average took 88.3% less time, 96.4% fewer actions, and at a 90.4%–96.2% lower cost than human workers alone. Still, the 32.5%–49.5% failure rate overall makes the fully automated, aka AI automation solution only appropriate for code writing and even there the AI still makes mistakes that require human intervention, mainly verification and debugging. As the Study explains:

Human workflows are substantially altered by AI automation, but not by AI augmentation (hybrid). One quarter of human activities we studied involve AI tools, with most used for augmentation purposes: integrating AI into existing workflows with minimal disruption, while improving efficiency by 24.3%. In contrast, AI automation markedly reshapes workflows and slows human work by 17.7%, largely due to additional time spent on verification and debugging.

Id. at pgs. 2, 11 figure 5.

An illustration showing a humanoid robot interacting with a man in glasses, highlighting key takeaways from the Stanford-Carnegie Study on hybrid AI performance versus autonomous agents.

4. Study Findings Support a Hybrid Workflow with Man and Machine Working Together

The Carnegie Mellon and Stanford research supports the AI work method I’ve used and advocated sice 2012: hybrid multimodal, where humans and machines work together in multiple modes with strong human oversight. The Study found that minimal quality requirements require close team efforts and make full AI autonomy impractical.

This finding is consistent with my tests over the years on best practices. If you want to dig deeper see e.g. From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice (agentic governance).

Unsupervised, autonomous AI is just too unreliable for meaningful work. The Study also found that it is too sneaky to use without close supervision. It will make up false data that looks good to try to cover its mistakes. Agents simply cannot be trusted. Anyone who wants to do serious workk with Agents will need to keep a close eye on them. This article will provides suggestions on how to do that.

A cartoon illustration of a mischievous robot with a sly grin, set against a dark, textured background.

Click here for YouTube animation of a sneaky robot. Watch your money!

5. Study Consistent with Jagged Frontier research of Harvard and others.

The jagged line of competence cannot be predicted and changes slightly with each new AI release. See the excellent Harvard Business School working paper by Fabrizio Dell’Acqua, Edward McFowland III, Ethan Mollick, et al, Navigating the Jagged Technological Frontier (September, 2023) and my papers, From Centaurs To Cyborgs: Our evolving relationship with generative AI; and Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots;

The unpredictable unevenness of generative Ai and its Agents is why “trust but verify” is not just a popular slogan, it is a safety rule.

An illustrated graphic featuring a stylized mountain range depicting a jagged frontier with sharp peaks and valleys, set against a cloudy sky.
With each new Release users find that AI competence is unpredictable.

6. Surprising Tasks Where Agents Still Struggle

You might expect AI agents to struggle on exotic, creative work. The Study shows something more mundane.

In addition to some simple math and word counts, AI Agents often tripped on:

  • Simple administrative and computer user interface (UI) steps. Navigating files, interpreting folder labels, or following naming conventions that a paralegal would understand at a glance.
  • Repetitive computational tasks that still require interpretation. For example, choosing which column or field to use when the instructions are slightly ambiguous.
  • Open-ended or visually grounded steps. Anywhere the task depends on “seeing” patterns in a chart or layout rather than following a crisp rule.

The pattern is consistent with other research: agents excel when a task can be turned into code, and they wobble along a jagged edge of competency when the task requires context, interpretation, or judgment.

That is why the 68.7% improvement in hybrid workflows is so important. The best results came when the human handled the ambiguous, judgment-heavy step and then let the agent run away with the programmable remainder.

Here is a good take-away memory aid:

An illustration showing a smiling man in a suit next to a humanoid robot. The robot appears to be processing information, symbolizing a hybrid approach to AI and human collaboration. Text on the image emphasizes that agents are fast and programmatic, while humans provide context and accountability.

7. What Agent “Failure” Looks Like

The Mellon/Stanford paper is especially useful because it does not just report scores. It shows how the AI agents went wrong.

When agents failed, the failures usually fell into two categories:

  • Fabrication. When an agent could not parse an image-based receipt or understand a field, it sometimes filled in “reasonable” numbers anyway. In other words, it invented or hallucinated data instead of admitting it was stuck. It is the Mata v. Avianca case all over again, making up case law when it could not find any. See Navigating AI’s Twin Perils: The Rise of the Risk-Mitigation Officer (e-Discovery Team, 7/28/25). That is classic hallucination, but now wrapped inside a workflow that looks productive.
  • Tool misuse. In some trials, agents abandoned the PDFs or files supplied by the user and went to fetch other materials from the web. For lawyers, that is a data-provenance nightmare. You think you are working from the client’s record. The agent quietly swaps in something else, often without any alert to the user. This suggest yet another challenge for AI Risk-Mitigation Officers, which I predict will soon be a hot new field for tech-savvy lawyers.

The authors of the Mellon/Stanford Study explicitly flag these behaviors. As will be discussed, the new version five series of ChatGPT AI and other equivalent models such as Gemini 3, may have lessened these risks, but the problem remains.

For legal practice and other high-stakes matters such as medical, the takeaway is simple: if you do not supervise the workflow and do not control the sources, you will not even know when you left the record, or what is real and what is fake. That may be fine for hairstyles but not for Law.

A humanoid robot with a metallic finish and intricate design stands beside a woman with an edgy hairstyle and makeup in a modern salon setting.
Hairstyle by a hallucinating AI. Is this hair real or fake?

8. Legal Ethics and Professionalism: Competence, Supervision, Confidentiality

Nothing in the Agent Study changes the fundamentals of legal ethics. It sharpens them.

  • Competence now includes understanding how AI works well enough to use and supervise it responsibly. ABA Model Rule 1.1.
  • Supervision means treating agents like junior lawyers or vendors: define their scope, demand logs, and review their work before it touches a client or court. Rule 5.1.
  • Confidentiality means knowing where your data goes, how it is stored, and which models or services can access it. Rule 1.6.

The same logic applies to medical ethics and professional standards in other regulated fields. In all of them, responsibility remains with the human professional.

As I argued in AI Can Improve Great Lawyers—But It Can’t Replace Them, the highest-value legal knowledge is contextual, emergent, and embodied. The same is true of the highest-value medical judgment. It cannot be bottled and automated. Agents are tools, not professionals with standing.

An illustration of a robot opening a glowing box, surrounded by abstract digital elements and stars, symbolizing the discovery of advanced technology.
Now that Agents have emerged and we’ve seen their abilities, we know they are just tools, and fragile ones at that.

9. Do Not Over-Generalize: What the Study does and does not cover

Before we map this into legal workflows, it is important to stay within the boundaries of the evidence.

The 127 Occupational tasks that Stanford and Carnegie researched were all office-style, structured sandboxed environments.

The legal profession should treat the results as directly relevant only to:

  • Structured drafting,
  • Evidence and data analysis,
  • Spreadsheet and dashboard work,
  • Document-heavy desk work that has clear inputs and outputs.

They tasks studied do not directly answer questions about:

  • Final legal conclusions,
  • Persuasive writing to judges or juries,
  • Ethical decisions, strategy, or settlement judgment.

Those legal domains are within what I call the human edge. The Human Edge: How AI Can Assist But Never Replace.

An illustration labeled 'Study Scope' featuring icons of a document, a chart, and a table. Silhouettes of people in the background create a collaborative atmosphere.
Study only covered a few computer tasks performed by legal professionals and did not include any non-computer use tasks.

10. What the Findings Mean for Legal Workflows

The natural question for any lawyer is: So where does this help me, and where does it not? The answer lines up nicely with the task categories in the Study.

A. Structured drafting as legal building blocks

The writing tasks in the paper look a lot like the templated components of much legal writing:

  • Fact sections and chronologies,
  • Procedural histories,
  • Policy and compliance summaries,
  • Standardized client alerts and internal memos.

These are places where agents can:

  • Produce reasonable first drafts quickly,
  • Enforce consistency of structure and style,
  • Help with cross-references, definitions, and internal coherence.

Humans still need to control:

  • Tone, emphasis, and narrative arc,
  • Which facts matter for the client and the forum,
  • How much assertion or restraint is appropriate.

The right pattern is: let the agent assemble and polish the building blocks; you decide which building you are constructing.

I’ve also documented the power of AI-driven expert brainstorming across dozens of experiments over the past two years. For readers who want to explore that thread, I’ve compiled those Panel of Experts studies in one place called Brainstorming.

A robotic figure sitting at a desk with a laptop, displaying a glowing brain above its head, indicating advanced intelligence or insight in a high-tech environment.
AI is great at brainstorming creative solutions.

B. Evidence analytics as data analysis

The data-analysis type of work included in the Study maps cleanly to some litigation and investigation tasks:

  • Damages models and exposure estimates,
  • Budget and variance analyses,
  • Timeline and attendance compilations,
  • De-duplication and reconciliation of overlapping datasets,
  • Citation and reference tables.

Here the speed gains are real. Having an agent pull, group, and calculate from labeled inputs can save hours.

But that 37.5% error rate on calculations is a red flag. Again the multimodal method shows the way. For legal work, the rule of thumb should be:

Agents may calculate.

Humans must verify.

You can treat agent results like you would a junior associate’s complex spreadsheet: extremely useful, never unquestioned.

C. Legal research and persuasion are different animals

It is tempting to read “writing” and “analysis” and think this Study blesses full-blown AI Agent legal research and brief-writing. It does not.

An illustration depicting a lawyer holding a legal document and gavel, while facing a humanoid robot in a maze labelled 'Legal Research Frontier'. The image represents the intersection of technology and legal research.

The tasks in the paper do not measure:

  • Authority-based research quality,
  • Case-law synthesis under jurisdictional constraints,
  • Persuasive legal writing aimed at a specific judge or tribunal.

Those domains depend heavily on:

  • Judgment,
  • Ethics and candor,
  • Audience calibration,
  • Deep understanding of rules and standards.

That is the territory I have called the human edge in earlier writings. AI can assist in jagged line, but it cannot replace the lawyer’s role.

A robot sits at the base of a mountainous landscape, working on a computer, while a human figure stands triumphantly at the summit, holding a staff beside a sign that reads 'HUMAN EDGE' under a sunrise.
Humans have an edge over AI in everything except rational thinking, and knowledge.

11. Hybrid Centaurs, Cyborgs,
and the 68.7% Result

For two and a half years, since I first heard the concepts and language used by Wharton Professor Ethan Mollick (From Centaurs To Cyborgs), I have used the Centaur → Cyborg metaphor and grid as a simple way to write about hybrid AI use:

  • Centaur. Clear division of labor. The human does one task; the AI does a related but distinct task. Strategy and judgment remain fully human. The AI does scoped work such as writing code, outline and first draft generation, summarizing, or checking. Some foolish users of this method and fail to verify the AI (horsey) part.
  • Cyborg. Tighter back-and-forth. Human and AI work in smaller alternating steps. The lawyer starts; the AI refines; the lawyer revises; the AI restructures. Tasks are intertwined rather than separated. Supervision is inherent to the process. The Study suggests this is the best way to perform Agentic tasks.
A futuristic illustration of a humanoid figure with robotic features, standing on a rocky pathway, holding a lantern, and gazing into a starry landscape filled with floating geometric shapes and glowing cracks.
Centaur+Cyborg is good way to navigate the jagged edge and use AI Agents.

The Cyborg type of Hybrid workflow is good for AI Agents because:

  • Augmentation inside human workflows (Centaur-like use) speeds people up by 24.3%.
  • End-to-end full automation slows people down by 17.7% because of the review burden.
  • Step-level teaming, where the human handles the non-programmable judgment steps and the agent handles the rest in a close, intermingled process improves performance by 68.7% with quality intact. That is Hybrid, Cyborg-style work done correctly.
An abstract illustration representing 'Hybrid Practice', featuring a stylized spiral staircase with layered elements depicting human figures, documents, and circuit patterns against a dark background.
Humans an AI working closely together step by step.

12. Best-Practice Argument: Hybrid, Multimodal Use Should Be the Standard of Care—Especially in Law and Medicine

For more than a decade, my position has been consistent: the safest and most effective way to use AI in any high-stakes domain is hybrid and multimodal. That means:

  • Multiple AI capabilities working together (language, code, retrieval, vision),
  • Combined with traditional analytic tools (databases, spreadsheets, review platforms),
  • All orchestrated by humans who remain responsible for judgment, ethics, and outcomes.
A conductor guides a group of humanoid robots, with swirling blue energy above, creating an atmosphere of hybrid collaboration between humans and technology.
Humans conduct an orchestra AI of instruments.

I first developed this view in e-discovery using active machine learning, but it maps cleanly to agentic AI systems and now extends well beyond law. The Carnegie/Stanford Study provides the empirical foundation: hybrid, supervised workflows outperform fully autonomous ones in speed and quality.

The evidence and professional obligations point in the same direction: hybrid, multimodal AI use—under strong human oversight, is not a temporary workaround. It is the durable, long-term standard of care for law, medicine, and any profession where judgment and accountability matter.

AI has no emotions or intuition—only clever wordplay.

Illustration contrasting human intuition represented by a heart and machine computation depicted as a circuit board within a round shape.
Get the dualities to work together and you have Hybrid Augmentation Supremacy.

13. Risk and Governance: A Quick Checklist for Lawyers, Legal Ops, and Other High-Stakes Teams

The Carnegie/Stanford Study gives us concrete failure modes. Risk management should respond to those, not hypotheticals. Here is a short “trust but verify” checklist designed for law but conceptually adaptable to medicine and other high-stakes fields.

A. Provenance or it is not used.

Require page, line, or document IDs for every fact an agent surfaces. If there is no source anchor, the output does not get used. If speculation must be included, you should label it as such. In clinical settings the analogue is clear: no untraceable data, images, or derived metrics.

B. No blind web pivots.

Agents that “helpfully” fetch other files when they cannot parse your materials must be constrained. In law, that means they stay within the client record or approved data repositories. In medicine, the agent must not silently mix in external data that is not part of the patient’s chart.

C. Fabrication drills.

Regularly feed the system bad PDFs or deliberately ambiguous instructions, then watch for made-up numbers or invented content. Document what you catch and fix prompts, policies, and configuration. Health systems can do the same with flawed test inputs and simulated charts.

D. Mark human-only steps.

Identify steps that are inherently non-programmable, such as visual judgments, privilege calls, contextual inferences, settlement strategy, or ethical decisions. In medicine, the parallels are differential diagnosis, treatment choice, risk discussion, and consent. These remain human steps. An AI should never deliver a fatal diagnosis.

An illustration depicting a split brain design: one half showcases structured tasks represented in blue circuitry, while the other half features words like 'Judgement,' 'Advocacy,' and 'Ethics' in glowing orange against a dark backdrop. A humanoid robot and a business professional are interacting with a digital interface at the center.
Combine the unique skills of each kind of intelligence and know when to step from one to another.

E. Math checks are mandatory.

A 37.5% error rate in data-analysis tasks is more than enough to require independent human verification. Use template calculations, cross-checks, and a second set of human eyes any time numbers affect a client or patient outcome.

F. Logging and replay.

Turn on action logs for every delegation: files touched, tools invoked, transformations run. If the platform cannot log, it is not appropriate for high-stakes legal or clinical work.

G. Disclosure and confidentiality.

Disclose AI use when rules, regulations, or reasonable expectations require it. Keep agents confined to narrow, internal repositories when handling client or patient data. Treat them at least as carefully as you would any other third-party system with sensitive information.

H. Bottom line:

Fabrication and tool misuse are not hypothetical. The Study observed and measured them. You should assume they will occur and design your governance accordingly.

A colorful artistic painting depicting a seated elderly man with a mechanical head of circuitry, conversing with a robot in a similar style, seated in an orange armchair against a vivid backdrop.
The tendency of AI to make things up, to hallucinate, is lessening as the models improve, but is still a real threat, so is one of its causes, sycophantism.

14. Counter-Arguments and Rebuttals

You may hear pushback against the hybrid method from some technologists who argue for full automation, after all that’s how Wikipedia defines Agent, as fully autonomous. That has always been the dream of many in the AI community. You will also hear the opposite criticism, frequently from legal colleagues, who resist the use of AI, at least in any meaningful way. The Study frustrates both camps—automation maximalists and AI-averse traditionalists—because its empirical findings support neither worldview as they currently argue it.

A. “AI if just a passing fad.”

The anti-AI argument is also strong and based on powerful fears. Still, the legal profession must not allow itself a Luddite nap. Those of us who use AI safely everyday are working hard to address those concerns. See, for example, the law review article I wrote this year with my friend, Judge Ralph Artigliere (retired), who did most of the heavy lifting: The Future Is Now: Why Trial Lawyers and Judges Should Embrace Generative AI Now and How to Do it Safely and Productively. (American Journal of Trial Advocacy, Vol. 48.2, Spring 2025),

B. “Full autonomy is imminent; hybrids are a temporary crutch.”

Autonomy is improving, but the current evidence contradicts claims of imminent AGI, much less super-intelligence. Instead, it shows:

  • programmatic bias,
  • low success rates, and
  • failure modes that directly implicate ethics, confidentiality, and safety.

That is why the authors of the Carnegie/Stanford paper recommend designs inspired by human workflows and step-level teaming, not unsupervised handoff. In fields like law and medicine, where standards of care and liability apply, hybrid is not a crutch, it is the design pattern.

Soon, the cyborg connection and control tools that humans use to work with AI will be design patterns too. Stylish new types of tattoos and jewelry may become popular as we evolve beyond the decades old smart phone obsession. See e.g. Jony Ive’s sale for $6.5 Billion to Open AI of his famous design company, which designed iPhones for Apple.

A portrait of a woman with short hair, wearing a black cap and glasses. Her skin features glowing blue circuit-like patterns. She is dressed in a black shirt and has a futuristic device around her neck.
Next generation computer links will emerge as we evolve beyond smart phones. Early forms of smart glasses and pendants are already available. I predict electric tattoos and hats will come next.

Plus, there are many things more important than thinking and speech, things that AI can never do. AI is a super-intellectual encyclopedia, but ultimately, heartless. This truth drives many of the fears people have about AI, but is not well founded. See, The Human Edge: How AI Can Assist But Never Replace, and AI Can Improve Great Lawyers—But It Can’t Replace Them.

C. “Hybrid slows teams down.”

The data in the Study shows:

  • augmentation inside human workflows, the hybrid team method, speeds people up by 24.3%;
  • attempted end-to-end automation slows people down by 17.7% because the verification and debugging of AI mistakes reduce the gains.

Hybrid done correctly is faster and safer than human-only practice. Autonomous AI is fast, and often clever, but its tendencies to err and fabricate make it too risky to let loose in the wild.

D. “Quality control can be automated away.”

Not for high-stakes work. The 37.5% data-analysis error rate and the fabrication examples are exactly the kind of failures automation does not see. Quality is judgment in context: applying rules to facts, weighing risk, and making trade-offs with human beings in mind. That is lawyer and medical work. While I agree some quality control work can be automated, especially by applying metrics, not all can be. The universe is too complex, the variables too many. We will always need humans in the loop, although their work to ensure excellence will constantly change.

E. “Agents already beat humans across the board.”

Where both succeed, agents are usually faster and cheaper. That is good news. But their success rates are still 32.5% to 49.5% lower. In law or medicine, a fast wrong answer is not a bargain, it is a liability. It could be a wrongful death. Hybrid workflows let you capture some of the speed and savings while keeping human-level or better quality.

A futuristic scene depicting a human operator interacting with a holographic AI assistant in a high-tech control room, surrounded by digital displays of information and data.
ThenStudy shows you have to keep a qualified human at the helm of Hybrid teams.

15. The New Working Rules
H-Y-B-R-I-D

These rules appys in law, medicine, and any other field that cannot afford unreviewed error. [Side Note: AI came up with this clever mnemonic, not me, but it knows I like this sort of thing.]

H Human in charge. Strategy, conclusions, and sign-off stay human.
Y Yield programmable steps to agents. Let agents handle tasks they can do well.
B Boundaries and bans. Define no-go areas: final legal opinions, privilege calls, etc.
R Review with provenance. If there is no source or traceable input, the output is not used.
I Instrument and iterate. Turn on logs, run regular fabrication drills, and update checklists.
D Disclose and document. Inform and document efforts when AI is used in a significant manner.

The word 'HYBRID' illustrated in a bold, colorful, and stylized font.

16. Does the November 2025 Study Use of Last Month’s Models Already Make it Obsolete?

After the Study was completed new models of AI were released that purport to improve on the accuracy and reduce the hallucinations of AI Agents. These are not empty claims. I am seeing this in my daily hands-on use of the latest AI. Still, I also see that every improvement seems to create new, typically more refined issues.

The advances in AI models do not change the structural lessons:

  • Agents still prefer programmatic paths over messy reality.
  • Step-level teaming still beats blind delegation, especially in risk sensitive occupations.
  • Logging, provenance, and supervision remain non-negotiable wherever high standards of care apply.

Hybrid is not a temporary workaround while we wait for some imagined fully autonomous professional AI. It is the durable operating model for AI in work, especially in legal work, medical, and other fields where judgment and accountability matter. The AI can augment and improve your work.

A man in a suit with digital circuitry patterns on his face and arm speaks in a courtroom setting while holding a tablet, with a humanoid robot behind him and a judge in the background.

Conclusion: Keep Humans in Command And Start Practicing Hybrid Now

The Carnegie/Stanford evidence confirms what those of us working hands-on with AI already know: Agents are astonishingly fast, relentlessly programmatic, and sometimes surprisingly brittle. Humans, on the other hand, bring judgment, spirit, context, and accountability, but not speed. When you combine those strengths intentionally—working in a close back-and-forth rhythm—you get the best of both worlds: speed with quality and real human awareness. That is the advanced cyborg style of hybrid practice.

And no, it is not the fully autonomous Agent that nerds and sci-fi optimists like me once dreamed about. But it is the world that researchers observed when they opened the box. Thank you, Stanford and Carnegie Mellon, for collapsing yet another Schrödinger cat.

An illustration depicting a futuristic robot on the left, looking confident, alongside a smaller, sad robot on the right, facing a computer screen with code and a question mark, symbolizing the challenges of AI in understanding complex tasks.
Observations burst another SciFi fantasy bubble about AI Agents.

Hybrid multimodal practice is not a temporary bridge. It is what agency actually looks like today. It is the durable operating model for law, medicine, engineering, finance, and every other field where errors matter and consequences are real. The Study shows that when humans handle the contextual, ambiguous, and judgment-heavy steps—and agents handle the programmable remainder—overall performance improves by 68.7% with quality intact. That is not a footnote. That is a strategy.

So the message for lawyers, clinicians, and every high-stakes professional is straightforward:

Use the machine. Supervise the machine. Do not become the machine.

Two individuals smiling at the camera, wearing futuristic attire and caps, with intricate geometric tattoos adorning their necks, set against a high-tech background.
These future humans are in control of their fashionable new AI devices. You don’t want to know what is under their hats!

Here is your short action plan—the first steps toward responsible AI practice:

  • Adopt the H-Y-B-R-I-D system across your team. It operationalizes the Study’s lessons and bakes verification into daily habits.
  • Instrument your agents. If a tool cannot log its actions, replay its steps, or anchor its facts, it does not belong in high-stakes work.
  • Shift to cyborg-style hybrid teaming, where humans handle judgment calls and agents handle the programmable portions of drafting, evidence analysis, spreadsheet work, and data tasks.
  • Train everyone on trust-but-verify behaviors, not as a slogan but as the muscle memory of modern practice.
A businessman in a suit holds a shield labeled 'VERIFY' to protect himself from two robotic figures that appear menacing, with glowing red eyes and error messages floating around them in a dark, dramatic setting.

Those who embrace hybrid intelligently will see their output improve, their risk decline, and their judgment sharpen. Those who avoid it—or try to leap straight to full autonomy—will struggle.

The future of professional practice is not human versus machine.

It is human judgment amplified by machine speed, with the human still holding the pen, signing the orders, and deciding what matters.

And that is exactly what the Study revealed when it opened the box on modern AI: not flawless autonomy, but the measurable advantage of humans and agents working together, each taking the steps they handle best.

Hybrid is here. Hybrid works. Now it’s time to practice it.

A diverse group of professionals stands confidently in a modern office environment, with two humanoid robots in the background. They are dressed in business attire and display a mix of expressions, indicating collaboration between humans and AI.

Echoes of AI Podcast

Click here to listen to two AIs talk about this article in a lively podcast format. Written by Google’s NotebookLM (not Losey). Losey conceived, produced, directed and verify this 14-minute podcast. By the way, Losey found the AIs made a couple of small errors, but not enough to require a redo. See if you can spot the one glaring, but small, mistake. Hint: had to do with the talk about wraps.

Illustration of two anonymous AI podcasters discussing the findings of the Stanford-Carnegie study on hybrid AI teams, featuring titles and graphics related to AI performance.
Click to start podcast.

Ralph Losey Copyright 2025 — All Rights Reserved


e-Discovery Team

LAW and TECHNOLOGY - Ralph Losey © 2006-2025

Skip to content ↓