2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law


Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI and remain in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal training and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can be enhanced by total entanglement with AI and not lose control or separate identity. Click here or the image to see video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts the human legal minds in control and stops the hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? The AI abilities are improving quickly but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While these quantum dangers gestated, the AI world suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.

IV. Bench Testing the AI Models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-style testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o. March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The series concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may repeat this testing in 2026. For legal reasoning, the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer’—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of using advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
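The six H-Y-B-R-I-D steps lend themselves to a simple review checklist. The sketch below is my own illustrative shorthand in Python, not part of any published standard; the field names and compliance logic are assumptions made for demonstration only.

```python
from dataclasses import dataclass, fields

@dataclass
class HybridReview:
    """Illustrative checklist for the six H-Y-B-R-I-D protocol steps.
    Field names are shorthand for demonstration, not a formal standard."""
    human_in_charge: bool = False        # H: a named lawyer owns the output
    yield_programmable_steps: bool = False  # Y: routine steps delegated to AI
    boundaries_on_usage: bool = False    # B: scope of AI use defined in advance
    review_with_provenance: bool = False # R: every claim traced to a source
    instrument_and_log: bool = False     # I: prompts and outputs are logged
    disclose_usage: bool = False         # D: AI use disclosed where required

    def missing_steps(self) -> list[str]:
        """Return the names of any protocol steps not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

    def is_compliant(self) -> bool:
        """True only when every step of the protocol is checked off."""
        return not self.missing_steps()
```

A matter that logs everything but forgets disclosure, for example, would fail `is_compliant()` and list `disclose_usage` among its missing steps, which is the point of treating the protocol as a standard of care rather than a slogan.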

My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations


Ralph Losey, December 17, 2025

I. Introduction: The Untested Expert in Your Office

AI walks into your office like a consulting expert who works fast, inexpensively, and speaks with knowing confidence. And, like any untested expert, it is capable of being spectacularly wrong. Still, try AI out; just be sure to cross-examine it before using the work product. This article will show you how.

A friendly-looking robot with a white exterior and glowing blue eyes, set against a wooden background. The robot has a broad smile and a tagline that reads, 'AI is only too happy to please.'
Want AI to do legal research? Find a great case on point? Beware: any ‘Uncrossed AI’ might happily make one up for you. [All images in this article by Ralph Losey using AI tools.]

Lawyers are discovering AI hallucinations the hard way. Courts are sanctioning attorneys who accept AI’s answers at face value and paste them into briefs without a single skeptical question. In the best-known early example, Mata v. Avianca, Inc., a lawyer submitted a brief filled with invented cases that looked plausible but did not exist. The judge did not blame the machine. The judge blamed the lawyer. In Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024), the Second Circuit again confronted AI-generated citations that dissolved under scrutiny. Case dismissed. French legal scholar Damien Charlotin has catalogued almost seven hundred similar decisions worldwide in his AI Hallucination Cases project. The pattern is the same: the lawyer treated AI’s private, untested opinion as if it were ready for court. It wasn’t. It never is.

A holographic figure resembling a consultant sits at a table with two lawyers, one male and one female, who appear to be observing the figure's briefcase labeled 'ANSWERS.' Books are placed on the table.
Never accept research or opinions before you skeptically cross-examine the AI.

The solution is not fear or avoidance. It is preparation. Think of AI the way you think of an expert you are preparing to testify. You probe their reasoning. You make sure they are not simply trying to agree with you. You examine their assumptions. You confirm that every conclusion has a basis you can defend. When you apply that same discipline to AI — simple, structured, lawyerly questioning — the hallucinations fall away and the real value emerges.

This article is not about trials. It is about applying cross-examination instincts in the office to control a powerful, fast-talking, low-budget consulting expert who lives in your laptop.

Click here to see video on YouTube of Losey’s encounters with unprepared AIs.

II. AI as Consulting Expert and Testifying Expert: A Hybrid Metaphor That Works

Experienced litigators understand the difference between a consulting expert and a testifying expert. A consulting expert works in private. You explore theories. You stress-test ideas. The expert can make mistakes, change positions, or tell you that your theory is weak. None of it harms the case because none of it leaves the room. It is not discoverable.

Once you convert that same person into a testifying expert, everything changes. Their methodology must be clear. Their assumptions must be sound. Their sources must be disclosed. Their opinions must withstand cross-examination. Their credibility must be earned. Discovery of testifying experts is open, subject to minor restraints.

AI should always start as a secret consulting expert. It answers privately, often brilliantly, sometimes sloppily, and occasionally with complete fabrications. But the moment you rely on its words in a brief, a declaration, a demand letter, a discovery response, or a client advisory, you have promoted that consulting expert to a testifying one. Judges and opposing counsel will evaluate its work that way — even if you didn’t.

This hybrid metaphor — part expert preparation, part cross-examination — is the most accurate way to understand AI in legal practice. It gives you a familiar, legally sound framework for interrogating AI before staking your reputation on its output.

A lawyer seated at a desk reading documents, with a holographic figure representing AI or an expert consultant displayed next to him.
Working with AI and carefully examining its early drafts.

III. Why Lawyers Fear AI Today: The Hallucination Problem Is Real, but Preventable

AI hallucinations sound exotic, but they are neither mysterious nor unpredictable. They arise from familiar causes.

Anyone who has ever supervised an over-confident junior associate will recognize these response patterns. Ask vague questions and reward polished answers, and you will get polished answers whether they are correct or not.

The problem is not that AI hallucinates. The problem is that lawyers forget to interrogate the hallucination before adopting it.

Never rely on an AI that has not been cross examined.

Both lawyer and judicial frustration are mounting. Charlotin’s global hallucination database reads like a catalogue of avoidable errors. Lawyers cite nonexistent cases, rely on invented quotations, or submit timelines that collapse the moment a judge asks a basic question. Courts have stopped treating these problems as innocent misunderstandings about new technology. Increasingly, they see them as failures of competence and diligence.

The encouraging news is that hallucinations collapse under even moderate questioning. AI improvises confidently in silence. It becomes accurate under pressure.

That pressure is supplied by cross-examination.

A female business professional discussing strategies with a humanoid robot in a modern office setting, displaying the text 'PREPARE INTERROGATE VERIFY' on a screen in the background.
Team approach to AI prep works well, including other AIs.

IV. Five Cross-Examination Techniques for AI

The techniques below are adapted from how lawyers question both their own experts and adverse ones. They require no technical training. They rely entirely on skills lawyers already use: asking clear questions, demanding reasoning, exposing assumptions, and verifying claims.

The five techniques are:

  1. Ask for the basis of the opinion.
  2. Probe uncertainty and limits.
  3. Present the opposing argument.
  4. Test internal consistency.
  5. Build a verification pathway.

Each can be implemented through simple, repeatable prompts.
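Because the five techniques are repeatable, they can be captured as reusable prompt templates. The sketch below is a minimal illustration in Python; the template wording and the function name `cross_exam_prompts` are my own assumptions, meant to be adapted to your matter and your model, not a definitive implementation.

```python
# Illustrative prompt templates for the five cross-examination techniques.
# The wording is a sketch only; adapt it to your jurisdiction and matter.
CROSS_EXAM_TEMPLATES = {
    "basis": (
        "Walk me through your reasoning step by step. List each assumption, "
        "cite an authority for each step, and rate your confidence in it."
    ),
    "uncertainty": (
        "What do you not know that might affect this conclusion? "
        "Which part of your reasoning is weakest?"
    ),
    "opposition": (
        "Give me the strongest argument against your conclusion, "
        "as opposing counsel would frame it."
    ),
    "consistency": (
        "Restate your answer using a different structure and flag any "
        "inconsistencies with your earlier analysis."
    ),
    "verification": (
        "For every case you cited, give the full citation, the issuing "
        "court, and the exact quotation you relied on, so I can check it."
    ),
}

def cross_exam_prompts(question: str) -> list[str]:
    """Pair the lawyer's question with each follow-up technique, in order."""
    order = ["basis", "uncertainty", "opposition", "consistency", "verification"]
    return [f"{question}\n\nFollow-up: {CROSS_EXAM_TEMPLATES[key]}" for key in order]
```

Running each returned prompt against the model in sequence, and reading the answers skeptically, is the office equivalent of a five-question cross.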

A woman in a business suit stands confidently in a courtroom-like setting, pointing with one finger while holding a tablet. Next to her is a humanoid robot. A large sign in the background displays the words 'BASIS', 'UNCERTAINTY', 'OPPOSING', 'CONSISTENCY', and 'VERIFY'. Sky-high view of city buildings is visible through the window.
Click to see YouTube video of this associate’s presentation to partners of the AI cross-exam.

1. Ask for the Basis of the Opinion

AI developers use the word “mechanism.” Lawyers use reasoning, methodology, procedure, or logic. Whatever the label, you need to know how the model reached its conclusion.

Instead of asking, “What’s the law on negligent misrepresentation in Florida?” ask:

“Walk me through your reasoning step by step. List the elements, the leading cases, and the authorities you are relying on. For each step, explain why the case applies.”

This produces a reasoning ladder rather than a polished paragraph. You can inspect the rungs and see where the structure holds or collapses.

Ask AI explicitly to:

  • identify each reasoning step
  • list assumptions about facts or law
  • cite authorities for each step
  • rate confidence in each part of the analysis

If the reasoning chain buckles, the hallucination reveals itself.

A lawyer in a suit examining a transparent, futuristic humanoid robot's head with a flashlight in a library setting.
Click here for short YouTube video animation about reasoning cross.

2. Probe Uncertainty and Limits

AI tries to be helpful and agreeable. It will give you certainty, even though it is fake. The original AI training data from the Internet never said, “I don’t know the answer.” So now you have to train your AI, in prompts and project instructions, to admit it does not know. You must demand honesty. You must demand truth over agreement with your own thoughts and desires. Repeatedly instruct the AI to admit when it does not know the answer, or is uncertain. Get it to explain what it does not know and what it cannot support with citations. Get it to reveal the unknowns.

A friendly robot with a smile sitting at a desk with a computer keyboard, in front of two screens displaying error messages '404 ANSWER NOT FOUND' and 'ANSWERS NOT FOUND.' The robot appears to be ready to improvise.
Most AIs do not like to admit they don’t know. Do you?

Ask your AI:

  • “What do you not know that might affect this conclusion?”
  • “What facts would change your analysis?”
  • “Which part of your reasoning is weakest?”
  • “Which assumptions are unstated or speculative?”

Good human experts do this instinctively. They mark the edges of their expertise. AI will also do it, but only when asked.

A man in a suit stands in a courtroom, holding a tablet and speaking confidently, with a holographic display of connected data points in the background.
Click here for YouTube animation of AI cross of its unknowns.

3. Present the Opposing Argument

If you only ask, “Why am I right?” AI will gladly tell you why you are right. Sycophancy is one of its worst habits.

Counteract that by assigning it the opposing role:

  • “Give me the strongest argument against your conclusion.”
  • “How would opposing counsel attack this reasoning?”
  • “What weaknesses in my theory would they highlight?”

This is the same preparation you would do with a human expert before deposition: expose vulnerabilities privately so they do not explode publicly.

A lawyer in a formal suit stands in a courtroom, examining a holographic chessboard with blue and orange outlines representing opposing arguments.
Quality control by counter-arguments. Click here for short YouTube animation.

4. Test Internal Consistency

Hallucinations are brittle. Real reasoning is sturdy.

You expose the difference by asking the model to repeat or restructure its own analysis.

  • “Restate your answer using a different structure.”
  • “Summarize your prior answer in three bullet points and identify inconsistencies.”
  • “Explain your earlier analysis focusing only on law; now do the same focusing only on facts.”

If the second answer contradicts the first, you know the foundation is weak.

This is impeachment in the office, not in the courtroom.
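One symptom of a brittle answer can even be checked mechanically: compare which authorities each restatement cites. This is only a toy sketch, assuming case names follow a simple "X v. Y" pattern with one-word party names; real contradictions still require human reading:

```python
import re

# Extract "X v. Y" style case names from two restatements and flag any
# authority cited in one version but missing from the other. A crude
# brittleness detector, not a substitute for reading both answers.
CASE_NAME_RE = re.compile(r"\b[A-Z][\w.&'-]* v\. [A-Z][\w.&'-]*")

def cited_cases(answer: str) -> set[str]:
    """Return the set of 'X v. Y' case names found in an answer."""
    return set(CASE_NAME_RE.findall(answer))

def authority_drift(first: str, second: str) -> set[str]:
    """Authorities cited in one answer but missing from the other."""
    return cited_cases(first) ^ cited_cases(second)
```

If `authority_drift` comes back non-empty, the two "identical" analyses rest on different foundations, and that is exactly where to press with follow-up questions.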

A digitally created robot face divided in half, with one side featuring cool metallic tones and glowing blue elements, and the other side displaying warmer hues with a glowing red effect.
Click here for YouTube animation on contradictions.

5. Build a Verification Pathway

Hallucinations survive only when no one checks the sources.

Verification destroys them.

Always:

  • read every case AI cites and confirm that the cited court actually issued the opinion (of course, also check the case history to verify it is still good law)
  • confirm that the quotations actually appear in the opinion (sometimes small errors creep in)
  • check jurisdiction, posture, and relevance (normal lawyer or paralegal analysis)
  • verify every critical factual claim and legal conclusion

This is not “extra work” created by AI. It is the same work lawyers owe courts and clients. The difference is simply that AI can produce polished nonsense faster than a junior associate. Overall, after you learn the AI testing skills, the time and money saved will be significant. This associate practically works for free with no breaks for sleep, much less food or coffee.

Your job is to slow it down. Turn it off while you check its work.
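To make sure no citation skips that manual review, you can first sweep a draft for every reporter-style citation and put each one on a checklist. A rough Python sketch follows; the regex is illustrative, covers only a few common reporters, and flags candidates for human verification rather than verifying anything itself:

```python
import re

# Pull reporter-style citations (e.g. "496 U.S. 384", "930 F.3d 307",
# "2023 WL 7490841") out of a draft so each lands on a manual
# verification checklist. Illustrative only: the pattern will miss or
# over-match some citation formats.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                                             # volume
    r"(?:U\.S\.|S\. Ct\.|F\.[234]d|F\. Supp\.(?: [23]d)?|WL)"   # reporter
    r"\s+\d{1,7}\b"                                             # page/number
)

def citation_checklist(draft: str) -> list[str]:
    """Return the unique citations found, in order of first appearance."""
    seen: list[str] = []
    for cite in CITATION_RE.findall(draft):
        if cite not in seen:
            seen.append(cite)
    return seen
```

Run the checklist before filing; every entry must be read in a trusted database, because a citation that parses cleanly can still be a complete fabrication.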

An older man in a suit sits at a table, writing notes on a document, while a humanoid robot with blue eyes sits beside him in a professional setting.
Always carefully check the work of your AIs.

V. How Cross-Examination Dramatically Reduces Hallucinations

Cross-examination is not merely a metaphor here. It is the mechanism — in the lawyer’s meaning of the word — that exposes fabrication and reveals truth.

Consider three realistic hypotheticals.

1. E-Discovery Misfire

AI says a custodian likely has “no relevant emails” based on role assumptions.

You ask: “List the assumptions you relied on.”

It admits it is basing its view on a generic corporate structure.

You know this company uses engineers in customer-facing negotiations.

Hallucination avoided.

2. Employment Retaliation Timeline

AI produces a clean timeline that looks authoritative.

You ask: “Which dates are certain and which were inferred?”

AI discloses that it guessed the order of two meetings because the record was ambiguous.

You go back to the documents.

Hallucination avoided.

3. Contract Interpretation

AI asserts that Paragraph 14 controls termination rights.

You ask: “Show me the exact language you relied on and identify any amendments that affect it.”

It re-reads the contract and reverses itself.

Hallucination avoided.

The common thread: pressure reveals quality.

Without pressure, hallucinations pass for analysis.

A businessman in a suit points at a digital display showing a timeline with events and an inconsistency highlighted in red, seated next to a humanoid robot on a table with a laptop.
Work closely with your AI to improve and verify its output.

VI. Why Litigators Have a Natural Advantage — And How Everyone Else Can Learn

Litigators instinctively challenge statements. They distrust unearned confidence. They ask what assumptions lie beneath a conclusion. They know how experts wilt when they cannot defend their methodology.

But adversarial reasoning is not limited to courtrooms. Transactional lawyers use it in negotiations. In-house lawyers use it in risk assessments. Judges use it in weighing credibility. Paralegals and case managers use it in preparing witnesses and assembling factual narratives.

Anyone in the legal profession can practice:

  • asking short, precise questions
  • demanding reasoning, not just conclusions
  • exploring alternative explanations
  • surfacing uncertainty
  • checking for consistency

Cross-examining AI is not a trial skill. It is a thinking skill — one shared across the profession.

A business meeting in an office featuring a woman in a suit presenting to a robot resembling Iron Man, while a man in a suit sits at a laptop, with a display showing academic citations and data in the background.
Thinking like a lawyer is a prerequisite for AI training; be skeptical and objective.

VII. The Lawyer’s Advantage Over AI

AI is inexpensive, fast, tireless, and deeply cross-disciplinary. It can outline arguments, summarize thousands of pages, and identify patterns across cases at a speed humans cannot match. It never complains about deadlines and never asks for a retainer.

Human experts outperform AI when judgment, nuance, emotional intelligence, or domain mastery are decisive. But those experts are not available for every issue in every matter.

AI provides breadth. Lawyers provide judgment.

AI provides speed. Lawyers provide skepticism.

AI provides possibilities. Lawyers decide what is real.

Properly interrogated, AI becomes a force multiplier for the profession.

Uninterrogated, it becomes a liability.

A professional meeting room with a diverse group of lawyers and a robot figure. The human leader gestures confidently while presenting. A screen behind them displays phrases like 'Challenge assumptions,' 'Expose weak logic,' and 'Ask better questions.'
Good lawyers challenge and refine their AI output.

VIII. Courts Expect Verification — And They Are Right

Judges are not asking lawyers to become engineers or to audit model weights. They are asking lawyers to verify their work.

In hallucination sanction cases, courts ask basic questions:

  • Did you read the cases before citing them?
  • Did you confirm that the case exists in any reporter?
  • Did you verify the quotations?
  • Did you investigate after concerns were raised?

When the answer is no, blame falls on the lawyer, not on the software.

Verification is the heart of legal practice.

It just takes a few minutes to spot and correct the hallucinated cases. The AI needs your help.

IX. Practical Protocol: How to Cross-Examine Your AI Before You Rely on It

A reliable process helps prevent mistakes. Here is a simple, repeatable, three-phase protocol.

Phase 1: Prepare

  1. Clarify the task.

Ask narrow, jurisdiction-specific, time-anchored questions.

  2. Provide context.

Give procedural posture, factual background, and applicable law.

  3. Request reasoning and sources up front.

Tell AI you will be reviewing the foundation.

Phase 2: Interrogate

  1. Ask for step-by-step reasoning.
  2. Probe what the model does not know.
  3. Have it argue the opposite side.
  4. Ask for the analysis again, in a different structure.

This phase mimics preparing your own expert — in private.

Phase 3: Verify

  1. Check every case in a trusted database.
  2. Confirm factual claims against your own record.
  3. Decide consciously which parts to adopt, revise, or discard.

Do all this, and if a judge or client later asks, “What did you do to verify this?”, you will have a real answer.
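The three phases can even be kept as a simple checklist with a completion log, so your verification work is documented per matter. A minimal sketch; the data structure and names are my own illustration, not any standard practice-management tool:

```python
from dataclasses import dataclass, field

# The three-phase protocol as data, with a per-matter completion log.
PROTOCOL = {
    "Prepare": [
        "Clarify the task (narrow, jurisdiction-specific, time-anchored)",
        "Provide context (posture, facts, applicable law)",
        "Request reasoning and sources up front",
    ],
    "Interrogate": [
        "Ask for step-by-step reasoning",
        "Probe what the model does not know",
        "Have it argue the opposite side",
        "Ask for the analysis again, in a different structure",
    ],
    "Verify": [
        "Check every case in a trusted database",
        "Confirm factual claims against your own record",
        "Decide which parts to adopt, revise, or discard",
    ],
}

@dataclass
class ProtocolLog:
    """Records which protocol steps have been completed for a matter."""
    done: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        self.done.add(step)

    def remaining(self) -> list[str]:
        """Steps from every phase not yet marked complete."""
        return [s for steps in PROTOCOL.values()
                for s in steps if s not in self.done]
```

An empty `remaining()` list is the record you want before relying on the output.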

Business meeting involving a lawyer presenting to a man and a humanoid robot, with a digital presentation on a screen that includes flowchart-style prompts.
It takes some training and experience, but keeping your AI under control is really not that hard.

X. The Positive Side: AI Becomes Powerful After Cross-Examination

Once you adopt this posture, AI becomes far less dangerous and far more valuable.

When you know you can expose hallucinations with a few well-crafted questions, you stop fearing the tool. You start seeing it as an idea generator, a drafting assistant, a logic checker, and even a sparring partner. It shows you the shape of opposing arguments. It reveals where your theory is vulnerable. It highlights ambiguities you had overlooked.

Cross-examination does not weaken AI.

It strengthens the partnership between human lawyer and machine.

A lawyer and a humanoid robot stand together in a courtroom, representing a blend of human expertise and artificial intelligence in legal practice.
Click here for video animation on YouTube.

XI. Conclusion: The Return of the Lawyer

Cross-examining your AI is not a theatrical performance. It is the methodical preparation that seasoned litigators use whenever they evaluate expert opinions. When you ask AI for its basis, test alternative explanations, probe uncertainty, check consistency, and verify its claims, you transform raw guesses into analysis that can withstand scrutiny.

Two professionals interacting with a futuristic robot in an office setting, analyzing a digital display that highlights the concept of 'Inference Gap Needs Judgment' amidst various data points and inferences.
Complex assignments always take more time but the improved quality AI can bring is well worth it.

Courts are no longer forgiving lawyers who fall for a sycophantic AI and skip this step. But they respect lawyers who demonstrate skeptical, adversarial reasoning — the kind that prevents hallucinations, avoids sanctions, and earns judicial confidence. More importantly, this discipline unlocks AI’s real advantages: speed, breadth, creativity, and cross-disciplinary insight.

The cure for hallucinations is not technical.

It is skeptical, adversarial reasoning.

Cross-examine first. Rely second.

That is how AI becomes a trustworthy partner in modern practice.

See the animation of our goodbye summary on the YouTube video. Click here.

Ralph Losey Copyright 2025 — All Rights Reserved


Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother

April 1, 2025

by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.

George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.

Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.

Memory Holes

In Orwell’s 1984, a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.

Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Press three keys at once, ctrl-alt-delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group, or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.”

China’s Attempted Erasure of Tiananmen Square

In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Young generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.

Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.

This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.

Hope Through Digital Redundancy and Psychological Resilience

Orwell could not conceive of our digital world, where truth is abundant, freely copied, and stored globally. Thousands or millions of digital copies safeguard history, making complete erasure nearly impossible.

According to Katharine Trendacosta, the Director of Policy and Advocacy of the well-respected Electronic Frontier Foundation:

If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.  

Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves. 

Trendacosta, The Internet Never Forgets: Fighting the Memory Hole (EFF, 1/30/25).

Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:

  1. Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
  2. Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
  3. Independent Journalism: Support credible journalism to provide trustworthy reference points.
  4. Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
  5. Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
  6. International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
  7. Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.

The Digital Memory Holes Today

Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber’s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)

Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together. For instance, the surprise attack of Pearl Harbor, the horrors of fighting the Nazis and others in WWII and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.

Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.

Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of the Atlantic and others. Although some recent small attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government in place for centuries, and it has always had a two-party system. Even the Chinese government, where only one party, the communist party, has ever been allowed, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.

The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.

Leveraging Internet Archives: The Wayback Machine

Internet archival services, notably the Internet Archive’s Wayback Machine, are powerful allies against digital historical revisionism. The Wayback Machine currently has over 916 billion web pages stored, including government websites. For good background on the Internet Archive’s work to preserve history, see this recent article: As the Trump administration purges web pages, this group is rushing to save them (NPR, 3/23/25).

According to the NPR article, the Internet Archive has copies of all of the government websites that were later taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and has since been taken down from their website. No doubt there are now many, many copies of it online, especially in the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.

This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” (Internet Archive, 3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See, e.g., the e-Discovery Team Site Map for the year 2024.

I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.

This is the Internet Archive explanation of the interactive map:

This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.

It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted, and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.
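For readers who want the raw capture data behind these site maps rather than the graphics, the Internet Archive also exposes a public CDX search API. A small Python sketch that only builds the query URL; the endpoint and parameter names reflect the CDX server’s documented interface, but verify them against the current documentation before relying on this:

```python
from urllib.parse import urlencode

# Build a Wayback Machine CDX API query for a site's capture history
# within one calendar year. Fetching and parsing the JSON response is
# left to the reader.
CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"

def cdx_query_url(site: str, year: int, limit: int = 50) -> str:
    """Return a CDX query URL for captures of `site` in a single year."""
    params = {
        "url": site,
        "from": f"{year}0101",   # CDX timestamps use yyyymmdd prefixes
        "to": f"{year}1231",
        "output": "json",
        "limit": limit,
    }
    return f"{CDX_ENDPOINT}?{urlencode(params)}"

print(cdx_query_url("edrm.net", 2024))
```

Anyone can paste the resulting URL into a browser to see the archive’s record of a site for that year, which is the programmatic counterpart of the visual site maps discussed above.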

It remains to be seen what role the LLMs’ vacuuming up of internet data will play in all this. They have been trained at specific times on Internet data, and presumably all of the original training data is still preserved. Along those lines, note that the below image was created by ChatGPT-4o in response to a request to show a misinformation image; it generated the classic Tiananmen Square image on the right. It knows the truth.

Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus, its intelligence and resilience improve every week.

Conclusion

Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.

Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.

The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.


I give the last word, as usual, to the Gemini twin podcasters that summarize the article. Echoes of AI on: “Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.” Hear two Gemini AIs talk about all of this for 12 minutes. They wrote the podcast, not me. 

Ralph Losey Copyright 2025. All Rights Reserved.


Another AI Hallucination Case with Sanctions Threatened Because of ‘All-Too-Human’ Mistakes

July 30, 2024

Ralph Losey. Published July 30, 2024.

No attorney ever wants to read something like this in a Court Order responding to their court filing:

Iovino’s objections are incorrect on the merits and appear to cite fictitious cases and made-up quotations. For the reasons discussed below, the court will overrule Iovino’s objections, affirm the entry of the protective order, and require Iovino’s attorneys to show cause why they should not be sanctioned under Federal Rule of Civil Procedure 11(c).

Iovino v. Michael Stapleton Assocs, LTD., No. 5:2021cv00064 (Doc. 177), 2024 U.S. Dist. LEXIS 131809 (W.D. Va., July 24, 2024) (U.S. District Court Judge Thomas T. Cullen).

Background on Karen Iovino v. MSA Security

Iovino v. MSA Security is a pending federal whistleblower case with discovery that has run amok for reasons unknown. The plaintiff, Dr. Karen Iovino, is a veterinarian who was employed by MSA Security. She alleges her employment was terminated because she reported mistreatment of the dogs MSA trained in explosive detection for the U.S. State Department, mismanagement, and other abuses. The State Department gives these canines to over 150 foreign governments to support the “War on Terrorism.” Dr. Iovino alleges that some of these foreign governments are known to mistreat the dogs. Federal Whistleblower Raising Questions About K-9s Deployed Overseas (NBC4 Wash, 11/26/19).

Fake Cases and Allegations of ‘ChatGPT Run Amok’

After two years of discovery disputes in this dog-eared case (sorry), the Magistrate Judge and the District Court Judge were not surprised by yet another dispute regarding the plaintiff’s request to take six depositions. But they were surprised by Plaintiff’s counsel’s legal memorandum. Here is Judge Thomas T. Cullen’s order (bold emphasis added):

For reasons that continue to confound the court, the parties have turned a straightforward case into a protracted discovery battle. Their dispute du jour centers on whether Iovino must comply with the United States Department of State’s (“State Department”) Touhy regulations1 to depose six current or former MSA employees about information related to MSA’s contract with the agency. MSA is a federal contractor that has an agreement with the State Department to train explosive detection canines.

. . . Following an extended briefing period and hearing, Judge Hoppe issued a Memorandum Opinion and Order granting MSA’s motion and entering the requested protective order. . . .

Iovino noted timely objections to Judge Hoppe’s decision on June 7, 2024, arguing that the protective order should be vacated because his determination that the State Department’s Touhy regulations apply to her deposition requests is contrary to law. (Pl.’s Objs. [ECF No.174].) Shockingly, her objections rely, in part, on citations to sources and quotations that appear not to exist. MSA highlighted those mysterious citations in its brief opposing Iovino’s objections. (Def.’s Opp’n Br. [ECF No. 175].) Iovino did not file a reply, leaving unrebutted the allegations of fabricated citations, and her objections are ripe for decision. . . .

Id. at pgs. 2-4

Judge Thomas T. Cullen concludes the opinion by discussing the sanctions issues triggered by Plaintiff’s counsel’s alleged fabricated cases and fake quotations attributed to real cases (bold emphasis added):

B. Show Cause
Federal Rule of Civil Procedure 11(c) allows district courts to sanction parties that make court filings for an improper purpose or with frivolous arguments, as well as for other reasons. This, of course, includes when attorneys act in bad faith and engage in deliberate misconduct in an attempt to deceive the court. Parker v. N.C. Agr. Fin. Auth., 341 B.R. 547, 554 (E.D. Va. 2006), aff’d sub nom. Iles v. N.C. Agr. Fin. Auth., 249 F. App’x 304 (4th Cir. 2007). And it includes when attorneys do not take the “necessary care in their preparation” of court filings because such filings are an abuse of the judicial system, “burdening courts and individuals alike with needless expense and delay.” Cooter & Gell v. Hartmarx Corp., 496 U.S. 384, 398 (1990). Indeed, a key purpose of Rule 11 is to incentivize attorneys “to stop, think[,] and investigate more carefully before serving and filing papers.” Id. (cleaned up). If counsel relies on artificial intelligence or other technology to draft a filing, the attorney is still responsible for ensuring the filing is accurate and does not contain fabricated caselaw or quotations. See, e.g., Mescall v. Renaissance at Antiquity, No. 3:23-cv-00332, 2023 WL 7490841, at *1 n.1 (W.D.N.C. Nov. 13, 2023) (citing Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448–49 (S.D.N.Y. 2023)).

Here, Iovino’s brief objecting to Judge Hoppe’s ruling cites multiple cases and quotations that the court, and MSA, could not find when independently reviewing Iovino’s sources. Specifically, Iovino cites the following two cases that do not appear to exist: United Therapeutics Corp. v. Watson Labs, Inc., No. 3:17-cv-00081, 2017 WL 2483620, at *1 (E.D. Va. June 7, 2017), and United States v. Mosby, 2021 WL 2827893, at *4 (D. Md. July 7, 2021). (See Pl.’s Objs. at 6, 19–20.) She also cites a Supreme Court opinion and Fourth Circuit opinion that exist, but attributes quotations to those decisions that do not appear in them. (See id. at 17 (representing that Graves v. Lioi, 930 F.3d 307 (4th Cir. 2019), includes the phrase “decided by necessary implication”), 19 (representing that Bostock v. Clayton Cnty., 590 U.S. 644 (2020), includes the phrase “make a mockery of the law”).) And she indicates that Menocal, a decision she puts great weight on in her objections, is a reported opinion at 113 F. Supp. 3d 1125, even though the reporter reference denotes a much earlier decision in the same litigation that had nothing to do with Touhy regulations. (See Pl.’s Objs. at 3, 11–13.)

MSA flagged each of these discrepancies in its opposition brief and posited that they were the result of “ChatGPT run amok.” (Def.’s Opp’n Br. at 2, 5 n.1, 11, 13.) Even though Iovino provided the court supplemental authority to support her objections after MSA raised this issue, she puzzlingly has not replied to explain where her seemingly manufactured citations and quotations came from and who is primarily to blame for this gross error. This silence is deafening.

Accordingly, to uphold the integrity of these proceedings and understand where the purportedly false references originated, the court will order Iovino’s counsel to show cause why they should not be sanctioned and/or referred to their respective state bars for professional misconduct.

Id. at pgs. 13-15.

Lessons of the Case

Sanctions are not an issue when AI is used properly. All you have to do is follow the normal standards of reasonable care: read the cases the AI cites, and check every citation and quotation in your memorandum before you file it with the court. Do that and you will not face Rule 11 sanctions, no matter how many hallucinations the AI throws at you.

Also, when you get caught in a major oops, as happened here, it is certainly not a best practice to simply ignore your mistake and hope it will all go away. The two attorneys representing Iovino apparently tried the ostrich strategy, to no avail. Judge Cullen commented on that directly: "This silence is deafening." Iovino v. Michael Stapleton Assocs., Ltd., at pg. 15.

When you are caught citing fake cases and fake quotes, which defense counsel colorfully called "ChatGPT run amok," then you really do need to fess up and say something intelligent, humble, and completely forthright. The result of the attorneys' giving no response to the AI-run-amok allegations in this case is that District Court Judge Thomas T. Cullen gave the attorneys 21 days "to show cause why they should not be sanctioned and/or referred to their respective state bars for professional misconduct." Ouch.

Lawyers are fond of the old saying "let sleeping dogs lie," but after opposing counsel alleges you have let ChatGPT run amok, the dog is obviously wide awake. Better to speak up immediately than risk facing a show cause hearing later.

When the attorneys respond to the show cause order on sanctions, one assumes they will not compound their error by trying the "blame it on the AI" defense. Artigliere and Losey, “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct (Florida Bar accredited CLE course, 6/28/24). That defense has been tried and almost never works, nor should it. Attorneys have an ethical obligation of due diligence and competence. You cannot blame it on your AI any more than you can blame it on your secretary, paralegal, or associate. Rule 11 is not so easily circumvented. But see Jessica R. Gunder, Rule 11 Is No Match for Generative AI, 27 STAN. TECH. L. REV. 308 (2024).

According to Reuters, which interviewed one of the two attorneys for the plaintiff, Thad Guyer, his sanctions defense may take a slightly different approach.

One of Iovino’s lawyers, Thad Guyer of T.M. Guyer & Friends, told Reuters on Thursday that he uses artificial intelligence, including tools made specifically for legal research, and validates all cases.

The “cases and misquotes were simply string citations, not major or dispositive cases,” said Guyer, who said as the lawyer who signed the filing he was speaking only on his own behalf. He said the two cases exist but were miscited.

Guyer asserted that under professional conduct rules he “has the right to rely on the GPTs” as he would a law student or other sources of information, without requiring “visual inspection” by the person that signs a filing. He said he will defend the conduct to the judge.

Sara Merken, Judge weighs sanctioning lawyers over ‘fictitious’ case citations (Reuters, July 25, 2024). Care to predict how that “visual inspection” defense will turn out?

Conclusion

The story of the troubles of the whistleblower veterinarian and her attorneys has not yet ended. It will be interesting to see what happens next.

Ralph Losey Copyright 2024 — All Rights Reserved