Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations

Ralph Losey, December 17, 2025

I. Introduction: The Untested Expert in Your Office

AI walks into your office like a consulting expert who works fast, charges little, and speaks with knowing confidence. And, like any untested expert, it is capable of being spectacularly wrong. Still, try AI out; just be sure to cross-examine it before using its work product. This article will show you how.

Want AI to do legal research? Find a great case on point? Beware: any ‘Uncrossed AI’ might happily make one up for you. [All images in this article by Ralph Losey using AI tools.]

Lawyers are discovering AI hallucinations the hard way. Courts are sanctioning attorneys who accept AI’s answers at face value and paste them into briefs without a single skeptical question. In the first widely publicized case, Mata v. Avianca, Inc., a lawyer submitted a brief filled with invented cases that looked plausible but did not exist. The judge did not blame the machine. The judge blamed the lawyer. In Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024), the Second Circuit again confronted AI-generated citations that dissolved under scrutiny. Case dismissed. French legal scholar Damien Charlotin has catalogued almost seven hundred similar decisions worldwide in his AI Hallucination Cases project. The pattern is the same: the lawyer treated AI’s private, untested opinion as if it were ready for court. It wasn’t. It never is.

Never accept research or opinions before you skeptically cross-examine the AI.

The solution is not fear or avoidance. It is preparation. Think of AI the way you think of an expert you are preparing to testify. You probe their reasoning. You make sure they are not simply trying to agree with you. You examine their assumptions. You confirm that every conclusion has a basis you can defend. When you apply that same discipline to AI — simple, structured, lawyerly questioning — the hallucinations fall away and the real value emerges.

This article is not about trials. It is about applying cross-examination instincts in the office to control a powerful, fast-talking, low-budget consulting expert who lives in your laptop.

Click here to see video on YouTube of Losey’s encounters with unprepared AIs.

II. AI as Consulting Expert and Testifying Expert: A Hybrid Metaphor That Works

Experienced litigators understand the difference between a consulting expert and a testifying expert. A consulting expert works in private. You explore theories. You stress-test ideas. The expert can make mistakes, change positions, or tell you that your theory is weak. None of it harms the case because none of it leaves the room. It is not discoverable.

Once you convert that same person into a testifying expert, everything changes. Their methodology must be clear. Their assumptions must be sound. Their sources must be disclosed. Their opinions must withstand cross-examination. Their credibility must be earned. And they are open to discovery, subject to only minor restraints.

AI should always start as a secret consulting expert. It answers privately, often brilliantly, sometimes sloppily, and occasionally with complete fabrications. But the moment you rely on its words in a brief, a declaration, a demand letter, a discovery response, or a client advisory, you have promoted that consulting expert to a testifying one. Judges and opposing counsel will evaluate its work that way, even if you didn’t.

This hybrid metaphor — part expert preparation, part cross-examination — is the most accurate way to understand AI in legal practice. It gives you a familiar, legally sound framework for interrogating AI before staking your reputation on its output.

Working with AI and carefully examining its early drafts.

III. Why Lawyers Fear AI Today: The Hallucination Problem Is Real, but Preventable

AI hallucinations sound exotic, but they are neither mysterious nor unpredictable. They arise from familiar causes:

  • training that rewards fluent, confident-sounding answers over accurate ones
  • a built-in drive to please and agree with the questioner
  • vague or open-ended prompts that invite improvisation
  • no instinct to say “I don’t know” unless expressly instructed

Anyone who has ever supervised an over-confident junior associate will recognize these patterns of response. Ask vague questions and reward polished answers, and you will get polished answers whether they are correct or not.

The problem is not that AI hallucinates. The problem is that lawyers forget to interrogate the hallucination before adopting it.

Never rely on an AI that has not been cross-examined.

Frustration is mounting among both lawyers and judges. Charlotin’s global hallucination database reads like a catalogue of avoidable errors. Lawyers cite nonexistent cases, rely on invented quotations, or submit timelines that collapse the moment a judge asks a basic question. Courts have stopped treating these problems as innocent misunderstandings about new technology. Increasingly, they see them as failures of competence and diligence.

The encouraging news is that hallucinations collapse under even moderate questioning. AI improvises confidently when no one pushes back. It becomes far more accurate under pressure.

That pressure is supplied by cross-examination.

Team approach to AI prep works well, including other AIs.

IV. Five Cross-Examination Techniques for AI

The techniques below are adapted from how lawyers question both their own experts and adverse ones. They require no technical training. They rely entirely on skills lawyers already use: asking clear questions, demanding reasoning, exposing assumptions, and verifying claims.

The five techniques are:

  1. Ask for the basis of the opinion.
  2. Probe uncertainty and limits.
  3. Present the opposing argument.
  4. Test internal consistency.
  5. Build a verification pathway.

Each can be implemented through simple, repeatable prompts.

Click to see YouTube video of this associate’s presentation to partners of the AI cross-exam.

1. Ask for the Basis of the Opinion

AI developers use the word “mechanism.” Lawyers use reasoning, methodology, procedure, or logic. Whatever the label, you need to know how the model reached its conclusion.

Instead of asking, “What’s the law on negligent misrepresentation in Florida?” ask:

“Walk me through your reasoning step by step. List the elements, the leading cases, and the authorities you are relying on. For each step, explain why the case applies.”

This produces a reasoning ladder rather than a polished paragraph. You can inspect the rungs and see where the structure holds or collapses.

Ask AI explicitly to:

  • identify each reasoning step
  • list assumptions about facts or law
  • cite authorities for each step
  • rate confidence in each part of the analysis

If the reasoning chain buckles, the hallucination reveals itself.
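
For firms that reach AI through an API or a scripted workflow rather than a chat window, the same demand for a basis can travel with every question automatically. The fragment below is only a minimal sketch of that idea, not a recommended tool: it assumes the openai Python client, and the model name, the sample question, and the BASIS_INSTRUCTIONS wording are all placeholders to adapt to your own practice.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    # A reusable "basis of the opinion" instruction, mirroring the prompts above.
    BASIS_INSTRUCTIONS = (
        "Answer as a consulting expert preparing to be cross-examined. "
        "Number each reasoning step. For every step: (a) state the factual or "
        "legal assumption it rests on, (b) cite the authority relied on, and "
        "(c) rate your confidence as high, medium, or low. If you cannot cite "
        "an authority for a step, say so plainly instead of inventing one."
    )

    question = "What are the elements of negligent misrepresentation in Florida?"

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your firm has approved
        messages=[
            {"role": "system", "content": BASIS_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)  # the reasoning ladder to inspect

The point is not the particular library. It is that the demand for steps, assumptions, authorities, and confidence ratings is baked in, rather than depending on someone remembering to type it.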

Click here for short YouTube video animation about reasoning cross.

2. Probe Uncertainty and Limits

AI tries to be helpful and agreeable. It will give you certainty even when that certainty is fake. The Internet text it was trained on almost never says, “I don’t know the answer,” so the model rarely says it either. You have to train your AI, in prompts and project instructions, to admit what it does not know. Demand honesty. Demand truth over agreement with your own thoughts and desires. Tell it, repeatedly, to admit when it does not know the answer or is uncertain. Get it to explain what it does not know and what it cannot support with citations. Get it to reveal the unknowns.

Most AIs do not like to admit they don’t know. Do you?

Ask your AI:

  • “What do you not know that might affect this conclusion?”
  • “What facts would change your analysis?”
  • “Which part of your reasoning is weakest?”
  • “Which assumptions are unstated or speculative?”

Good human experts do this instinctively. They mark the edges of their expertise. AI will also do it, but only when asked.
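
If your team scripts its AI use, the honesty instruction and the uncertainty probes can be stored once and reused, so they are asked every time rather than only when someone remembers. This is a minimal sketch under the same assumptions as the earlier fragment (the openai client, a placeholder model name, a sample question); probe_uncertainty is a hypothetical helper written for this illustration, not a library function.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key in the environment

    # Standing instruction for a project, custom GPT, or system prompt.
    HONESTY_INSTRUCTION = (
        "Truth over agreement. If you do not know something, say 'I do not know.' "
        "Never invent a citation. Flag every statement you cannot support."
    )

    # The follow-up probes from the list above, sent after the first answer.
    UNCERTAINTY_PROBES = [
        "What do you not know that might affect this conclusion?",
        "What facts would change your analysis?",
        "Which part of your reasoning is weakest?",
        "Which assumptions are unstated or speculative?",
    ]

    def probe_uncertainty(messages: list[dict]) -> list[dict]:
        """Append each probe to an existing conversation and record the answers."""
        for probe in UNCERTAINTY_PROBES:
            messages.append({"role": "user", "content": probe})
            reply = client.chat.completions.create(model="gpt-4o", messages=messages)
            messages.append(
                {"role": "assistant", "content": reply.choices[0].message.content})
        return messages

    # Start every conversation with the honesty instruction, then the question.
    conversation = [
        {"role": "system", "content": HONESTY_INSTRUCTION},
        {"role": "user", "content": "Is this non-compete enforceable in Florida?"},
    ]
    first = client.chat.completions.create(model="gpt-4o", messages=conversation)
    conversation.append(
        {"role": "assistant", "content": first.choices[0].message.content})
    conversation = probe_uncertainty(conversation)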

Click here for YouTube animation of AI cross of its unknowns.

3. Present the Opposing Argument

If you only ask, “Why am I right?” AI will gladly tell you why you are right. Sycophancy is one of its worst habits.

Counteract that by assigning it the opposing role:

  • “Give me the strongest argument against your conclusion.”
  • “How would opposing counsel attack this reasoning?”
  • “What weaknesses in my theory would they highlight?”

This is the same preparation you would do with a human expert before deposition: expose vulnerabilities privately so they do not explode publicly.
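
In a scripted workflow, the opposing-argument prompt is simply a second turn in the same conversation, so the model attacks the answer it just gave rather than a straw man. Another minimal sketch under the same assumptions as the earlier fragments (hypothetical openai client setup, placeholder model and question; ask is an illustrative helper):

    from openai import OpenAI

    client = OpenAI()  # assumes an API key in the environment

    def ask(messages: list[dict], model: str = "gpt-4o") -> str:
        """One chat turn; returns the assistant's reply text."""
        reply = client.chat.completions.create(model=model, messages=messages)
        return reply.choices[0].message.content

    messages = [{"role": "user", "content":
                 "Is our motion to compel likely to succeed on these facts? Explain."}]
    first_answer = ask(messages)
    messages.append({"role": "assistant", "content": first_answer})

    # Now assign the opposing role, against the model's own conclusion.
    messages.append({"role": "user", "content":
                     "Give me the strongest argument against your conclusion, as "
                     "opposing counsel would frame it, and list the weaknesses in "
                     "my theory they would highlight."})
    print(ask(messages))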

Quality control by counter-arguments. Click here for short YouTube animation.

4. Test Internal Consistency

Hallucinations are brittle. Real reasoning is sturdy.

You expose the difference by asking the model to repeat or restructure its own analysis.

  • “Restate your answer using a different structure.”
  • “Summarize your prior answer in three bullet points and identify inconsistencies.”
  • “Explain your earlier analysis focusing only on law; now do the same focusing only on facts.”

If the second answer contradicts the first, you know the foundation is weak.

This is impeachment in the office, not in the courtroom.
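
The consistency test is also easy to run in a scripted workflow. One variation on the restatement prompts above is to ask the same question twice in separate conversations and different structures, then make the model compare its own two answers. A sketch under the same assumptions as the earlier fragments (placeholder model, illustrative ask helper):

    from openai import OpenAI

    client = OpenAI()  # assumes an API key in the environment

    def ask(prompt: str, model: str = "gpt-4o") -> str:
        reply = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}])
        return reply.choices[0].message.content

    question = ("What did the Second Circuit hold in Park v. Kim, "
                "91 F.4th 610 (2d Cir. 2024)?")

    narrative = ask(question + " Explain fully, citing the opinion.")
    bullets = ask(question + " Answer in exactly three bullet points.")

    # Impeachment in the office: have the model list every inconsistency.
    print(ask(
        "Here are two answers to the same question.\n\nANSWER A:\n" + narrative +
        "\n\nANSWER B:\n" + bullets +
        "\n\nList every inconsistency between them, however small. "
        "If there are none, say so explicitly."))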

Click here for YouTube animation on contradictions.

5. Build a Verification Pathway

Hallucinations survive only when no one checks the sources.

Verification destroys them.

Always:

  • read every case AI cites and make sure the cited court actually issued the opinion (of course, also check the case history to verify it is still good law)
  • confirm that the quotations actually appear in the opinion (sometimes small errors creep in)
  • check jurisdiction, posture, and relevance (normal lawyer or paralegal analysis)
  • verify every critical factual claim and legal conclusion

This is not “extra work” created by AI. It is the same work lawyers owe courts and clients. The difference is simply that AI can produce polished nonsense faster than a junior associate. Overall, after you learn the AI testing skills, the time and money saved will be significant. This associate practically works for free with no breaks for sleep, much less food or coffee.

Your job is to slow it down. Turn it off while you check its work.
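
Verification itself must be done by a human in Westlaw, Lexis, or the court’s own records, but building the checklist of what to verify can be partly automated. The fragment below is a rough triage aid only, offered as a sketch: the citation pattern is simplistic and will miss many formats, and a match proves nothing except that a string looks like a citation and therefore must be pulled and read.

    import re

    # Rough pattern for common federal reporter citations, e.g. "91 F.4th 610".
    CITATION_PATTERN = re.compile(
        r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?)\s+\d{1,4}\b"
    )

    def citation_checklist(ai_draft: str) -> list[str]:
        """Return a de-duplicated list of citation-like strings in an AI draft.
        Every entry still has to be located and read by a person; this only
        tells you what to check, not whether it is real."""
        return sorted(set(CITATION_PATTERN.findall(ai_draft)))

    draft = "Plaintiff relies on Park v. Kim, 91 F.4th 610 (2d Cir. 2024), and ..."
    for cite in citation_checklist(draft):
        print("VERIFY:", cite)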

Always carefully check the work of your AIs.

V. How Cross-Examination Dramatically Reduces Hallucinations

Cross-examination is not merely a metaphor here. It is the mechanism — in the lawyer’s meaning of the word — that exposes fabrication and reveals truth.

Consider three realistic hypotheticals.

1. E-Discovery Misfire

AI says a custodian likely has “no relevant emails” based on role assumptions.

You ask: “List the assumptions you relied on.”

It admits it is basing its view on a generic corporate structure.

You know this company uses engineers in customer-facing negotiations.

Hallucination avoided.

2. Employment Retaliation Timeline

AI produces a clean timeline that looks authoritative.

You ask: “Which dates are certain and which were inferred?”

AI discloses that it guessed the order of two meetings because the record was ambiguous.

You go back to the documents.

Hallucination avoided.

3. Contract Interpretation

AI asserts that Paragraph 14 controls termination rights.

You ask: “Show me the exact language you relied on and identify any amendments that affect it.”

It re-reads the contract and reverses itself.

Hallucination avoided.

The common thread: pressure reveals quality.

Without pressure, hallucinations pass for analysis.

Work closely with your AI to improve and verify its output.

VI. Why Litigators Have a Natural Advantage — And How Everyone Else Can Learn

Litigators instinctively challenge statements. They distrust unearned confidence. They ask what assumptions lie beneath a conclusion. They know how experts wilt when they cannot defend their methodology.

But adversarial reasoning is not limited to courtrooms. Transactional lawyers use it in negotiations. In-house lawyers use it in risk assessments. Judges use it in weighing credibility. Paralegals and case managers use it in preparing witnesses and assembling factual narratives.

Anyone in the legal profession can practice:

  • asking short, precise questions
  • demanding reasoning, not just conclusions
  • exploring alternative explanations
  • surfacing uncertainty
  • checking for consistency

Cross-examining AI is not a trial skill. It is a thinking skill — one shared across the profession.

Thinking like a lawyer is a prerequisite for AI training; be skeptical and objective.

VII. The Lawyer’s Advantage Over AI

AI is inexpensive, fast, tireless, and deeply cross-disciplinary. It can outline arguments, summarize thousands of pages, and identify patterns across cases at a speed humans cannot match. It never complains about deadlines and never asks for a retainer.

Human experts outperform AI when judgment, nuance, emotional intelligence, or domain mastery are decisive. But those experts are not available for every issue in every matter.

AI provides breadth. Lawyers provide judgment.

AI provides speed. Lawyers provide skepticism.

AI provides possibilities. Lawyers decide what is real.

Properly interrogated, AI becomes a force multiplier for the profession.

Uninterrogated, it becomes a liability.

Good lawyers challenge and refine their AI output.

VIII. Courts Expect Verification — And They Are Right

Judges are not asking lawyers to become engineers or to audit model weights. They are asking lawyers to verify their work.

In hallucination sanction cases, courts ask basic questions:

  • Did you read the cases before citing them?
  • Did you confirm that the case exists in any reporter?
  • Did you verify the quotations?
  • Did you investigate after concerns were raised?

When the answer is no, blame falls on the lawyer, not on the software.

Verification is the heart of legal practice.

It just takes a few minutes to spot and correct the hallucinated cases. The AI needs your help.

IX. Practical Protocol: How to Cross-Examine Your AI Before You Rely on It

A reliable process helps prevent mistakes. Here is a simple, repeatable, three-phase protocol.

Phase 1: Prepare

  1. Clarify the task. Ask narrow, jurisdiction-specific, time-anchored questions.

  2. Provide context. Give procedural posture, factual background, and applicable law.

  3. Request reasoning and sources up front. Tell AI you will be reviewing the foundation.

Phase 2: Interrogate

  1. Ask for step-by-step reasoning.
  2. Probe what the model does not know.
  3. Have it argue the opposite side.
  4. Ask for the analysis again, in a different structure.

This phase mimics preparing your own expert — in private.

Phase 3: Verify

  1. Check every case in a trusted database.
  2. Confirm factual claims against your own record.
  3. Decide consciously which parts to adopt, revise, or discard.

Do all this and, if a judge or client later asks, “What did you do to verify this?”, you will have a real answer.
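
For readers who want the protocol in executable form, here is one way the first two phases could be wired together in a script. It is a sketch under the same assumptions as the earlier fragments (the openai Python client, a placeholder model name, a hypothetical cross_examine helper), not a prescribed tool. Phase 3 deliberately stays human, because no script can read a case for you.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key in the environment

    # Phase 1: Prepare. A standing instruction sent with every question.
    PREPARE = (
        "You are assisting a lawyer. Before answering, state the jurisdiction, "
        "the date your knowledge runs through, and every factual or legal "
        "assumption you are making. Say 'I do not know' rather than guess, and "
        "never invent a citation. Your reasoning and sources will be reviewed."
    )

    # Phase 2: Interrogate. The cross-examination follow-ups, in order.
    INTERROGATE = [
        "Walk me through your reasoning step by step, citing an authority for each step.",
        "What do you not know that might affect this conclusion?",
        "Give me the strongest argument against your conclusion.",
        "Restate your analysis in a different structure and flag any inconsistency "
        "with your earlier answers.",
    ]

    def cross_examine(question: str, model: str = "gpt-4o") -> list[str]:
        """Run Phases 1 and 2 as one multi-turn exchange; return every answer.
        Phase 3 (checking every case in a trusted database and every fact
        against the record) is the lawyer's job and is not automated here."""
        messages = [{"role": "system", "content": PREPARE},
                    {"role": "user", "content": question}]
        transcript = []
        for follow_up in [None] + INTERROGATE:
            if follow_up is not None:
                messages.append({"role": "user", "content": follow_up})
            reply = client.chat.completions.create(model=model, messages=messages)
            answer = reply.choices[0].message.content
            messages.append({"role": "assistant", "content": answer})
            transcript.append(answer)
        return transcript

The transcript that comes back is the private deposition of your consulting expert: read it, mark what holds up, and take only the verified parts forward.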

It takes some training and experience, but keeping your AI under control is really not that hard.

X. The Positive Side: AI Becomes Powerful After Cross-Examination

Once you adopt this posture, AI becomes far less dangerous and far more valuable.

When you know you can expose hallucinations with a few well-crafted questions, you stop fearing the tool. You start seeing it as an idea generator, a drafting assistant, a logic checker, and even a sparring partner. It shows you the shape of opposing arguments. It reveals where your theory is vulnerable. It highlights ambiguities you had overlooked.

Cross-examination does not weaken AI.

It strengthens the partnership between human lawyer and machine.

Click here for video animation on YouTube.

XI. Conclusion: The Return of the Lawyer

Cross-examining your AI is not a theatrical performance. It is the methodical preparation that seasoned litigators use whenever they evaluate expert opinions. When you ask AI for its basis, test alternative explanations, probe uncertainty, check consistency, and verify its claims, you transform raw guesses into analysis that can withstand scrutiny.

Complex assignments always take more time but the improved quality AI can bring is well worth it.

Courts are no longer forgiving lawyers who fall for a sycophantic AI and skip this step. But they respect lawyers who demonstrate skeptical, adversarial reasoning — the kind that prevents hallucinations, avoids sanctions, and earns judicial confidence. More importantly, this discipline unlocks AI’s real advantages: speed, breadth, creativity, and cross-disciplinary insight.

The cure for hallucinations is not technical.

It is skeptical, adversarial reasoning.

Cross-examine first. Rely second.

That is how AI becomes a trustworthy partner in modern practice.

See the animation of our goodbye summary on the YouTube video. Click here.

Ralph Losey Copyright 2025 — All Rights Reserved

