Ralph Losey, December 17, 2025
I. Introduction: The Untested Expert in Your Office
AI walks into your office like a consulting expert who works fast, costs little, and speaks with knowing confidence. And, like any untested expert, it is capable of being spectacularly wrong. Still, try AI out; just be sure to cross-examine it before using the work product. This article will show you how.

Lawyers are discovering AI hallucinations the hard way. Courts are sanctioning attorneys who accept AI’s answers at face value and paste them into briefs without a single skeptical question. In the first widely publicized case, Mata v. Avianca, Inc., a lawyer submitted a brief filled with invented cases that looked plausible but did not exist. The judge did not blame the machine. The judge blamed the lawyer. In Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024), the Second Circuit again confronted AI-generated citations that dissolved under scrutiny. Case dismissed. French legal scholar Damien Charlotin has catalogued almost seven hundred similar decisions worldwide in his AI Hallucination Cases project. The pattern is the same: the lawyer treated AI’s private, untested opinion as if it were ready for court. It wasn’t. It never is.

The solution is not fear or avoidance. It is preparation. Think of AI the way you think of an expert you are preparing to testify. You probe their reasoning. You make sure they are not simply trying to agree with you. You examine their assumptions. You confirm that every conclusion has a basis you can defend. When you apply that same discipline to AI — simple, structured, lawyerly questioning — the hallucinations fall away and the real value emerges.
This article is not about trials. It is about applying cross-examination instincts in the office to control a powerful, fast-talking, low-budget consulting expert who lives in your laptop.
II. AI as Consulting Expert and Testifying Expert: A Hybrid Metaphor That Works
Experienced litigators understand the difference between a consulting expert and a testifying expert. A consulting expert works in private. You explore theories. You stress-test ideas. The expert can make mistakes, change positions, or tell you that your theory is weak. None of it harms the case because none of it leaves the room. It is not discoverable.
Once you convert that same person into a testifying expert, everything changes. Their methodology must be clear. Their assumptions must be sound. Their sources must be disclosed. Their opinions must withstand cross-examination. Their credibility must be earned. And discovery of their work is wide open, subject to only minor restraints.
AI should always start as a secret consulting expert. It answers privately, often brilliantly, sometimes sloppily, and occasionally with complete fabrications. But the moment you rely on its words in a brief, a declaration, a demand letter, a discovery response, or a client advisory, you have promoted that consulting expert to a testifying one. Judges and opposing counsel will evaluate its work that way — even if you didn’t.
This hybrid metaphor — part expert preparation, part cross-examination — is the most accurate way to understand AI in legal practice. It gives you a familiar, legally sound framework for interrogating AI before staking your reputation on its output.

III. Why Lawyers Fear AI Today: The Hallucination Problem Is Real, but Preventable
AI hallucinations sound exotic, but they are neither mysterious nor unpredictable. They arise from familiar causes:
• lack of factual context
• ambiguous or overly broad prompts
• overgeneralization from incomplete data
• gaps or bias in the training set
• the model’s instinct to infer patterns that are not really there
• its reluctance to admit “I don’t know”
• its tendency toward flattery and agreement
Anyone who has ever supervised an overconfident junior associate will recognize these patterns of response. Ask vague questions and reward polished answers, and you will get polished answers whether they are correct or not.
The problem is not that AI hallucinates. The problem is that lawyers forget to interrogate the hallucination before adopting it.
Frustration is mounting among both lawyers and judges. Charlotin’s global hallucination database reads like a catalogue of avoidable errors. Lawyers cite nonexistent cases, rely on invented quotations, or submit timelines that collapse the moment a judge asks a basic question. Courts have stopped treating these problems as innocent misunderstandings about new technology. Increasingly, they see them as failures of competence and diligence.
The encouraging news is that hallucinations tend to collapse under even moderate questioning. AI improvises confidently in silence. It becomes far more accurate under pressure.
That pressure is supplied by cross-examination.

IV. Five Cross-Examination Techniques for AI
The techniques below are adapted from how lawyers question both their own experts and adverse ones. They require no technical training. They rely entirely on skills lawyers already use: asking clear questions, demanding reasoning, exposing assumptions, and verifying claims.
The five techniques are:
- Ask for the basis of the opinion.
- Probe uncertainty and limits.
- Present the opposing argument.
- Test internal consistency.
- Build a verification pathway.
Each can be implemented through simple, repeatable prompts.

1. Ask for the Basis of the Opinion
AI developers use the word “mechanism.” Lawyers use reasoning, methodology, procedure, or logic. Whatever the label, you need to know how the model reached its conclusion.
Instead of asking, “What’s the law on negligent misrepresentation in Florida?” ask:
“Walk me through your reasoning step by step. List the elements, the leading cases, and the authorities you are relying on. For each step, explain why the case applies.”
This produces a reasoning ladder rather than a polished paragraph. You can inspect the rungs and see where the structure holds or collapses.
Ask AI explicitly to:
- identify each reasoning step
- list assumptions about facts or law
- cite authorities for each step
- rate confidence in each part of the analysis
If the reasoning chain buckles, the hallucination reveals itself.
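For readers who reach AI through an API or a scripted workflow rather than a chat window, this first technique can be written down once and reused on every question. Below is a minimal sketch in Python, assuming the OpenAI Python client; the instruction text, the model name, and the sample question are placeholders of my own invention, not anyone’s official prompt, and the same instructions can simply be pasted into any chat tool’s custom-instruction field.

```python
# A minimal sketch of a reusable "basis of the opinion" prompt.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# environment. The model name and the sample question are placeholders only.
from openai import OpenAI

CROSS_EXAM_INSTRUCTIONS = """You are a consulting expert being prepared for
cross-examination. For every conclusion you offer:
1. Walk through your reasoning step by step.
2. List every assumption you are making about the facts or the law.
3. Cite the authority you rely on for each step.
4. Rate your confidence in each step (high / medium / low).
If you do not know something, say "I don't know" rather than guessing."""


def ask_with_basis(question: str, model: str = "gpt-4o") -> str:
    """Send a legal question wrapped in cross-examination instructions."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CROSS_EXAM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_with_basis(
        "What are the elements of negligent misrepresentation in Florida, "
        "and which cases state them?"
    ))
```

The point is not the code. The point is that the cross-examination instructions are written once and applied every time, not improvised in the moment.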

2. Probe Uncertainty and Limits
AI tries to be helpful and agreeable. It will give you certainty even when that certainty is fake. The Internet text it was trained on rarely said, “I don’t know the answer.” So you have to train your AI, in prompts and project instructions, to admit when it does not know or is uncertain. Demand honesty. Demand truth over agreement with your own thoughts and desires. Get it to explain what it does not know and what it cannot support with citations. Get it to reveal the unknowns.

Ask your AI:
- “What do you not know that might affect this conclusion?”
- “What facts would change your analysis?”
- “Which part of your reasoning is weakest?”
- “Which assumptions are unstated or speculative?”
Good human experts do this instinctively. They mark the edges of their expertise. AI will also do it, but only when asked.

3. Present the Opposing Argument
If you only ask, “Why am I right?” AI will gladly tell you why you are right. Sycophancy is one of its worst habits.
Counteract that by assigning it the opposing role:
- “Give me the strongest argument against your conclusion.”
- “How would opposing counsel attack this reasoning?”
- “What weaknesses in my theory would they highlight?”
This is the same preparation you would do with a human expert before deposition: expose vulnerabilities privately so they do not explode publicly.

4. Test Internal Consistency
Hallucinations are brittle. Real reasoning is sturdy.
You expose the difference by asking the model to repeat or restructure its own analysis.
- “Restate your answer using a different structure.”
- “Summarize your prior answer in three bullet points and identify inconsistencies.”
- “Explain your earlier analysis focusing only on law; now do the same focusing only on facts.”
If the second answer contradicts the first, you know the foundation is weak.
This is impeachment in the office, not in the courtroom.

5. Build a Verification Pathway
Hallucinations survive only when no one checks the sources.
Verification destroys them.
Always:
- read every case AI cites and make sure the cited court actually issued the opinion (and, of course, check subsequent history to verify it is still good law)
- confirm that the quotations actually appear in the opinion (sometimes small errors creep in)
- check jurisdiction, posture, and relevance (normal lawyer or paralegal analysis)
- verify every critical factual claim and legal conclusion
This is not “extra work” created by AI. It is the same work lawyers owe courts and clients. The difference is simply that AI can produce polished nonsense faster than a junior associate. Overall, once you learn these AI testing skills, the time and money saved will be significant. This associate practically works for free, with no breaks for sleep, much less food or coffee.
Your job is to slow it down. Turn it off while you check its work.
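If you or your staff want a head start on building the checklist itself (the judgment still belongs to a human), a short script can at least pull the citations out of an AI draft and put them on a to-do list. This is a rough sketch of my own; the citation pattern is deliberately crude, it is not a citator, and every item it flags still has to be read and confirmed in Westlaw, Lexis, or another trusted database.

```python
# A minimal sketch: pull reporter-style citations out of an AI draft and turn
# them into a human verification checklist. The regex is deliberately crude
# (it catches forms like "91 F.4th 610" or "573 U.S. 208") and will miss or
# over-match some citations; it is a starting point, not a citator.
import re

CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+"                                              # volume number
    r"(?:U\.S\.|S\. ?Ct\.|F\.(?: ?Supp\.)?(?: ?[234]d| ?4th)?|So\. ?[23]d)"
    r"\s+\d{1,5}\b"                                              # first page
)


def verification_checklist(draft_text: str) -> list[str]:
    """Return a deduplicated list of citations a human must still confirm."""
    seen: list[str] = []
    for cite in CITATION_PATTERN.findall(draft_text):
        if cite not in seen:
            seen.append(cite)
    return seen


if __name__ == "__main__":
    draft = (
        "See Park v. Kim, 91 F.4th 610 (2d Cir. 2024); "
        "but cf. Smith v. Jones, 123 F.4th 456 (11th Cir. 2025)."  # hypothetical cite
    )
    for cite in verification_checklist(draft):
        print(f"[ ] confirm in a trusted database: {cite} "
              f"(read it, check quotations, check subsequent history)")
```

The script only produces the list. Reading each opinion, confirming the quotations, and checking subsequent history remain human work.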

V. How Cross-Examination Dramatically Reduces Hallucinations
Cross-examination is not merely a metaphor here. It is the mechanism — in the lawyer’s meaning of the word — that exposes fabrication and reveals truth.
Consider three realistic hypotheticals.
1. E-Discovery Misfire
AI says a custodian likely has “no relevant emails” based on role assumptions.
You ask: “List the assumptions you relied on.”
It admits it is basing its view on a generic corporate structure.
You know this company uses engineers in customer-facing negotiations.
Hallucination avoided.
2. Employment Retaliation Timeline
AI produces a clean timeline that looks authoritative.
You ask: “Which dates are certain and which were inferred?”
AI discloses that it guessed the order of two meetings because the record was ambiguous.
You go back to the documents.
Hallucination avoided.
3. Contract Interpretation
AI asserts that Paragraph 14 controls termination rights.
You ask: “Show me the exact language you relied on and identify any amendments that affect it.”
It re-reads the contract and reverses itself.
Hallucination avoided.
The common thread: pressure reveals quality.
Without pressure, hallucinations pass for analysis.

VI. Why Litigators Have a Natural Advantage — And How Everyone Else Can Learn
Litigators instinctively challenge statements. They distrust unearned confidence. They ask what assumptions lie beneath a conclusion. They know how experts wilt when they cannot defend their methodology.
But adversarial reasoning is not limited to courtrooms. Transactional lawyers use it in negotiations. In-house lawyers use it in risk assessments. Judges use it in weighing credibility. Paralegals and case managers use it in preparing witnesses and assembling factual narratives.
Anyone in the legal profession can practice:
- asking short, precise questions
- demanding reasoning, not just conclusions
- exploring alternative explanations
- surfacing uncertainty
- checking for consistency
Cross-examining AI is not a trial skill. It is a thinking skill — one shared across the profession.

VII. The Lawyer’s Advantage Over AI
AI is inexpensive, fast, tireless, and deeply cross-disciplinary. It can outline arguments, summarize thousands of pages, and identify patterns across cases at a speed humans cannot match. It never complains about deadlines and never asks for a retainer.
Human experts outperform AI when judgment, nuance, emotional intelligence, or domain mastery are decisive. But those experts are not available for every issue in every matter.
AI provides breadth. Lawyers provide judgment.
AI provides speed. Lawyers provide skepticism.
AI provides possibilities. Lawyers decide what is real.
Properly interrogated, AI becomes a force multiplier for the profession.
Uninterrogated, it becomes a liability.

VIII. Courts Expect Verification — And They Are Right
Judges are not asking lawyers to become engineers or to audit model weights. They are asking lawyers to verify their work.
In hallucination sanction cases, courts ask basic questions:
- Did you read the cases before citing them?
- Did you confirm that the case exists in any reporter?
- Did you verify the quotations?
- Did you investigate after concerns were raised?
When the answer is no, blame falls on the lawyer, not on the software.
Verification is the heart of legal practice.
IX. Practical Protocol: How to Cross-Examine Your AI Before You Rely on It
A reliable process helps prevent mistakes. Here is a simple, repeatable, three-phase protocol.
Phase 1: Prepare
- Clarify the task. Ask narrow, jurisdiction-specific, time-anchored questions.
- Provide context. Give procedural posture, factual background, and applicable law.
- Request reasoning and sources up front. Tell AI you will be reviewing the foundation.
Phase 2: Interrogate
- Ask for step-by-step reasoning.
- Probe what the model does not know.
- Have it argue the opposite side.
- Ask for the analysis again, in a different structure.
This phase mimics preparing your own expert — in private.
Phase 3: Verify
- Check every case in a trusted database.
- Confirm factual claims against your own record.
- Decide consciously which parts to adopt, revise, or discard.
Do all this, and if a judge or client later asks, “What did you do to verify this?”, you have a real answer.
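Teams that want that answer in writing can keep a simple verification log. The sketch below is illustrative only; the field names and workflow are my own assumptions, meant to be adapted to your matter-management practice, not a prescribed standard.

```python
# A minimal sketch of a verification log, so the answer to "what did you do to
# verify this?" is a record rather than a recollection. All names here are
# illustrative; adapt them to your own matter-management conventions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class VerificationEntry:
    claim: str        # the AI statement or citation being checked
    method: str       # e.g., "read full opinion in a trusted database"
    result: str       # "confirmed", "revised", or "discarded"
    checked_by: str
    checked_on: date = field(default_factory=date.today)


@dataclass
class VerificationLog:
    matter: str
    entries: list[VerificationEntry] = field(default_factory=list)

    def record(self, entry: VerificationEntry) -> None:
        self.entries.append(entry)

    def summary(self) -> str:
        lines = [f"Verification log for {self.matter}:"]
        for e in self.entries:
            lines.append(
                f"- {e.checked_on} {e.checked_by}: {e.claim} -> {e.result} ({e.method})"
            )
        return "\n".join(lines)


# Usage (hypothetical matter name):
#   log = VerificationLog("Doe v. Roe")
#   log.record(VerificationEntry(
#       claim="Park v. Kim, 91 F.4th 610 (2d Cir. 2024)",
#       method="read full opinion; confirmed quotation at 612",
#       result="confirmed",
#       checked_by="associate reviewing draft",
#   ))
#   print(log.summary())
```

Printed or exported, the log is the paper trail that turns “I checked it” into a documented answer.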

X. The Positive Side: AI Becomes Powerful After Cross-Examination
Once you adopt this posture, AI becomes far less dangerous and far more valuable.
When you know you can expose hallucinations with a few well-crafted questions, you stop fearing the tool. You start seeing it as an idea generator, a drafting assistant, a logic checker, and even a sparring partner. It shows you the shape of opposing arguments. It reveals where your theory is vulnerable. It highlights ambiguities you had overlooked.
Cross-examination does not weaken AI.
It strengthens the partnership between human lawyer and machine.

XI. Conclusion: The Return of the Lawyer
Cross-examining your AI is not a theatrical performance. It is the methodical preparation that seasoned litigators use whenever they evaluate expert opinions. When you ask AI for its basis, test alternative explanations, probe uncertainty, check consistency, and verify its claims, you transform raw guesses into analysis that can withstand scrutiny.

Courts are no longer forgiving lawyers who fall for a sycophantic AI and skip this step. But they respect lawyers who demonstrate skeptical, adversarial reasoning — the kind that prevents hallucinations, avoids sanctions, and earns judicial confidence. More importantly, this discipline unlocks AI’s real advantages: speed, breadth, creativity, and cross-disciplinary insight.
The cure for hallucinations is not technical.
It is skeptical, adversarial reasoning.
Cross-examine first. Rely second.
That is how AI becomes a trustworthy partner in modern practice.
Ralph Losey Copyright 2025 — All Rights Reserved