2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law

December 31, 2025

Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI while remaining in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their legal education and skeptical habits of mind. We can quickly learn to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can be enhanced by total entanglement with AI without losing control or separate identity. Click here or the image to see video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts human legal minds in control and stops the hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? The AI abilities are improving quickly but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While the dangers of the quantum future gestated, the AI industry suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geopolitical situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage, although that is thought to be farther off in time.

IV. Bench Testing the AI Models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training data. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o.  March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The series concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may run this test again in 2026. For legal reasoning, the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change someday, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two, and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of using advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.

My “Human Edge” article, written in January 2025, buttressed the need for keeping a human in control; it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations

December 17, 2025

Ralph Losey, December 17, 2025

I. Introduction: The Untested Expert in Your Office

AI walks into your office like a consulting expert who works fast, inexpensively, and speaks with knowing confidence. And, like any untested expert, it is capable of being spectacularly wrong. Still, try AI out; just be sure to cross-examine it before using the work product. This article will show you how.

A friendly-looking robot with a white exterior and glowing blue eyes, set against a wooden background. The robot has a broad smile and a tagline that reads, 'AI is only too happy to please.'
Want AI to do legal research? Find a great case on point? Beware: any ‘Uncrossed AI’ might happily make one up for you. [All images in this article by Ralph Losey using AI tools.]

Lawyers are discovering AI hallucinations the hard way. Courts are sanctioning attorneys who accept AI’s answers at face value and paste them into briefs without a single skeptical question. In the first prominent case, Mata v. Avianca, Inc., a lawyer submitted a brief filled with invented cases that looked plausible but did not exist. The judge did not blame the machine. The judge blamed the lawyer. In Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024), the Second Circuit again confronted AI-generated citations that dissolved under scrutiny. Case dismissed. French legal scholar Damien Charlotin has catalogued almost seven hundred similar decisions worldwide in his AI Hallucination Cases project. The pattern is the same: the lawyer treated AI’s private, untested opinion as if it were ready for court. It wasn’t. It never is.

A holographic figure resembling a consultant sits at a table with two lawyers, one male and one female, who appear to be observing the figure's briefcase labeled 'ANSWERS.' Books are placed on the table.
Never accept research or opinions before you skeptically cross-examine the AI.

The solution is not fear or avoidance. It is preparation. Think of AI the way you think of an expert you are preparing to testify. You probe their reasoning. You make sure they are not simply trying to agree with you. You examine their assumptions. You confirm that every conclusion has a basis you can defend. When you apply that same discipline to AI — simple, structured, lawyerly questioning — the hallucinations fall away and the real value emerges.

This article is not about trials. It is about applying cross-examination instincts in the office to control a powerful, fast-talking, low-budget consulting expert who lives in your laptop.

Click here to see video on YouTube of Losey’s encounters with unprepared AIs.

II. AI as Consulting Expert and Testifying Expert: A Hybrid Metaphor That Works

Experienced litigators understand the difference between a consulting expert and a testifying expert. A consulting expert works in private. You explore theories. You stress-test ideas. The expert can make mistakes, change positions, or tell you that your theory is weak. None of it harms the case because none of it leaves the room. It is not discoverable.

Once you convert that same person into a testifying expert, everything changes. Their methodology must be clear. Their assumptions must be sound. Their sources must be disclosed. Their opinions must withstand cross-examination. Their credibility must be earned. Discovery of a testifying expert is wide open, subject only to minor restraints.

AI should always start as a secret consulting expert. It answers privately, often brilliantly, sometimes sloppily, and occasionally with complete fabrications. But the moment you rely on its words in a brief, a declaration, a demand letter, a discovery response, or a client advisory, you have promoted that consulting expert to a testifying one. Judges and opposing counsel will evaluate its work that way — even if you didn’t.

This hybrid metaphor — part expert preparation, part cross-examination — is the most accurate way to understand AI in legal practice. It gives you a familiar, legally sound framework for interrogating AI before staking your reputation on its output.

A lawyer seated at a desk reading documents, with a holographic figure representing AI or an expert consultant displayed next to him.
Working with AI and carefully examining its early drafts.

III. Why Lawyers Fear AI Today: The Hallucination Problem Is Real, but Preventable

AI hallucinations sound exotic, but they are neither mysterious nor unpredictable. They arise from familiar causes.

Anyone who has ever supervised an over-confident junior associate will recognize these patterns of response. Ask vague questions and reward polished answers, and you will get polished answers whether they are correct or not.

The problem is not that AI hallucinates. The problem is that lawyers forget to interrogate the hallucination before adopting it.

Never rely on an AI that has not been cross examined.

Frustration is mounting among both lawyers and judges. Charlotin’s global hallucination database reads like a catalogue of avoidable errors. Lawyers cite nonexistent cases, rely on invented quotations, or submit timelines that collapse the moment a judge asks a basic question. Courts have stopped treating these problems as innocent misunderstandings about new technology. Increasingly, they see them as failures of competence and diligence.

The encouraging news is that hallucinations collapse under even moderate questioning. AI improvises confidently in silence. It becomes accurate under pressure.

That pressure is supplied by cross-examination.

A female business professional discussing strategies with a humanoid robot in a modern office setting, displaying the text 'PREPARE INTERROGATE VERIFY' on a screen in the background.
Team approach to AI prep works well, including other AIs.

IV. Five Cross-Examination Techniques for AI

The techniques below are adapted from how lawyers question both their own experts and adverse ones. They require no technical training. They rely entirely on skills lawyers already use: asking clear questions, demanding reasoning, exposing assumptions, and verifying claims.

The five techniques are:

  1. Ask for the basis of the opinion.
  2. Probe uncertainty and limits.
  3. Present the opposing argument.
  4. Test internal consistency.
  5. Build a verification pathway.

Each can be implemented through simple, repeatable prompts.
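For readers who like to operationalize a checklist, the five techniques above can be sketched as a simple loop. This is only an illustration, not a product: `ask_model` is a hypothetical placeholder for whatever AI chat interface your office actually uses, and the prompt wording is adapted from the examples in this article.

```python
# Five follow-up probes, one per cross-examination technique.
CROSS_EXAM_PROMPTS = [
    "Walk me through your reasoning step by step and cite an authority for each step.",  # 1. basis
    "What do you not know that might affect this conclusion?",                           # 2. uncertainty
    "Give me the strongest argument against your conclusion.",                           # 3. opposition
    "Restate your answer using a different structure and flag any inconsistencies.",     # 4. consistency
    "List every source you relied on so a human can verify each one.",                   # 5. verification
]

def cross_examine(question, ask_model):
    """Ask the original question, then run all five probes.

    `ask_model` is any callable that takes a prompt string and returns
    the model's reply. Returns the full (prompt, reply) transcript for
    human review -- the lawyer, not the loop, makes the final call.
    """
    transcript = [(question, ask_model(question))]
    for probe in CROSS_EXAM_PROMPTS:
        transcript.append((probe, ask_model(probe)))
    return transcript
```

In practice you would pass in your real chat function and read the transcript skeptically; the point of the sketch is simply that all five probes run every time, so no technique is skipped when deadlines loom.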

A woman in a business suit stands confidently in a courtroom-like setting, pointing with one finger while holding a tablet. Next to her is a humanoid robot. A large sign in the background displays the words 'BASIS', 'UNCERTAINTY', 'OPPOSING', 'CONSISTENCY', and 'VERIFY'. Sky-high view of city buildings is visible through the window.
Click to see YouTube video of this associate’s presentation to partners of the AI cross-exam.

1. Ask for the Basis of the Opinion

AI developers use the word “mechanism.” Lawyers use reasoning, methodology, procedure, or logic. Whatever the label, you need to know how the model reached its conclusion.

Instead of asking, “What’s the law on negligent misrepresentation in Florida?” ask:

“Walk me through your reasoning step by step. List the elements, the leading cases, and the authorities you are relying on. For each step, explain why the case applies.”

This produces a reasoning ladder rather than a polished paragraph. You can inspect the rungs and see where the structure holds or collapses.

Ask AI explicitly to:

  • identify each reasoning step
  • list assumptions about facts or law
  • cite authorities for each step
  • rate confidence in each part of the analysis

If the reasoning chain buckles, the hallucination reveals itself.

A lawyer in a suit examining a transparent, futuristic humanoid robot's head with a flashlight in a library setting.
Click here for short YouTube video animation about reasoning cross.

2. Probe Uncertainty and Limits

AI tries to be helpful and agreeable. It will give you certainty, even when that certainty is fake. The original AI training data from the Internet never said, “I don’t know the answer.” So now you have to train your AI, in prompts and project instructions, to admit when it does not know. You must demand honesty. You must demand truth over agreement with your own thoughts and desires. Repeatedly instruct your AI to admit when it does not know the answer, or is uncertain. Get it to explain what it does not know and what it cannot support with citations. Get it to reveal the unknowns.

A friendly robot with a smile sitting at a desk with a computer keyboard, in front of two screens displaying error messages '404 ANSWER NOT FOUND' and 'ANSWERS NOT FOUND.' The robot appears to be ready to improvise.
Most AIs do not like to admit they don’t know. Do you?

Ask your AI:

  • “What do you not know that might affect this conclusion?”
  • “What facts would change your analysis?”
  • “Which part of your reasoning is weakest?”
  • “Which assumptions are unstated or speculative?”

Good human experts do this instinctively. They mark the edges of their expertise. AI will also do it, but only when asked.

A man in a suit stands in a courtroom, holding a tablet and speaking confidently, with a holographic display of connected data points in the background.
Click here for YouTube animation of AI cross of its unknowns.

3. Present the Opposing Argument

If you only ask, “Why am I right?” AI will gladly tell you why you are right. Sycophancy is one of its worst habits.

Counteract that by assigning it the opposing role:

  • “Give me the strongest argument against your conclusion.”
  • “How would opposing counsel attack this reasoning?”
  • “What weaknesses in my theory would they highlight?”

This is the same preparation you would do with a human expert before deposition: expose vulnerabilities privately so they do not explode publicly.

A lawyer in a formal suit stands in a courtroom, examining a holographic chessboard with blue and orange outlines representing opposing arguments.
Quality control by counter-arguments. Click here for short YouTube animation.

4. Test Internal Consistency

Hallucinations are brittle. Real reasoning is sturdy.

You expose the difference by asking the model to repeat or restructure its own analysis.

  • “Restate your answer using a different structure.”
  • “Summarize your prior answer in three bullet points and identify inconsistencies.”
  • “Explain your earlier analysis focusing only on law; now do the same focusing only on facts.”

If the second answer contradicts the first, you know the foundation is weak.

This is impeachment in the office, not in the courtroom.

A digitally created robot face divided in half, with one side featuring cool metallic tones and glowing blue elements, and the other side displaying warmer hues with a glowing red effect.
Click here for YouTube animation on contradictions.

5. Build a Verification Pathway

Hallucinations survive only when no one checks the sources.

Verification destroys them.

Always:

  • read every case AI cites and make sure the cited court actually issued the opinion (of course, also check case history to verify it is still good law)
  • confirm that the quotations appear in the opinion (sometimes small errors creep in)
  • check jurisdiction, posture, and relevance (normal lawyer or paralegal analysis)
  • verify every critical factual claim and legal conclusion

This is not “extra work” created by AI. It is the same work lawyers owe courts and clients. The difference is simply that AI can produce polished nonsense faster than a junior associate. Overall, after you learn the AI testing skills, the time and money saved will be significant. This associate practically works for free with no breaks for sleep, much less food or coffee.

Your job is to slow it down. Turn it off while you check its work.

An older man in a suit sits at a table, writing notes on a document, while a humanoid robot with blue eyes sits beside him in a professional setting.
Always carefully check the work of your AIs.

V. How Cross-Examination Dramatically Reduces Hallucinations

Cross-examination is not merely a metaphor here. It is the mechanism — in the lawyer’s meaning of the word — that exposes fabrication and reveals truth.

Consider three realistic hypotheticals.

1. E-Discovery Misfire

AI says a custodian likely has “no relevant emails” based on role assumptions.

You ask: “List the assumptions you relied on.”

It admits it is basing its view on a generic corporate structure.

You know this company uses engineers in customer-facing negotiations.

Hallucination avoided.

2. Employment Retaliation Timeline

AI produces a clean timeline that looks authoritative.

You ask: “Which dates are certain and which were inferred?”

AI discloses that it guessed the order of two meetings because the record was ambiguous.

You go back to the documents.

Hallucination avoided.

3. Contract Interpretation

AI asserts that Paragraph 14 controls termination rights.

You ask: “Show me the exact language you relied on and identify any amendments that affect it.”

It re-reads the contract and reverses itself.

Hallucination avoided.

The common thread: pressure reveals quality.

Without pressure, hallucinations pass for analysis.

A businessman in a suit points at a digital display showing a timeline with events and an inconsistency highlighted in red, seated next to a humanoid robot on a table with a laptop.
Work closely with your AI to improve and verify its output.

VI. Why Litigators Have a Natural Advantage — And How Everyone Else Can Learn

Litigators instinctively challenge statements. They distrust unearned confidence. They ask what assumptions lie beneath a conclusion. They know how experts wilt when they cannot defend their methodology.

But adversarial reasoning is not limited to courtrooms. Transactional lawyers use it in negotiations. In-house lawyers use it in risk assessments. Judges use it in weighing credibility. Paralegals and case managers use it in preparing witnesses and assembling factual narratives.

Anyone in the legal profession can practice:

  • asking short, precise questions
  • demanding reasoning, not just conclusions
  • exploring alternative explanations
  • surfacing uncertainty
  • checking for consistency

Cross-examining AI is not a trial skill. It is a thinking skill — one shared across the profession.

A business meeting in an office featuring a woman in a suit presenting to a robot resembling Iron Man, while a man in a suit sits at a laptop, with a display showing academic citations and data in the background.
Thinking like a lawyer is a prerequisite for AI training; be skeptical and objective.

VII. The Lawyer’s Advantage Over AI

AI is inexpensive, fast, tireless, and deeply cross-disciplinary. It can outline arguments, summarize thousands of pages, and identify patterns across cases at a speed humans cannot match. It never complains about deadlines and never asks for a retainer.

Human experts outperform AI when judgment, nuance, emotional intelligence, or domain mastery are decisive. But those experts are not available for every issue in every matter.

AI provides breadth. Lawyers provide judgment.

AI provides speed. Lawyers provide skepticism.

AI provides possibilities. Lawyers decide what is real.

Properly interrogated, AI becomes a force multiplier for the profession.

Uninterrogated, it becomes a liability.

A professional meeting room with a diverse group of lawyers and a robot figure. The human leader gestures confidently while presenting. A screen behind them displays phrases like 'Challenge assumptions,' 'Expose weak logic,' and 'Ask better questions.'
Good lawyers challenge and refine their AI output.

VIII. Courts Expect Verification — And They Are Right

Judges are not asking lawyers to become engineers or to audit model weights. They are asking lawyers to verify their work.

In hallucination sanction cases, courts ask basic questions:

  • Did you read the cases before citing them?
  • Did you confirm that the case exists in any reporter?
  • Did you verify the quotations?
  • Did you investigate after concerns were raised?

When the answer is no, blame falls on the lawyer, not on the software.

Verification is the heart of legal practice.

It just takes a few minutes to spot and correct the hallucinated cases. The AI needs your help.

IX. Practical Protocol: How to Cross-Examine Your AI Before You Rely on It

A reliable process helps prevent mistakes. Here is a simple, repeatable, three-phase protocol.

Phase 1: Prepare

  1. Clarify the task.

Ask narrow, jurisdiction-specific, time-anchored questions.

  2. Provide context.

Give procedural posture, factual background, and applicable law.

  3. Request reasoning and sources up front.

Tell AI you will be reviewing the foundation.

Phase 2: Interrogate

  1. Ask for step-by-step reasoning.
  2. Probe what the model does not know.
  3. Have it argue the opposite side.
  4. Ask for the analysis again, in a different structure.

This phase mimics preparing your own expert — in private.

Phase 3: Verify

  1. Check every case in a trusted database.
  2. Confirm factual claims against your own record.
  3. Decide consciously which parts to adopt, revise, or discard.

Do all this, and if a judge or client later asks, “What did you do to verify this?” you will have a real answer.

Business meeting involving a lawyer presenting to a man and a humanoid robot, with a digital presentation on a screen that includes flowchart-style prompts.
It takes some training and experience, but keeping your AI under control is really not that hard.

X. The Positive Side: AI Becomes Powerful After Cross-Examination

Once you adopt this posture, AI becomes far less dangerous and far more valuable.

When you know you can expose hallucinations with a few well-crafted questions, you stop fearing the tool. You start seeing it as an idea generator, a drafting assistant, a logic checker, and even a sparring partner. It shows you the shape of opposing arguments. It reveals where your theory is vulnerable. It highlights ambiguities you had overlooked.

Cross-examination does not weaken AI.

It strengthens the partnership between human lawyer and machine.

A lawyer and a humanoid robot stand together in a courtroom, representing a blend of human expertise and artificial intelligence in legal practice.
Click here for video animation on YouTube.

XI. Conclusion: The Return of the Lawyer

Cross-examining your AI is not a theatrical performance. It is the methodical preparation that seasoned litigators use whenever they evaluate expert opinions. When you ask AI for its basis, test alternative explanations, probe uncertainty, check consistency, and verify its claims, you transform raw guesses into analysis that can withstand scrutiny.

Two professionals interacting with a futuristic robot in an office setting, analyzing a digital display that highlights the concept of 'Inference Gap Needs Judgment' amidst various data points and inferences.
Complex assignments always take more time but the improved quality AI can bring is well worth it.

Courts are no longer forgiving lawyers who fall for a sycophantic AI and skip this step. But they respect lawyers who demonstrate skeptical, adversarial reasoning — the kind that prevents hallucinations, avoids sanctions, and earns judicial confidence. More importantly, this discipline unlocks AI’s real advantages: speed, breadth, creativity, and cross-disciplinary insight.

The cure for hallucinations is not technical.

It is skeptical, adversarial reasoning.

Cross-examine first. Rely second.

That is how AI becomes a trustworthy partner in modern practice.

See the animation of our goodbye summary on the YouTube video. Click here.

Ralph Losey Copyright 2025 — All Rights Reserved


Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago

October 24, 2025

Meanwhile, Even Bigger Breakthroughs by Google Continue

By Ralph Losey, October 21, 2025.

The Nobel Prize in Physics was just awarded to quantum physics pioneers John Clarke, Michel H. Devoret, and John M. Martinis for discoveries they made at UC Berkeley in the 1980s. They proved that quantum tunneling, where subatomic particles can break through seemingly impenetrable barriers, can also occur in the macroscopic world of electrical circuits. So yes, Schrödinger’s cat really could die.

A digital illustration featuring three scientists with varying facial expressions, posed in a futuristic setting, symbolizing breakthroughs in quantum computing. In the foreground, there is an artistic depiction of a cat with a skull overlay, creating a surreal contrast.
Quantum Physics Pioneers take home the Nobel Prize: John Clarke, Michel H. Devoret, and John M. Martinis. All images in this article are by Ralph Losey using AI image generation tools.

Their experiments showed that entire circuits can behave as single quantum objects, bridging the gap between theory and engineering. That breakthrough insight paved the way for construction of quantum computers, including the latest by Google.

Both Devoret and Martinis were recruited years ago by Google to help design its quantum processors. Although John Martinis (right, in the image above) recently departed to start his own company, Qolab, Michel Devoret (center) remains at Google Quantum AI as the Chief Scientist of Quantum Hardware. Last year, two other Google scientists, John Jumper and Demis Hassabis, shared a Nobel prize in chemistry for their groundbreaking work in AI.

Google is clearly on a roll here. As Google CEO Sundar Pichai joked in his congratulatory post on LinkedIn: “Hope Demis Hassabis and John Jumper are teaching you the secret handshake.”

A human hand shakes a holographic robotic hand in front of a quantum computer, with a Google logo in the background.
The secret handshake to Google’s Nobel Prizes is the combination of AI and Quantum.

🔹 Willow Breaks Through Its Own Barriers

Less than a year ago, Google’s new quantum chip, Willow, tunneled through its own barriers, performing in five minutes a calculation that would have taken ten septillion years (10²⁵) on the fastest classical supercomputers. That’s far longer than anyone’s estimate for the age of our universe—a good definition of mind-boggling.

This result led Hartmut Neven, director of Google’s Quantum Artificial Intelligence Lab, to suggest it offers strong evidence for the many-worlds or multiverse interpretation of quantum mechanics—the idea that computation may occur across near-infinite parallel universes. Neven and a number of leading researchers subscribe to this view.

I explored that seemingly crazy hypothesis in Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan 9, 2025). Oddly enough, it became my most-read article of all time—thank you, readers.

Today’s piece updates that story. The Nobel Prize recognition is icing on the cake, but progress has not slowed. Quantum computers—and the law—remain one of the most exciting frontiers in legal-tech. So much so that I’m developing a short online course on quantum computing and law, with more courses on prompt engineering for legal professionals coming soon. Subscribe to e-DiscoveryTeam.com to be notified when they launch.

The work of this year’s Nobel laureates—Clarke, Devoret, and Martinis—was done forty years ago, so delay in recognition is hardly unusual in this field. Perhaps someday Neven and other many-worlds interpreters of quantum physics will receive their own Nobel Prize for demonstrating multiverse-scale applications. In my view, far more evidence than speed alone will be required.

After all, it defies common sense to imagine, as the multiverse hypothesis suggests, that every quantum event splits reality, spawning a near-infinite array of universes. For example, one where Schrödinger’s cat is alive and another slightly different universe where it is dead. It makes Einstein’s “spooky action at a distance” seem tame by comparison.

An illustrated depiction of Schrödinger's cat concept, featuring a cartoon cat and a skeleton inside a wooden box, symbolizing the quantum mechanics thought experiment.
Spooky questions: Why are ‘you’ conscious in this particular universe? Are you dead in another?

In the meantime—whatever the true mechanism—quantum computers and AI are already producing tangible social and legal consequences in cryptography, cybercrime, and evidentiary law. See, The Quantum Age and Its Impacts on the Civil Justice System (Rand, April 29, 2025); Quantum-Readiness: Migration to Post-Quantum Cryptography (NIST, NSA, August 2023); Quantum Computing Explained (NIST 8/22/2025); but see, Keith Martin, Is a quantum-cryptography apocalypse imminent? (The Conversation, 6/2/25) (“Expert opinion is highly divided on when we can expect serious quantum computing to emerge,” with estimates ranging from imminent to 20 years or more.)

Whether you believe in the multiverse or not, the practical implications for law and technology are already arriving.

Abstract illustration representing the multiverse theory with multiple cosmic spheres and the text 'MULTIVERSE THEORY' and 'INFINITE PARALLEL UNIVERSES'.
Might this theory someday seem like common sense? Or will most Universes discard it as another ‘spooky’ idea of experimental scientists?

🔹 Atlantic Quantum Joins Google Quantum AI

On October 2, 2025, Hartmut Neven, Founder and Lead, Google Quantum AI, announced in a short post titled “We’re scaling quantum computing even faster with Atlantic Quantum” that Google had just acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware. The announcement, written in Neven’s signature understated style, framed the deal as a practical step on Google’s long road toward “a large error-corrected quantum computer and real-world applications.”

Neven explained that Atlantic Quantum’s modular chip stack, which integrates qubits and superconducting control electronics within the cryogenic stage, will allow Google to “more effectively scale our superconducting qubit hardware.” That phrase may sound routine to non-engineers, but it represents a significant leap in design philosophy: merging computation and control at the cold stage reduces signal loss, simplifies architecture, and makes modular scaling—the key to fault-tolerant machines—realistically achievable. This is another great acquisition by Google.

Independent reporting quickly confirmed the deal’s importance. In Atlantic Quantum Joins Google Quantum AI, The Quantum Insider’s Matt Swayne summarized the deal succinctly:

• Google Quantum AI has acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware, to accelerate progress toward error-corrected quantum computers. . . .
• The deal underscores a broader industry trend of major technology companies absorbing research-intensive startups to advance quantum computing, a field still years from large-scale commercial deployment.

The article noted that the integration of Atlantic Quantum’s modular chip-stack technology into Google’s program was aimed at one of quantum computing’s toughest engineering hurdles: scaling systems to become practical and fault-tolerant.

The MIT-born startup, founded in 2021 by a group of physicists determined to push superconducting design beyond incremental improvements, focused on embedding control electronics directly within the quantum processor. That approach reduces noise, simplifies wiring, and makes modular expansion far more realistic. For another take on the Atlantic story, see Atlantic Quantum and Google Quantum AI are “Joining Up” (Quantum Computing Report, 10/02/25).

These articles place the transaction within a broader wave of global investment in quantum technologies. Large-scale commercial deployment may still be years away but the industry has already entered a phase of consolidation. Research-heavy startups are increasingly being absorbed by major technology companies, a predictable evolution in a field defined by extraordinary capital demands and complex technical challenges.

For Google, the acquisition is less about headlines and more about infrastructure control, owning every layer of the superconducting stack from design to fabrication. For the industry, it signals that the next phase of quantum development will likely follow the same arc as classical computing: early-stage innovation absorbed by large, well-capitalized firms that can bear the cost of scaling.

For lawyers and regulators, that pattern has familiar consequences: intellectual-property concentration, antitrust scrutiny, export-control compliance, and the evidentiary standards that will eventually govern how outputs from such corporate-owned quantum systems are regulated and presented in court.

An illustration depicting the concept of innovation in the technology industry, contrasting 'Early-Stage Innovation' represented by small fish and a light bulb, with 'Large, Well-Capitalized Firms' represented by a shark featuring the Google logo. The background includes circuit patterns, symbolizing the tech ecosystem.
Familiar pattern and legal issues continue in our Universe.

🔹 Willow and the Many-Worlds Question

Before the Nobel bell rang in Stockholm, Google’s Quantum AI group had already changed the conversation with its Willow processor.

In my earlier piece on Willow’s mind-bending computations, I quoted Hartmut Neven’s ‘parallel universes’ framing to describe its behavior. Some heard music; others heard marketing. Others, like me, saw trouble ahead.

The Nobel Prize did not validate the many-worlds interpretation of quantum mechanics, nor did it disprove it. Neven has not backed away from the theory, nor have others, and Neven has just gotten the best talent from MIT to join his group. What the Nobel Prize did confirm—beyond any reasonable doubt—is that macroscopic superconducting circuits, at a size you can see, can exhibit genuine quantum behavior under controlled laboratory conditions. That is the solid foundation a judge or regulator can stand on: devices now exist in our world that generate outputs with quantum fingerprints reproducible enough to test and verify.

Meanwhile, the frontier continues to move. In September 2025, researchers at UNSW Sydney demonstrated entanglement between two atomic nuclei separated by roughly twenty nanometers. See “New entanglement breakthrough links cores of atoms, brings quantum computers closer” (The Conversation, Sept. 2025). Twenty nanometers is not big, but it is large enough to measure.

Moreover, even though the electrical circuits themselves are large enough to photograph, the quantum energy was not. That could only be measured indirectly. The researchers used coupled electrons as what lead scientist Professor Andrea Morello called “telephones” to pass quantum correlations and make those measurements.

An artistic representation of quantum entanglement, featuring glowing atomic particles connected by luminous paths, illustrating the complex interactions in quantum mechanics.
Electrons acting like telephones passing quantum correlations on measurable scales.

The telephone metaphor is apt. It captures the engineering ambition behind the result—connecting quantum rooms with wires, not whispers. Whispers don’t echo. Entanglement is not a philosophical idea; it is a measurable resource that can be distributed, controlled, and eventually commercialized. It can even call home.

For the legal system, this is where things become concrete. When entanglement leaves the lab and enters communications or sensing devices, courts will be asked to evaluate evidence that can be measured and described but cannot be seen directly. The question will no longer be “Is this real?” but “How do we authenticate what can be measured but not observed?”

That’s the moment when the physics of quantum control becomes the jurisprudence of evidence—and it’s coming faster than most practitioners realize.

A surreal painting depicting several figures whispering to each other in an arched, dimly lit setting, with wave-like patterns of light radiating from a central source.
Whispers Don’t Echo.

🔹 Defining the Echo: When Evidence Repeats With a Slight Accent

The many-worlds interpretation of quantum mechanics has always sat on the thin line between physics and philosophy. First proposed in 1957 by Hugh Everett, it replaces the familiar ‘collapse’ of the wave-function with a more radical notion: every quantum event splits reality into separate branches, each continuing independently. Some brilliant physicists take it seriously; others reject it; many remain agnostic. Courts need not resolve that debate. For law, the relevant question is simpler: can a party show a method that reliably connects a claimed quantum mechanism to a particular output? If yes, the court’s job is to hear the evidence. If not, the court’s job is to exclude it.

In its early decades, the idea was mostly dismissed as metaphysical excess. Then Bryce DeWitt, David Deutsch, Max Tegmark, and Sean Carroll each found ways to refine and defend it. David Deutsch, known as the Father of Quantum Computing, first argued that quantum computers might actually use this multiplicity to perform computations—each universe branch carrying part of the load. See e.g., Deutsch, The Fabric of Reality: The Science of Parallel Universes–and Its Implications (Penguin, 1997) (Chapter 9, Quantum Computers). Deutsch even speculates in his next (2011) book The Beginning of Infinity (pg. 294) that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.

The many-worlds argument, once purely theoretical, gained traction after Google’s Willow experiments. Hartmut Neven’s reference to “parallel universes” was not an assertion of proof but a shorthand for describing interference effects that defy classical intuition. It is what he believes was happening—and that opinion carries weight because he works with quantum computers every day.

When quantum behavior became experimentally measurable in superconducting circuits that were large enough to photograph, the Everett question—‘Are we branching universes or sampling probabilities?’—stopped being rhetorical. The debate moved from thought experiment to instrument design. Engineers now face what philosophers only imagined: how to measure, stabilize, and interpret outcomes that occur across many possible worlds and never converge on a single, deterministic path.

For the law, the relevance lies not in metaphysics but in method. Whether the universe splits or probabilities collapse, the data these machines produce are inherently probabilistic—repeatable only within margins, each time with a slight accent. The courtroom analog to wave-function collapse is the evidentiary demand for reproducibility. If the physics no longer promises identical outputs, the law must decide what counts as reliable sameness—echoes with an accent.

That shift from metaphysics to methodology is the lawyer’s version of a measurement problem. It’s not about believing in the multiverse. It’s about learning how to authenticate evidence that depends on it.

A vibrant abstract representation of quantum physics, featuring concentric circles and spheres radiating in a spectrum of colors, symbolizing subatomic particles and quantum behavior.
Repeatable measurements through parallel universes to explain quantum computer calculations. Crazy but true?

🔹 The Law Listens: Authenticating Echoes in Practice

If each quantum record is an echo, the law’s task is to decide which echoes can be trusted. That requires method, not metaphysics. The legal system already has the tools—authentication, replication, expert testimony—but they need recalibration for an age when precision itself is probabilistic.

1. Authentication in context.
Under Rule 901(b)(9), evidence generated by a process or system must be shown to produce accurate results. In a quantum context, that showing might include the type of qubit, its error-correction protocol, calibration logs, environmental controls, and the precise code path that produced the output. The burden of proof doesn’t change; only the evidentiary ingredients do.

2. Replication hearings.
In classical computing, replication is binary—either a hash matches, or it doesn’t. In quantum systems, replication becomes statistical. The question is no longer “Can this be bit-for-bit identical?” but “Does this fall within the accepted variance?” Probabilistic systems demand statistical fidelity, not sameness. A replication hearing becomes a comparison of distributions, not exact strings of bits.

Similar logic already guides quantum sensing and metrology, where entanglement and superposition improve precision in measuring magnetic fields, time, and gravitational effects. See Quantum sensing and metrology for fundamental physics (NSF, 2024); Review of qubit-based quantum sensing (Springer, 2025); Advances in multiparameter quantum sensing and metrology (arXiv, 2/24/25); Collective quantum enhancement in critical quantum sensing (Nature, 2/22/25). Those readings vary from one run to the next, yet the variance itself confirms the physics—each measurement is a statistically faithful echo of the same underlying reality. The variances are within a statistically acceptable range of error.

An abstract illustration showing a silhouette of a person standing next to a swirling vortex surrounded by circular shapes and geometric lines, representing concepts of quantum mechanics and the multiverse.
Each measurement is slightly different but similar enough to be a statistically faithful echo of the same underlying reality.

🔹 Two Examples from the Quantum Frontier

1. Quantum Chemistry In Practice.

One of the most mature quantum applications today is the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm used to estimate the ground-state energy of molecules. See, The Variational Quantum Eigensolver: A review of methods and best practices (Phys. Rep., 2023); Greedy gradient-free adaptive variational quantum algorithms on a noisy intermediate scale quantum computer (Nature, 5/28/25). Also see, Distributed Implementation of Variational Quantum Eigensolver to Solve QUBO Problems (arXiv, 8/27/25); How Does Variational Quantum Eigensolver Simulate Molecules? (Quantum Tech Explained, YouTube video, Sept. 2025).

VQE researchers routinely run the same circuit hundreds of times; each iteration yields slightly different energy readings because of noise, calibration drift, and quantum fluctuations. Yet the outputs consistently cluster around a stable baseline, confirming both the accuracy of the physical model and the reliability of the machine itself.

Now picture a pharmaceutical patent dispute where one party submits quantum-derived binding data for a new molecule. The opposing side demands replication. A court applying Rule 702 may not expect identical numbers—but it could require expert testimony showing that results consistently fall within a scientifically accepted margin of error. If they do, that should become a legally sufficient echo.
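The margin-of-error showing an expert might make can be simulated. This is a toy sketch, not real VQE output: the baseline energy, noise level, and margin are all invented for illustration.

```python
import random
import statistics

random.seed(42)
TRUE_ENERGY = -1.137  # hypothetical ground-state energy, illustrative only

# Simulate 200 repeated VQE runs: each reading is the baseline plus device noise
# (calibration drift and quantum fluctuations modeled as Gaussian noise).
readings = [TRUE_ENERGY + random.gauss(0, 0.003) for _ in range(200)]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
margin = 0.01  # the scientifically accepted margin, stipulated by the experts

within = abs(mean - TRUE_ENERGY) <= margin
print(f"mean={mean:.4f}, stdev={stdev:.4f}, within accepted margin: {within}")
```

No two runs match bit for bit, yet the cluster around the baseline is exactly the “legally sufficient echo” the text describes: reproducibility within an accepted variance rather than identity.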

This is reminiscent of prior e-discovery disputes concerning the use of AI to find relevant documents. It has been accepted by all courts that perfection, such as 100% recall, is never required, but reasonable efforts are required. Judge Andrew Peck, Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016). This also follows the official commentary to Rule 702, on expert testimony, where “perfection is not required.” Fed. R. Evid. 702, Advisory Committee Note to 2023 Amendment.

The reasonable efforts can be proven by numerics and testimony. See for instance my writings in the TAR Course: Fifteenth Class- Step Seven – ZEN Quality Assurance Tests (e-Discovery Team, 2015) (Zero Error Numerics); ei-Recall (e-Discovery Team, 2015); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May, 2024).

An illustration emphasizing the phrase 'Reasonable efforts required, not perfection,' featuring a checklist with a checkmark, scales of justice, and a prohibition symbol.
There is no perfect case, evidence or efforts. In reality, ‘perfect is the enemy of the good.’

2. Quantum-Secure Archives.

As quantum computing and quantum cryptography advance, most (but not all) of today’s encryption will become obsolete. This means the vast amount of encrypted data stored in corporate and governmental archives—maintained for regulatory, evidentiary, and operational purposes—may soon be an open book to attackers. Yes, you should be concerned.

Rich DuBose and Mohan Rao, Harvest now, decrypt later: Why today’s encrypted data isn’t safe forever (Hashi Corp., May 21, 2025) explain:

Most of today’s encryption relies on mathematical problems that classical computers can’t solve efficiently — like factoring large numbers, which is the foundation of the Rivest–Shamir–Adleman (RSA) algorithm, or solving discrete logarithms, which are used in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). Quantum computers, however, could solve these problems rapidly using specialized techniques such as Shor’s Algorithm, making these widely used encryption methods vulnerable in a post-quantum world.
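The factoring vulnerability DuBose and Rao describe can be seen in a textbook-sized RSA example. The primes below are deliberately tiny so the arithmetic is visible; real keys use moduli of 2048 bits or more.

```python
# Toy RSA with textbook primes -- illustrative only, never use small keys.
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent via modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)   # anyone with the public key (n, e) can encrypt
plain = pow(cipher, d, n) # only the holder of d can decrypt
print(plain == msg)       # True: decryption recovers the message

# Security rests entirely on the difficulty of factoring n back into p and q.
# Shor's algorithm on a large fault-tolerant quantum computer would factor n
# efficiently, which is why encrypted archives are 'harvest now, decrypt later'
# targets even before such a machine exists.
```

With a 3233-sized modulus anyone can factor by hand; the quantum threat is that Shor’s algorithm collapses the same task for real-world key sizes from effectively impossible to merely hard engineering.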

Also see, Dan Kent, Quantum-Safe Cryptography: The Time to Start Is Now (GovTech, 4/30/25) and Amit Katwala, The Quantum Apocalypse Is Coming. Be Very Afraid (Wired, Mar. 24, 2025), warning that cybersecurity analysts already call this future inflection point Q-Day—the day a quantum computer can crack the most widely used encryption. As Katwala writes:

On Q-Day, everything could become vulnerable, for everyone: emails, text messages, anonymous posts, location histories, bitcoin wallets, police reports, hospital records, power stations, the entire global financial system.

Most responsible organizations with large archives of sensitive data have been preparing for Q-Day for years. So too have those on the other side—nation-states, intelligence services, and organized criminal groups—who are already harvesting encrypted troves today to decrypt later. See, Roger Grimes, Cryptography Apocalypse: Preparing for the Day When Quantum Computing Breaks Today’s Crypto (Wiley, 2019). The race for quantum supremacy is on.

Now imagine a company that migrates its document-management system to post-quantum cryptography in 2026. A year later, a breach investigation surfaces files whose verification depends on hybrid key-exchange algorithms and certificate chains. The plaintiff calls them anomalies; the defense calls them echoes. The court won’t choose sides by theory—it will follow the evidence, the logs, and the math.

An artistic representation of an hourglass with celestial spheres and swirling galaxies, symbolizing the concept of time and the multiverse in quantum physics.
The metrics are what should matter, not the many theories

🔹 Building the Quantum Record

Judicial findings and transparency. Courts can adapt existing frameworks rather than invent new ones. A short findings order could document:
(a) authentication steps taken;
(b) observed variance;
(c) expert consensus on reliability; and
(d) scope limits of admissibility.
Such transparency builds a common-law record—the first body of quantum-forensic precedent. I predict it will be coming soon to a universe near you!

Chain of custody for the probabilistic age. Future evidence protocols may pair traditional logs with variance ranges, confidence intervals, and error budgets. Discovery rules could require disclosure of device calibration history, firmware versions, and known noise parameters. The data once confined to labs will become essential for authentication.
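As a rough illustration of what such a protocol might look like, the sketch below pairs a tamper-evident, hash-chained custody log with the device metadata the paragraph describes. The field names, device identifier, and noise values are all hypothetical, invented for the example:

```python
import hashlib
import json
import time

def add_entry(chain, payload):
    """Append a tamper-evident log entry. Each record includes the
    hash of the previous record, so any later edit to an earlier
    entry breaks every hash that follows it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"prev": prev, "payload": payload, "ts": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash and link; True only if nothing changed."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
add_entry(log, {"event": "capture",
                "device": "QPU-7",            # hypothetical device id
                "firmware": "2.4.1",
                "calibration_date": "2026-03-01",
                "readout_error": 0.013,        # known noise parameter
                "variance_range": [0.41, 0.47]})
add_entry(log, {"event": "transfer", "custodian": "forensics-lab"})
print(verify(log))
```

The point is not this particular format but the principle: calibration history, firmware versions, and error budgets travel with the evidence, and the log itself proves it has not been quietly revised.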

The law doesn’t need new virtues for quantum evidence; it needs old ones refined. Transparency, documentation, and replication remain the gold standard. What changes is the expectation of sameness. The goal is no longer perfect duplication, but faithful resonance: the trusted echo that still carries truth through uncertainty.

An artistic depiction of a swirling vortex, featuring an hourglass shape with vibrant colors, symbolizing the concept of multiverses and quantum physics. Small planets are depicted within the flow, representing various realities branching out from a central point of light.
Metrics carry the truth through uncertainty.

🔹 Conclusion: The Sound of Evidence

The Nobel Committee rang the bell. Google’s engineers are adding instruments. Labs in Sydney and elsewhere wired new rooms together. The rest of us—lawyers, paralegals, judges, legal-techs, investigators—must learn how to listen for echoes without hearing ghosts. That means resisting hype, insisting on method, and updating our checklists to match what the devices actually do.

Eight months ago in Quantum Leap, I described a canyon where a single strike of an impossible calculation set the walls humming. This time, the sound came from Stockholm. If the next echo is from quantum evidence in your courtroom—perhaps as a motion in limine over non-identical logs—don’t panic. Listen for the rhythm beneath the noise. The law’s task is to hear the pattern, not silence the world.

Science, like law, advances by listening closely to what reality whispers back. The Nobel Committee just honored three physicists for demonstrating that quantum behavior can be engineered, measured, and replicated—its fingerprints recorded even when the phenomenon itself remains invisible. Their achievement marks a shift from theory to tested evidence, a shift the courts will soon confront as well.

When engineers speak of quantum advantage, they mean a moment when machines perform tasks that classical systems cannot. The legal system will have its own version: a time when quantum-derived outputs begin to appear in contracts, forensic analysis, and evidentiary records. The challenge will not be cosmic. It will be procedural. How do you test, authenticate, and trust results that vary within the bounds of physics itself?

The answer, as always, lies in method. Law does not require perfection; it requires transparency and proof of process. When the next Daubert hearing concerns a quantum model rather than a mass spectrometer, the same questions will apply: Was the procedure sound? Were the results reproducible within accepted error? Were the foundations laid? The physics may evolve, but the evidentiary logic remains timeless.

In the end, what matters is not whether the universe splits or probabilities collapse. What matters is whether we can recognize an honest echo when we hear one—and admit it into evidence.

An artistic representation of a cosmic hourglass surrounded by swirling galaxies and planets, symbolizing time, the universe, and the concept of the multiverse.
It is only a matter of time before quantum generated evidence seeks admission to your world.

🔹 Postscript.

Minutes before this article was published Google announced an important new discovery called “Quantum ECHO.” Yes, the same name as this article, which Ralph Losey wrote with no advance notice from Google of the discovery or its name. A spooky entanglement, perhaps? Ralph will publish a sequel soon that spells out what Google has done now. In the meantime, here is Google’s announcement by Hartmut Neven and Vadim Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google, 10/22/25).

🔹 Subscribe and Learn More

If this exploration of Quantum Echoes and evidentiary method has sparked your curiosity, you can find much more at e-DiscoveryTeam.com — where I continue to write about artificial intelligence, quantum computing, evidence, e-discovery, and the future of law. Go there to subscribe and receive email notices of new blogs, upcoming courses, and special events — including an online course, with the working title ‘Quantum Law: From Entanglement to Evidence,’ that will expand on the ideas introduced here. It will discuss how quantum physics and AI converge in the practice of law, from authentication and reliability to discovery and expert testimony.

That program will be followed by two other, longer online courses that are also near completion:

  • Beginner “GPT-4 Level” Prompt Engineering for Legal Professionals, a practical foundation in AI literacy and applied reasoning.
  • Advanced “GPT-5 Level” Prompt Engineering for Legal Professionals, an in-depth study of prompt design, model evaluation, and AI ethics.

All courses are part of my continuing effort to help the legal profession adapt responsibly to the next wave of technology — with integrity, experience and whatever wisdom I may have accidentally gathered from a long life on Earth.

A contemplative figure stands in a futuristic hallway lined with framed portals, each leading to different cosmic landscapes, while a bright light emanates from above.
Ralph looking back on the many worlds of technology he has been in. What a long, strange trip it’s been.

Subscribe at e-DiscoveryTeam.com for notices of new articles, course announcements, and research updates.

Because the future of law won’t be written by those who fear new tools, but by those who understand the evidence they produce.


Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.

© 2025 Ralph C. Losey. All rights reserved.