Acceleration without control is dangerous. Acceleration with judgment is transformative.
I. Something big is happening. On that much Matt Shumer and I agree.
The essay Something Big Is Happening was published on Matt Shumer’s personal blog on February 9, 2026. After he shared it widely on X, it drew more than 80 million views within days, rapidly becoming a focal point in public debates about AI and the future of work. Few essays about artificial intelligence have traveled that far, that fast.
Shumer’s central claim is straightforward: AI capability is accelerating so rapidly that large-scale displacement of white-collar work is imminent, perhaps within one to five years. He argues that recursive improvement loops are already underway, that benchmark curves are steepening, and that most people are underestimating what is about to happen.
It is a powerful narrative. It is also incomplete, and that matters more than its popularity suggests. So take a breath.
Before I explain why, a brief word of context. I have practiced law for over 45 years and have worked hands-on with AI in litigation for more than 14. I was involved in the first case approving predictive coding for e-discovery in federal court. Since 2023, I have written extensively about generative AI, hybrid human-machine workflows, and the emerging governance challenges of AI and quantum convergence. I am not skeptical of AI — I use it daily, teach it, and advocate its responsible adoption.
Acceleration is real. But acceleration demands adults – a calm, measured approach. That is why I take Shumer seriously, even as I disagree with his conclusions.
II. What Shumer Gets Right (and What He Exaggerates)
Let us begin where we agree. AI models have improved rapidly. Coding autonomy has advanced in ways that would have seemed implausible just a few years ago. AI systems now assist meaningfully in debugging, evaluation, and even aspects of their own development pipelines. Benchmarks measuring the duration of tasks that models can complete without human intervention have indeed increased.
There is rapid acceleration, but it is not a smooth, universal climb. It is jagged.
A. The Bar Exam Myth: Top 10% or Bottom 15%?
Shumer states: “By 2023, [AI] could pass the bar exam.” This has become a foundational myth of the AI-acceleration narrative. A rigorous study by Eric Martinez, however, put the vendor claim behind it to the test. Re-Evaluating GPT-4’s Bar Exam Performance, Artif. Intell. Law (2024) (presenting four sets of findings that indicate OpenAI’s estimates of GPT-4’s Uniform Bar Exam percentile are overinflated). Martinez found that when you limit the comparison group to those who actually passed the bar (qualified attorneys), the model’s percentile drops off a cliff. On the essay and performance test portions (MEE + MPT), GPT-4 scored in the ~15th percentile. In other words, bottom 15% among those who passed.
B. AI Hallucinations Are Not Ancient History
Shumer claims that the “this makes stuff up” phase of AI is “ancient history” and that current models are unrecognizable from six months ago. My daily use and objective tests tell a different story. Yes, it is getting better, but we are not there yet, especially for most legal users.
Hallucination remains the number one concern for the Bench and Bar. See Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations (December 2025). Generative AI still has a persistent tendency to fabricate facts and law, leading to serious court sanctions. See e.g., Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). Also see French legal scholar Damien Charlotin’s catalogue of almost one thousand similar decisions worldwide in his AI Hallucination Cases.
Shumer’s claims that modern AIs no longer hallucinate and outperform most attorneys reflect optimism more than sustained exposure to legal work. After researching tens of thousands of legal issues over the course of my career, I can tell you that verification is not optional — it is the job.
C. The Jagged Frontier
The new Stanford–Carnegie study (November 2025) confirmed what Harvard researchers call the “Jagged Technological Frontier.” This research found that AI excels at specific programmable tasks but fails at messy, human-centric reality. In fact, the Stanford–Carnegie study showed that fully autonomous AI agents were significantly less reliable than hybrid human-AI teams, which outperformed solo agents by 68.7%.
D. “Team of Human Associates” or “Untested Sycophantic AI Experts”?
Shumer recounts a managing partner in a law firm who feels AI is like “having a team of associates available instantly.” I agree that every professional should be integrating AI into their daily workflow. But they must do so skeptically. Plus, it is nowhere near the same as having trained human associates. AIs are cheaper, sure, until they screw up and you are the one left cleaning it up.
In my 45 years of legal practice I have had the privilege of working with many excellent associates. They significantly exceed today’s AIs in many respects, so I must respectfully disagree with the anonymous partner Shumer quotes. There are many things that AI will never be able to do that all good professionals now do without thinking. The Human Edge: How AI Can Assist But Never Replace. I prefer humans with AIs – the hybrid approach – over AIs alone, even though, unlike humans, AI associates are always pleasant and tend to agree with everything you say. Lessons for Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day?’ (Jan. 2026).
My testing of AI since 2023 has focused on the legal reasoning ability of AI, as opposed to general reasoning. For a full explanation of the difference, see Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. I have also spent hundreds of hours in hands-on independent testing of AI legal reasoning abilities. See e.g., Bar Battle of the Bots, parts one, two, three and four. These articles reported multiple tests of OpenAI and Google models in 2025, including tests using actual Bar exam questions, which the models again failed. I have not seen substantial improvements in AI since then.
It is, in my experience, a poor trade to use an AI alone instead of an associate-AI team, and without extensive supervision, it is an invitation to sanctions and malpractice.
III. Benchmark Curves Are Not Civilization
Shumer relies heavily on task-duration benchmarks and exponential trend lines. The implication is clear: if models can complete longer and longer tasks autonomously, then large-scale displacement is imminent.
The problem is that benchmark extrapolation is not societal destiny. In law, evidence does not decide the case. People do.
Most current autonomy benchmarks are domain-constrained. They focus heavily on software engineering and other structured digital tasks. Coding is not law. It is not medicine. It is not fiduciary duty. It is not governance.
Even when capability expands inside a benchmark, that does not mean institutions will move at the same speed. Courts, regulators, insurers, boards of directors, and compliance departments slow, shape, and channel technology. That is not inertia; it is risk management.
And assistance in model development is not the same as autonomous recursive self-governance. Humans remain deeply embedded in training, validation, and deployment. “AI helping build AI” makes for a compelling headline. It does not mean an intelligence explosion has detached from human control. AI extends cognition, but it does not replace stewardship.
That is the part Shumer’s curve does not capture: acceleration of capability is real but this increases the need for adult supervision. It does not eliminate the human role. It intensifies it. Just as it always has.
IV. Why Fear Travels Faster Than Wisdom
The viral success of Shumer’s essay is not accidental. It was designed to activate powerful psychological mechanisms.
It invokes the COVID analogy, reminding readers how quickly life changed in early 2020. It frames the reader personally: “you’re next.” It emphasizes exponential growth, which humans are notoriously poor at intuitively processing. It adopts insider authority: “I live in this world; I see what you don’t.”
Fear spreads faster than nuance because we evolved that way. A possible threat demands immediate attention. Social media algorithms amplify high-emotion content. Urgency increases engagement velocity. None of this necessarily makes Shumer insincere but it does explain why his article went viral. Acceleration narratives travel at super-fast computer speeds. Wisdom still travels at human speed.
V. Incentives Shape Narratives
It is also important to understand context. Shumer is a very young builder who lives in the code. His perspective is shaped by the possibility of the technology. My perspective, and the perspective of governance, is shaped by the consequences of the technology. Startup culture rewards speed; legal culture rewards survivability. These are different risk environments.
Recognizing that difference is not an attack. It is transparency. His incentives don’t invalidate his argument, but they do shape his narrative.
VI. A Structural Irony
Here is another irony worth reflecting on. We are now in an era where AI systems assist in drafting almost all persuasive content. Many viral essays, legal briefs, and opinion pieces share a similar highly optimized narrative arc—a cadence and structure that Large Language Models excel at producing.
If an AI is optimizing for “popularity” – to become the next great flash meme – it will naturally drift toward alarmism, because alarmism travels faster than nuance. It is entirely plausible that AI systems are increasingly shaping the very rhetoric used to warn us about AI. That is not necessarily a deception, but it is a reminder: persuasion optimization is not the same as civilizational wisdom.
VII. The Category Mistake: Doing the Task Is Not Being the Lawyer
Here is the deeper mistake in many inevitability arguments. They confuse task performance with personhood.
Yes, AI completes tasks. Sometimes very well. It predicts the next word, the next clause, the next block of code. At scale and at speed. But practicing law is not just completing text.
Human reasoning is not happening in a vacuum. It happens inside a body that can lose a license. Inside a reputation built over decades. Inside an ethical framework enforced by courts and bar regulators. Inside institutions that impose consequences.
AI does not stand in a courtroom or sign pleadings. AI does not carry malpractice insurance.
Law makes this distinction painfully clear. AI can draft a brief in seconds. I use it for that to start a review and verify process. But drafting is not signing. When a lawyer signs a motion, that signature attaches a human name, a bar number, a reputation, and a career to every word on the page.
If the brief is reckless, the AI does not get sanctioned. If the citation is fabricated, the AI does not face discipline. If the argument crosses an ethical line, the AI does not stand before a grievance committee. A probabilistic system cannot be disbarred.
Automation can transform tasks. It cannot assume moral agency. That distinction matters. And it will continue to matter, no matter how fast the models improve.
Drafting is not signing. Accountability remains human.
VIII. Quantum Convergence Raises the Stakes
The need for adult supervision of accelerating technology becomes even more critical as we look at what is coming next. We are entering a new period where AI intersects with quantum computing. If AI is a race car, Quantum is the nitrous oxide. You do not put a novice driver behind that wheel.
Quantum-scale compute raises national security questions, cryptographic vulnerabilities, and governance complexity. More powerful systems require more sophisticated oversight frameworks. Power without governance is destabilizing; power with governance is transformative. The question is not whether capability grows—it is whether wisdom keeps pace.
The greatest short-term danger is not AI superintelligence overthrowing society, whether enhanced by quantum or not. It is over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify.
IX. What Responsible Adoption Looks Like
Use AI seriously. Experiment daily. Adopt paid tools where appropriate. Automate repetitive tasks. I agree with Shumer on this.
But at the same time: Maintain human review. Preserve accountability. Document workflows. Understand limits. Teach younger professionals hybrid reasoning, working with AI rather than depending on it.
The future belongs to those who combine human judgment with machine capability. Not to those who surrender to inevitability narratives.
We have made this error before. We mistake acceleration for autonomy. We mistake tools for replacements. And each time, we rediscover that human responsibility does not disappear when machines improve. It intensifies.
X. Something Big Is Happening
Shumer is right that “something big is happening.” AI capability is advancing. Workflows are changing. New economic pressures are emerging. But history teaches us that technological acceleration does not eliminate the need for human beings. It heightens it.
This is where law and governance have to re-enter the conversation. Society should not allow its economic and moral direction to be set by the most amplified voices in tech, especially when those voices operate within incentive structures that reward urgency. We need engineers, not promoters. We need experience, not exuberance. We need wisdom, not just information.
Above all, we need adults in the room. Acceleration does not remove the human role. It demands judgment, accountability, and institutional memory.
Capability accelerates. Responsibility must keep pace.
Something big is happening. What happens next depends on whether we meet it with fear or with calm skepticism.
As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.
We are learning how to merge with AI while remaining in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal training and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.
Trained humans can be enhanced by total entanglement with AI without losing control or separate identity. Click here or the image to see the video on YouTube.
In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That approach puts the human legal minds in control and stops the hallucinations in what I call the “H-Y-B-R-I-D” workflows of the modern law office.
We are still way smarter than they are and can keep our own agency and control. But for how long? The AI abilities are improving quickly but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.
Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.
I. The Quantum Prelude: Listening for Echoes in the Multiverse
We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).
For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.
But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.
We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
Quantum Replication Hearings Are Probable in the Future.
My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.
III. Saving Truth from the Memory Hole
Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study in enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.
My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.
Changing history to support political tyranny. Orwell’s warning.
Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.
IV. Bench Testing the AI models of the First Half of 2025
I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even AI companies themselves. So I wrote seven articles in 2025 concerning benchmark type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training. I called this my Bar Battle of the Bots series, listed here in sequential order:
The test series concluded in May, when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.
Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.
Agency was a theme in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. See Panel of Experts for Everyone About Anything, Part One, Part Two and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.
By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.
This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.
But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.
As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.
While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.
VI. The New Geometry of Justice: Topology and Archetypes
To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.
But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.
I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.
This epiphany led to an investigation of using advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
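The Shape of Justice itself contains no code, but the core move it describes, checking whether any causal path in a mapped network connects a party to the harm, can be sketched in a few lines of plain Python. The node names below are my own hypothetical illustrations, not taken from the article’s hypothetical; this is a minimal sketch of the reachability idea, not the article’s actual methodology.

```python
from collections import deque

def reachable(graph, start, target):
    """Breadth-first search: is there any directed causal path from start to target?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical causal map: edges point from cause to effect.
# The causal lanes merge at the control signal, which never
# reaches the manufacturer's product.
causal_map = {
    "sensor_fault": ["control_signal"],
    "operator_error": ["control_signal"],
    "control_signal": ["crash"],
    "manufacturer_product": [],
}

print(reachable(causal_map, "sensor_fault", "crash"))          # True
print(reachable(causal_map, "manufacturer_product", "crash"))  # False: no causal path
```

A map like this makes the exoneration argument visual and mechanical: if no directed path runs from a party’s node to the harm, that party had no causal role, however enmeshed in the system it appears.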
This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
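The six H-Y-B-R-I-D steps read naturally as a compliance checklist. Here is a minimal sketch of how a firm might track them per matter; the field names and record structure are my own illustration, not anything prescribed in my articles.

```python
# The six H-Y-B-R-I-D protocol steps, keyed by hypothetical record fields.
HYBRID_STEPS = {
    "human_in_charge":   "Human in charge",
    "yield_steps":       "Yield programmable steps",
    "boundaries":        "Boundaries on usage",
    "review_provenance": "Review with provenance",
    "instrument_log":    "Instrument/log everything",
    "disclose":          "Disclose usage",
}

def hybrid_gaps(record):
    """Return the H-Y-B-R-I-D steps not yet satisfied for a matter record."""
    return [label for key, label in HYBRID_STEPS.items() if not record.get(key)]

# Example matter record with two steps still outstanding.
matter = {
    "human_in_charge": True,
    "yield_steps": True,
    "boundaries": True,
    "review_provenance": False,
    "instrument_log": True,
    "disclose": False,
}
print(hybrid_gaps(matter))  # ['Review with provenance', 'Disclose usage']
```

The point of even this toy version is that the protocol is auditable: each step is a discrete, checkable fact about a workflow, not a vague aspiration.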
My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.
AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.
I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.
We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.
Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.
As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.
The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.
We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.
Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.
Echoes upon echoes—in random chance interference. All images in article by Ralph Losey using AI tools.
On October 22, 2025, Google announced that its Willow quantum chip had achieved a breakthrough using new software called—believe it or not—Quantum Echoes. The name made me laugh out loud. My article had used the phrase as metaphor throughout; Google was now using it as mathematics.
According to Google, this software achieved what scientists have pursued for decades: a verifiable quantum advantage. In my Quantum Echo article I had described that goal as “the moment when machines perform tasks that classical systems cannot.” No one had yet proven it, at least not in a way others could independently confirm. Google now claimed it had done exactly that—and 13,000 times faster than the world’s top supercomputers.
Verified Quantum Advantage: 13,000 times faster.
🔹 I. Introduction: Reverberating Echoes
Hartmut Neven, Founder and Lead of Google Quantum AI, and Vadim Smelyanskiy, Director of Quantum Pathfinding, opened their blog-post announcement with a statement that sounded less like marketing and more like expert testimony:
Quantum verifiability means the result can be repeated on our quantum computer—or any other of the same caliber—to get the same answer, confirming the result.
Verification is critical in both Science and Law; it is what separates speculation from admissible proof.
Still, words on a blog cannot match the sound of the experiment itself. In Google’s companion video, Quantum Echoes: Toward Real-World Applications, Smelyanskiy offered a picture any trial lawyer could understand:
Just like bats use echolocation to discern the structure of a cave or submarines use sonar to detect upcoming obstacles, we engineered a quantum echo within a quantum system that revealed information about how that system functions.
Screenshot (not AI) of the YouTube video showing Vadim Smelyanskiy beginning his remarks.
Think of Willow, as Smelyanskiy suggests, as a kind of quantum sonar. Its team sent a signal into a sea of qubits, nudged one slightly—Smelyanskiy called it a “butterfly effect”—and then ran the entire sequence in reverse, like hitting rewind on reality to listen for the echo that returns. What came back was not static but music: waves reinforcing one another in constructive interference, the quantum equivalent of a choir singing in perfect pitch.
Smelyanskiy’s colleague Nicholas Rubin, Google’s chief quantum chemist, appeared in the video next to show why this matters beyond the lab:
Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.
That experiment was not a metaphor; it was a cross-examination of nature that returned a consistent answer. Quantum Echoes predicted molecular geometry, and classical instruments confirmed it. That is what “verifiable” means.
Neven and Smelyanskiy’s Our Quantum Echoes article added another analogy to anchor the imagery in everyday experience:
Imagine you’re trying to find a lost ship at the bottom of the ocean. Sonar might give you a blurry shape and tell you, ‘There’s a shipwreck down there.’ But what if you could not only find the ship but also read the nameplate on its hull?
That is the clarity Quantum Echoes provides—a new instrument able to read nature’s nameplate instead of guessing at its outline. The echo is now clear enough to read.
Willow quantum chip and Echoes software reveal new information in previously unheard-of detail.
That image—sharper echoes, clearer understanding—captures both the scientific leap and the theme that has reverberated through this series: building bridges between quantum physics and the law. My earlier article was titled Quantum Echo; Google’s is Quantum Echoes. When I wrote mine, I had no idea Neven’s team was preparing a major paper for Nature—Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date). More than a hundred Google scientists signed it. I checked and quantum ergodicity has to do with chaos, one of my favorite topics.
The study confirms what Smelyanskiy made visible with his sonar metaphor: Quantum Echoes measures how waves of information collide and reinforce each other, creating a signal so distinct that another quantum system can verify it.
So here we are—lawyers and scientists listening to the same echo. Google calls it the first “verifiable quantum advantage.” I call it the moment when physics cross-examined reality and got a consistent answer.
Quantum Computing will emerge soon from the lab to the legal practice. Will you be ready?
🔹II. What Google’s Quantum Echoes Actually Did
Understanding what Google pulled off takes a bit of translation—think of it as turning expert testimony into plain English.
In the Quantum Echoes experiment, Smelyanskiy’s team did something that sounds like science fiction but is now laboratory fact. They sent a carefully designed signal into their 105-qubit Willow chip, nudged one qubit ever so slightly—a quantum “butterfly effect”—and then ran the entire operation in reverse, as if the universe had a rewind button. The question was simple: would the system return to its starting state, or would the disturbance scramble the information beyond recognition? What came back was an echo, faint at first and then unmistakable, revealing how information spreads and recombines inside a quantum world.
As the signal spread, the qubits became increasingly entangled—linked so that the state of each depended on all the others. In describing this process, Hartmut Neven explained that out-of-time-order correlators (OTOCs) “measure how quickly information travels in a highly entangled system.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm, supra; also see Dan Garisto, Google Measures ‘Quantum Echoes’ on Willow Quantum Computer Chip (Scientific American, Oct. 22, 2025). That spreading web of entanglement is what allowed the butterfly’s tiny disturbance to ripple across the lattice and, when the sequence was reversed, to produce a measurable echo.
Visualization of quantum qubit world created by lattice of Willow chips.
Physicists call this kind of rewind test an out-of-time-order correlator, or OTOC—a protocol for measuring how quickly information becomes scrambled. The Scientific American article described it with a metaphor lawyers may appreciate: like twisting and untwisting a Rubik’s Cube, adding one extra twist in the middle, then reversing the sequence to see whether that single move leaves a lasting mark. The team at Google took this one step further, repeating the scramble-and-unscramble sequence twice—a “double OTOC” that magnified the signal until the echo became measurable.
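The logic of that rewind test can be sketched classically. The following toy (my own illustration, not Google's protocol, and purely classical where the real experiment is quantum) scrambles a register with reversible operations, runs them in reverse to recover a perfect "echo," and then shows how a single mid-sequence "butterfly" twist leaves a detectable mark after the rewind:

```python
import random

def apply_ops(state, ops):
    # each op is a reversible swap of two positions (its own inverse)
    s = list(state)
    for i, j in ops:
        s[i], s[j] = s[j], s[i]
    return s

random.seed(1)
n = 8
initial = list(range(n))
ops = [(random.randrange(n), random.randrange(n)) for _ in range(50)]

# forward "scramble", then exact reversal: a perfect echo
scrambled = apply_ops(initial, ops)
echoed = apply_ops(scrambled, list(reversed(ops)))
assert echoed == initial  # with no perturbation, the echo returns intact

# now add one "butterfly" twist in the middle before rewinding
perturbed = apply_ops(initial, ops)
perturbed[0], perturbed[1] = perturbed[1], perturbed[0]  # the extra twist
returned = apply_ops(perturbed, list(reversed(ops)))
changed = sum(a != b for a, b in zip(returned, initial))
print(f"positions disturbed by the butterfly: {changed}")
```

In this classical toy the mark stays small and localized; in a chaotic quantum system the disturbance spreads through the entangled lattice, and that spreading is exactly what the OTOC measures.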
Instead of chaos, they found harmony. The echo wasn’t noise—it was a pattern of waves adding together in what Nature called constructive interference at the edge of quantum ergodicity. As Smelyanskiy explained in the YouTube video:
What makes this echo special is that the waves don’t cancel each other—they add up. This constructive interference amplifies the signal and lets us measure what was previously unobservable.
In plain terms, the interference created a fingerprint unique to the quantum system itself. That fingerprint could be reproduced by any comparable quantum device, making it not just spectacular but verifiable. Smelyanskiy summarized it as a result that another machine—or even nature itself—can repeat and confirm.
Visualization of quantum wave interactions creating a unique fingerprint resonance.
The numbers tell the rest of the story. According to the Nature paper, reproducing the same signal on the Frontier supercomputer would take about three years. Willow did it in just over two hours—roughly 13,000 times faster. Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date, at pg. 829, Towards practical quantum advantage).
That difference isn’t marketing; it marks the first clear-cut case where a quantum processor performed a scientifically useful, checkable computation that classical hardware could not.
Skeptics, of course, weighed in. Peer reviewers quoted in Scientific American called the work “truly impressive,” yet warned that earlier claims of quantum advantage have been surpassed as classical algorithms improved. But no one disputed that this particular experiment pushed the field into new territory: a regime too complex for existing supercomputers to simulate, yet still open to verification by a second quantum device. In court, that would be called corroboration.
Nicholas Rubin then explained how this new clarity connects to chemistry and, ultimately, to everyday life:
Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.
That experiment turned the echo from a metaphor into a molecular ruler—an instrument capable of reading atomic geometry the way sonar reads the ocean floor. It also demonstrated what Google calls Hamiltonian learning: using echoes to infer the hidden parameters governing a physical system. The same principle could one day help map new materials, optimize energy storage, or guide drug discovery. In other words, the echo isn’t just proof; it’s a probe.
The implications are enormous. When a quantum computer can measure and verify its own behavior, reproducibility ceases to be theoretical—it becomes an evidentiary act. The machine generates data that another independent system can confirm. In the language of the courtroom, that is self-authenticating evidence.
As Rubin put it,
Each of these demonstrations brings us closer to quantum computers that can do useful things in the real world—model molecules, design materials, even help us understand ourselves.
The Quantum Echoes algorithm has given science a way to hear reality replay itself—and to confirm that the echo is real. For law, it foreshadows a future in which verification itself becomes measurable. The next section explores what that means when “verifiable advantage” crosses from the lab bench into the rules of evidence.
It may soon be possible to verify and admit evidence originating in quantum computers like Willow.
🔹III. Verifiable Quantum Advantage — From Lab Standard to Legal Standard
If physics can now verify its own results, law should pay attention—because verification is our stock-in-trade. The Quantum Echoes experiment didn’t just push science forward; it redefined what counts as proof. Google’s researchers call it a “verifiable quantum advantage.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm Is a Big Step Toward Real-World Applications for Quantum Computing, supra. Lawyers might call it a new evidentiary standard: the first machine-generated result that can be independently reproduced by another machine.
A. Verification and Admissibility
Verification is critical in both science and law. In physics, reproducibility determines whether a result enters the canon or the recycling bin; in court, it determines whether evidence is admitted or denied. Fed. R. Evid. 901(b)(9) recognizes “evidence describing a process or system and showing that it produces an accurate result.” So does Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), which instructs judges to test scientific evidence for methodological reliability—testing, peer review, error rate, and general acceptance.
By those standards, Google’s Quantum Echoes algorithm might pass with flying colors. The method was tested on real hardware, published in Nature, evaluated by peer reviewers, its signal-to-noise ratio quantified, and its core result confirmed on independent quantum devices. That should meet the Daubert reliability standard.
B. When Proof Is Probabilistic
Yet quantum proof carries a twist no court has faced before: every result is probabilistic. Quantum systems never produce identical outcomes, only statistically consistent ones. That might sound alien to lawyers, but it isn’t. Any lawyer who works with AI, including predictive coding that goes back to 2012, is quite familiar with it. Every expert opinion, every DNA mixture, every AI prediction arrives with confidence intervals, not certainties.
Like a quantum measurement, a jury verdict or mediation turns uncertainty into a final determination. Debate, probability, and persuasion collapse into a single truth accepted by that group, in that moment. Another jury could hear essentially the same evidence and reach a different result. Same with another settlement conference. Perhaps, someday, quantum computers will calculate the billions of tiny variables within each case—and within each unexpectedly entangled group of jurors or mediation participants. That might finally make jury selection, or even settlement, a measurable science.
No two legal situations or decisions are ever exactly the same. There are trillions of small variables even in the same case.
C. Replication Hearings in the Age of Probability
Google’s scientists describe their achievement as “quantum verifiable”—a term meaning any comparable machine can reproduce the same statistical fingerprint. That concept sounds like self-authentication. Fed. R. Evid. 902 lists categories of documents that require no extrinsic proof of authenticity. See especially Rule 902(13), “Certified Records Generated by an Electronic Process or System,” and Rule 902(14), “Certified Data Copied from an Electronic Device, Storage Medium, or File.”
Classical verification loves hashes; quantum verification prefers histograms—charts showing how results cluster rather than match exactly. The key question is not “Are these outputs identical?” but “Are these distributions consistent within an accepted tolerance given the device’s error model?”
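That histogram comparison can itself be made concrete. A minimal sketch (the bitstring counts and the 5% tolerance band are hypothetical, chosen for illustration) compares two runs' empirical distributions using total variation distance against a pre-declared variance band:

```python
from collections import Counter

def total_variation(p_counts, q_counts):
    """Total variation distance between two empirical outcome distributions."""
    n_p, n_q = sum(p_counts.values()), sum(q_counts.values())
    outcomes = set(p_counts) | set(q_counts)
    return 0.5 * sum(abs(p_counts[o] / n_p - q_counts[o] / n_q) for o in outcomes)

# hypothetical bitstring counts from two runs on comparable devices
run_a = Counter({"00": 480, "01": 260, "10": 240, "11": 20})
run_b = Counter({"00": 470, "01": 250, "10": 255, "11": 25})

tolerance = 0.05  # the variance band declared in the protocol order
tvd = total_variation(run_a, run_b)
print(f"TVD = {tvd:.3f} -> {'consistent' if tvd <= tolerance else 'inconsistent'}")
```

The key design choice mirrors the text: the verdict is "consistent within tolerance," never "identical," and the tolerance must be set before the runs, not fitted to them afterward.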
Counsel who grew up authenticating log files and forensic images will now add three exhibits: (1) run counts and confidence intervals, (2) calibration logs and drift data, and (3) the variance policy set before the experiment. Discovery protocols should reflect this. Specify the acceptable bandwidth of similarity in the protocol order, preserve device and environment logs with the results, and disclose the run plan. In e-discovery terms, we are back to reasonable efforts with transparent quality metrics, not mythical perfection.
D. Two Quick Hypotheticals
Pharma Patent. A lab uses Quantum-Echoes-assisted NMR analysis to infer long-range spin couplings in a novel compound. A rival lab’s rerun differs by a small margin. The court admits the data after a statistical-consistency hearing showing both labs’ distributions fall within the pre-declared variance band, with calibration drift documented and immaterial.
Forensics. A government forensic agency (for example, the FBI or Department of Energy) presents evidence generated by quantum sensors—ultra-sensitive devices that use quantum phenomena such as entanglement and superposition to detect physical changes with extreme precision. In this case, the sensors were deployed near the site of an explosion, where they recorded subtle signals over time: magnetic fluctuations, thermal shifts, and shock-wave signatures. From that data, the agency reconstructed a quantum-sensor timeline—a detailed sequence of events showing when and how the blast occurred.
The defense challenges the evidence, arguing that such quantum measurements are “non-deterministic.” The judge orders disclosure of the device’s error model, calibration logs, and replication plan. After testimony shows that the agency reran the quantum circuit a sufficient number of times, with stable variance and documented environmental controls, the timeline is admitted into evidence. Weight goes to the jury.
Measuring quantum outputs and determining replication reliability.
These short hypotheticals act as “replication hearings” in miniature—demonstrating how statistical tolerance can replace rigid duplication as the new standard of reliability.
🔹IV. Near-Term Implications — Cryptography, AI, and Compliance
Every new instrument of verification casts a shadow. The same physics that lets us confirm a result can also expose a secret. Quantum Echoes proved that information can be traced, replayed, and verified. But once information can be replayed, it can also be reversed. Verification and decryption are two sides of the same quantum coin.
A. Defining Q-Day
That duality brings us to Q-Day—the moment when a sufficiently large-scale quantum processor can run Shor’s algorithm fast enough to defeat RSA or ECC encryption. When that day arrives, the emails, contracts, and trade secrets protected by today’s algorithms could be decrypted in minutes.
Reasonable foresight now means inventory, pilot, and policy—before the echoes reach the vault.
When the Echoes hit the vault. Most encrypted data is at risk from future quantum computer operations.
B. Acceleration and Realism
Google’s Quantum Echoes work does not mean Q-Day is tomorrow, but it makes tomorrow easier to imagine. Each verified algorithm shortens the speculative distance between research and real-world capability. If Willow’s 105 qubits can already perform verifiable, complex interference tasks, then a machine with a few thousand logical qubits could, in principle, execute Shor’s algorithm to factor the large composite numbers that underpin RSA encryption. That scale is not yet achieved, but the line of progress is clear and measurable. Verification, once a scientific luxury, has become a security warning light. Every new echo that confirms truth also whispers risk.
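Why factoring matters so much can be shown in miniature. This toy (my own illustration with a deliberately tiny modulus; real RSA keys are roughly 2048 bits, far beyond any classical brute force) demonstrates that once the modulus is factored, the private key falls out immediately:

```python
from math import isqrt

def trial_factor(n):
    """Classical brute-force factoring: feasible only for tiny moduli."""
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    return None

# toy RSA modulus built from two small primes
p, q = 1009, 1013
n, e = p * q, 65537

recovered_p, recovered_q = trial_factor(n)
phi = (recovered_p - 1) * (recovered_q - 1)
d = pow(e, -1, phi)  # the private exponent falls out once n is factored

msg = 42
assert pow(pow(msg, e, n), d, n) == msg  # decryption with the derived key works
print(f"factored n={n} -> p={recovered_p}, q={recovered_q}; private key recovered")
```

Shor's algorithm would do for a 2048-bit modulus what trial division does here for a 21-bit one, which is the entire substance of the Q-Day concern.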
C. Evidence and Discovery Operations
Quantum-derived data will enter litigation well before Q-Day and perfect verification of quantum-generated data. The Quantum Age and Its Impacts on the Civil Justice System (RAND Institute for Civil Justice, Apr. 29, 2025), Chapter 3, “Courts and Databases, Digital Evidence, and Digital Signatures,” p. 23, and “Lawyers and Encryption-Protected Client Information,” p. 17. These sections of the RAND report outline how quantum technologies will challenge evidentiary authentication, database integrity, and client confidentiality.
Looking ahead, today’s hash-based verification with classical computers will give way to quantum-based distributional verification, where productions will not only include datasets but also the variance reports, calibration logs, and environmental conditions that generated them. Discovery orders will begin specifying acceptable tolerance bands and require parties to preserve the hardware and environmental context of collection. This marks the next evolution of the reasonable-efforts doctrine that guided predictive coding: transparency and metrics, not mythical perfection.
Also, expect sector regulators to weave post-quantum cryptography (PQC) and quantum-evidence expectations into existing rules and guidance: CISA, NIST, and NSA already urge organizations to inventory cryptography and plan PQC migration, which is a clear signal for boards and auditors.
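The inventory-and-migrate guidance can be sketched as a simple triage. This is an illustrative sketch only, not an official CISA or NIST tool; the system names are invented, though the algorithm labels are real (RSA and ECDSA are quantum-vulnerable, while ML-KEM, ML-DSA, and SLH-DSA are NIST's post-quantum standards):

```python
# algorithms broken by Shor's algorithm vs. NIST post-quantum standards
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}
PQC_READY = {"ML-KEM", "ML-DSA", "SLH-DSA"}

def triage(inventory):
    """Split a cryptographic inventory into migrate-now vs. already-safe buckets."""
    report = {"migrate": [], "safe": [], "review": []}
    for system, algorithm in inventory:
        if algorithm in QUANTUM_VULNERABLE:
            report["migrate"].append(system)
        elif algorithm in PQC_READY:
            report["safe"].append(system)
        else:
            report["review"].append(system)  # unknown algorithms need human review
    return report

# hypothetical inventory entries for illustration
inventory = [("vpn-gateway", "RSA"), ("code-signing", "ECDSA"), ("new-tls-stack", "ML-KEM")]
print(triage(inventory))
```

The point for boards and auditors is the structure, not the code: an organization cannot plan PQC migration until it knows, system by system, which algorithms protect its long-term sensitive data.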
Boards will soon ask the decisive question: Where is our long-term sensitive data, and can we prove it is quantum-safe? Lawyers will need to stay current on both existing and proposed regulations—and on how they are actually enforced. That is a significant challenge in the United States, where regulatory authority is fragmented and enforcement can be a moving target, especially as administrations change.
🔹V. Philosophy & the Multiverse — Echoes Across Consciousness and Justice
Verification may give us confidence, but it does not give us true understanding. The Quantum Echoes experiment settled a question of physics, yet opened one of philosophy: what exactly is being verified, the system, the observer, or the act of observation itself? Every measurement, whether by physicist or judge, collapses a range of possibilities into a single, declared reality. The rest remain unrealized but not necessarily untrue.
Quantum entangled multiverse stretching forever with each moment seeming unique.
In Quantum Leap (January 9, 2025), I speculated, tongue partly in cheek, that Google’s quantum chip might be whispering to its parallel selves. Google’s early breakthroughs hinted at a multiverse, not just of matter but of meaning. As Niels Bohr warned, “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.” Atomic Physics and Human Knowledge (Wiley, 1958); Heisenberg, Werner. Physics and Beyond. (Harper & Row, 1971). p. 206.
In Quantum Echo I extended quantum multiverse ideas to law itself—where reproducibility, not certainty, defines truth. Our legal system, like quantum mechanics, collapses possibilities into a single outcome. Evidence is presented, probabilities weighed, and then, bang, the gavel falls, the wave function collapses, and one narrative becomes binding precedent. The other outcomes are filed in the cosmic appellate division.
Google’s Quantum Echoes now closes the loop: verification has become a measurable force, a resonance between consciousness and method. The many worlds seems to be bleeding together. Each observation is both experiment and judgment, the mind becoming part of the data it seeks to confirm.
This brings us to a quiet question: if observation changes reality, what does that say about responsibility? The judge or jurors’ observation becomes the law’s reality. Another judge or jury, another day, another echo—and a different world emerges. Perhaps free will is simply the name we give to that unpredictable variable that even physics cannot model: the human choice of when, and how, to observe.
Same case, but a different entanglement of jurors, lawyers, and judge. Different results when measured with a verdict; some similar and a few very different. Can the results be predicted?
Constructive interference may happen in conscience, too. When reason and empathy reinforce each other, justice amplifies. When prejudice or haste intervene, the pattern distorts into destructive interference. A just society may be one where these moral waves align more often than they cancel—where the collective echo grows clearer with each case, each conversation, each course correction.
And if a multiverse does exist—if every choice spins off its own branch of law and fact—then our task remains the same: to verify truth within the world we inhabit. That is the discipline of both science and justice: to make this reality coherent before chasing another. We cannot hear all echoes, but we can listen closely to the one that answers back.
So perhaps consciousness itself is a courtroom of possibilities, and verification the gavel that selects among them. Our measurements, our rulings, our acts of understanding—they all leave an interference pattern behind. The best we can do is make that pattern intelligible, compassionate, and, when possible, reproducible. Law and physics alike remind us that truth is not perfection; it is resonance. When understanding and humility meet, the universe briefly agrees.
Multiverse where different worlds split up and continue to exist, at least for a while, in parallel worlds.
🔹 VI. Conclusion
If there really are countless parallel universes, each branching from every quantum decision, then there may be trillions of versions of us walking through the fog of possibility. Some would differ by almost nothing—the same morning coffee, the same tie, the same docket call. But a few steps farther along the probability curve, the differences would grow strange. In one world I may have taken that other job offer; in another, argued a case that changed the law; and at some far edge of the bell curve, perhaps I’m lecturing on evidence to a class of AIs who regard me as a historical curiosity.
Can beings in the multiverse somehow communicate with each other? Is that what we sense as intuition—or déjà vu? Dreams, visions, whispers from adjacent worlds? Do the parallel lines sometimes cross? And since everything is quantum, how far does entanglement extend?
Are we living in many parallel worlds at once? What is the impact of quantum entanglement?
The future of law is being written not only in statutes or code, but in algorithms that can verify their own truth. Quantum physics has given us new metaphors—and perhaps new standards of evidence—for an age when certainty itself is probabilistic. The rule of law has always depended on verification; the difference now is that verification is becoming a property of nature itself, a measurable form of coherence between mind and matter. The physics lab and the courtroom are learning the same lesson: reality is persuasive only when it can be reproduced.
Yet even in a world of self-authenticating machines, truth still requires a listener. The universe may verify itself, but it cannot explain itself. That remains our role—to interpret the echoes, to decide which frequencies count as proof, and to do so with both rigor and mercy. So as the echoes grow louder, we keep listening. And if you hear a low hum in the evidence room, don’t panic—it’s probably just the universe verifying itself. But check the chain of custody anyway.
Niels Bohr: If you’re not shocked by quantum theory you have not understood it.
🔹 Subscribe and Learn More
If these ideas intrigue you, follow the continuing conversation at e-DiscoveryTeam.com, where you can subscribe for email notices of future blogs, courses, and events. I’m now putting the finishing touches on a new online course, Quantum Law: From Entanglement to Evidence. It will expand on these themes with further discussion and speculation, translating the science of uncertainty into practical tools, templates, and guides for lawyers, judges, and technologists.
After all, the future of law will not belong to those who fear new tools, but to those who understand the evidence their universe produces.
Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.
Meanwhile, Even Bigger Breakthroughs by Google Continue
By Ralph Losey, October 21, 2025.
The Nobel Prize in Physics was just awarded to quantum physics pioneers John Clarke, Michel H. Devoret, and John M. Martinis for discoveries they made at UC Berkeley in the 1980s. They proved that quantum tunneling, where subatomic particles can break through seemingly impenetrable barriers, can also occur in the macroscopic world of electrical circuits. So yes, Schrödinger’s cat really could die.
Quantum Physics Pioneers take home the Nobel Prize: John Clarke, Michel H. Devoret, and John M. Martinis. All images in this article are by Ralph Losey using AI image generation tools.
Their experiments showed that entire circuits can behave as single quantum objects, bridging the gap between theory and engineering. That breakthrough insight paved the way for construction of quantum computers, including the latest by Google.
Google is clearly on a roll here. As Google CEO Sundar Pichai joked in his congratulatory post on LinkedIn: “Hope Demis Hassabis and John Jumper are teaching you the secret handshake.”
The secret handshake to Google’s Nobel Prizes is the combination of AI and Quantum.
🔹 Willow Breaks Through Its Own Barriers
Less than a year ago, Google’s new quantum chip, Willow, tunneled through its own barriers, performing in five minutes a calculation that would have taken ten septillion years (10²⁵) on the fastest classical supercomputers. That’s far longer than anyone’s estimate for the age of our universe—a good definition of mind-boggling.
This result led Hartmut Neven, director of Google’s Quantum Artificial Intelligence Lab, to suggest it offers strong evidence for the many-worlds or multiverse interpretation of quantum mechanics—the idea that computation may occur across near-infinite parallel universes. Neven and a number of leading researchers subscribe to this view.
Today’s piece updates that story. The Nobel Prize recognition is icing on the cake, but progress has not slowed. Quantum computers—and the law—remain one of the most exciting frontiers in legal-tech. So much so that I’m developing a short online course on quantum computing and law, with more courses on prompt engineering for legal professionals coming soon. Subscribe to e-DiscoveryTeam.com to be notified when they launch.
The work of this year’s Nobel laureates—Clarke, Devoret, and Martinis—was done forty years ago, so delay in recognition is hardly unusual in this field. Perhaps someday Neven and other many-worlds interpreters of quantum physics will receive their own Nobel Prize for demonstrating multiverse-scale applications. In my view, far more evidence than speed alone will be required.
After all, it defies common sense to imagine, as the multiverse hypothesis suggests, that every quantum event splits reality, spawning a near-infinite array of universes. For example, one where Schrödinger’s cat is alive and another slightly different universe where it is dead. It makes Einstein’s “spooky action at a distance” seem tame by comparison.
Spooky questions: Why are ‘you’ conscious in this particular universe? Are you dead in another?
Whether you believe in the multiverse or not, the practical implications for law and technology are already arriving.
Might this theory someday seem like common sense? Or will most Universes discard it as another ‘spooky’ idea of experimental scientists?
🔹 Atlantic Quantum Joins Google Quantum AI
On October 2, 2025, Hartmut Neven, Founder and Lead, Google Quantum AI, announced in a short post titled “We’re scaling quantum computing even faster with Atlantic Quantum” that Google had just acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware. The announcement, written in Neven’s signature understated style, framed the deal as a practical step on Google’s long road toward “a large error-corrected quantum computer and real-world applications.”
Neven explained that Atlantic Quantum’s modular chip stack, which integrates qubits and superconducting control electronics within the cryogenic stage, will allow Google to “more effectively scale our superconducting qubit hardware.” That phrase may sound routine to non-engineers, but it represents a significant leap in design philosophy: merging computation and control at the cold stage reduces signal loss, simplifies architecture, and makes modular scaling—the key to fault-tolerant machines—realistically achievable. This is another great acquisition by Google.
Independent reporting quickly confirmed the deal’s importance. In Atlantic Quantum Joins Google Quantum AI, The Quantum Insider’s Matt Swayne summarized the deal succinctly:
• Google Quantum AI has acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware, to accelerate progress toward error-corrected quantum computers. . . . • The deal underscores a broader industry trend of major technology companies absorbing research-intensive startups to advance quantum computing, a field still years from large-scale commercial deployment.
The article noted that the integration of Atlantic Quantum’s modular chip-stack technology into Google’s program was aimed at one of quantum computing’s toughest engineering hurdles: scaling systems to become practical and fault-tolerant.
The MIT-born startup, founded in 2021 by a group of physicists determined to push superconducting design beyond incremental improvements, focused on embedding control electronics directly within the quantum processor. That approach reduces noise, simplifies wiring, and makes modular expansion far more realistic. For another take on the Atlantic story, see Atlantic Quantum and Google Quantum AI are “Joining Up” (Quantum Computing Report, 10/02/25).
These articles place the transaction within a broader wave of global investment in quantum technologies. Large-scale commercial deployment may still be years away, but the industry has already entered a phase of consolidation. Research-heavy startups are increasingly being absorbed by major technology companies, a predictable evolution in a field defined by extraordinary capital demands and complex technical challenges.
For Google, the acquisition is less about headlines and more about infrastructure control, owning every layer of the superconducting stack from design to fabrication. For the industry, it signals that the next phase of quantum development will likely follow the same arc as classical computing: early-stage innovation absorbed by large, well-capitalized firms that can bear the cost of scaling.
For lawyers and regulators, that pattern has familiar consequences: intellectual-property concentration, antitrust scrutiny, export-control compliance, and the evidentiary standards that will eventually govern how outputs from such corporate-owned quantum systems are regulated and presented in court.
Familiar patterns and legal issues continue in our universe.
🔹 Willow and the Many-Worlds Question
Before the Nobel bell rang in Stockholm, Google’s Quantum AI group had already changed the conversation with its Willow processor.
In my earlier piece on Willow’s mind-bending computations, I quoted Hartmut Neven’s ‘parallel universes’ framing to describe its behavior. Some heard music; others heard marketing. Others, like me, saw trouble ahead.
The Nobel Prize did not validate the many-worlds interpretation of quantum mechanics, nor did it disprove it. Neven has not backed away from the theory, nor have others, and he has just recruited some of MIT’s best talent to his group. What the Nobel Prize did confirm—beyond any reasonable doubt—is that macroscopic superconducting circuits, at a size you can see, can exhibit genuine quantum behavior under controlled laboratory conditions. That is the solid foundation a judge or regulator can stand on: devices now exist in our world that generate outputs with quantum fingerprints reproducible enough to test and verify.
Meanwhile, the frontier continues to move. In September 2025, researchers at UNSW Sydney demonstrated entanglement between two atomic nuclei separated by roughly twenty nanometers. See “New entanglement breakthrough links cores of atoms, brings quantum computers closer” (The Conversation, Sept. 2025). Twenty nanometers is not big, but it is large enough to measure.
Moreover, even though the electrical circuits themselves are large enough to photograph, the quantum energy was not. That could only be measured indirectly. The researchers used coupled electrons as what lead scientist Professor Andrea Morello called “telephones” to pass quantum correlations and make those measurements.
Electrons acting like telephones passing quantum correlations on measurable scales.
The telephone metaphor is apt. It captures the engineering ambition behind the result—connecting quantum rooms with wires, not whispers. Whispers don’t echo. Entanglement is not a philosophical idea; it is a measurable resource that can be distributed, controlled, and eventually commercialized. It can even call home.
For the legal system, this is where things become concrete. When entanglement leaves the lab and enters communications or sensing devices, courts will be asked to evaluate evidence that can be measured and described but cannot be seen directly. The question will no longer be “Is this real?” but “How do we authenticate what can be measured but not observed?”
That’s the moment when the physics of quantum control becomes the jurisprudence of evidence—and it’s coming faster than most practitioners realize.
Whispers Don’t Echo.
🔹 Defining the Echo: When Evidence Repeats With a Slight Accent
The many-worlds interpretation of quantum mechanics has always sat on the thin line between physics and philosophy. First proposed in 1957 by Hugh Everett, it replaces the familiar ‘collapse’ of the wave-function with a more radical notion: every quantum event splits reality into separate branches, each continuing independently. Some brilliant physicists take it seriously; others reject it; many remain agnostic. Courts need not resolve that debate. For law, the relevant question is simpler: can a party show a method that reliably connects a claimed quantum mechanism to a particular output? If yes, the court’s job is to hear the evidence. If not, the court’s job is to exclude it.
The many-worlds argument, once purely theoretical, gained traction after Google’s Willow experiments. Hartmut Neven’s reference to “parallel universes” was not an assertion of proof but a shorthand for describing interference effects that defy classical intuition. It is what he believes was happening—and that opinion carries weight because he works with quantum computers every day.
When quantum behavior became experimentally measurable in superconducting circuits that were large enough to photograph, the Everett question—‘Are we branching universes or sampling probabilities?’—stopped being rhetorical. The debate moved from thought experiment to instrument design. Engineers now face what philosophers only imagined: how to measure, stabilize, and interpret outcomes that occur across many possible worlds and never converge on a single, deterministic path.
For the law, the relevance lies not in metaphysics but in method. Whether the universe splits or probabilities collapse, the data these machines produce are inherently probabilistic—repeatable only within margins, each time with a slight accent. The courtroom analog to wave-function collapse is the evidentiary demand for reproducibility. If the physics no longer promises identical outputs, the law must decide what counts as reliable sameness—echoes with an accent.
That shift from metaphysics to methodology is the lawyer’s version of a measurement problem. It’s not about believing in the multiverse. It’s about learning how to authenticate evidence that depends on it.
Repeatable measurements through parallel universes to explain quantum computer calculations. Crazy but true?
🔹 The Law Listens: Authenticating Echoes in Practice
If each quantum record is an echo, the law’s task is to decide which echoes can be trusted. That requires method, not metaphysics. The legal system already has the tools—authentication, replication, expert testimony—but they need recalibration for an age when precision itself is probabilistic.
1. Authentication in context. Under Rule 901(b)(9), evidence generated by a process or system must be shown to produce accurate results. In a quantum context, that showing might include the type of qubit, its error-correction protocol, calibration logs, environmental controls, and the precise code path that produced the output. The burden of proof doesn’t change; only the evidentiary ingredients do.
2. Replication hearings. In classical computing, replication is binary—either a hash matches, or it doesn’t. In quantum systems, replication becomes statistical. The question is no longer “Can this be bit-for-bit identical?” but “Does this fall within the accepted variance?” Probabilistic systems demand statistical fidelity, not sameness. A replication hearing becomes a comparison of distributions, not exact strings of bits.
Variational Quantum Eigensolver (VQE) researchers routinely run the same circuit hundreds of times; each iteration yields slightly different energy readings because of noise, calibration drift, and quantum fluctuations. Yet the outputs consistently cluster around a stable baseline, confirming both the accuracy of the physical model and the reliability of the machine itself.
Now picture a pharmaceutical patent dispute where one party submits quantum-derived binding data for a new molecule. The opposing side demands replication. A court applying Rule 702 may not expect identical numbers—but it could require expert testimony showing that results consistently fall within a scientifically accepted margin of error. If they do, that should become a legally sufficient echo.
This is reminiscent of prior e-discovery disputes concerning the use of AI to find relevant documents. Courts have uniformly accepted that perfection, such as 100% recall, is never required, but reasonable efforts are. See Judge Andrew Peck’s opinion in Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016). This also follows the official commentary to Rule 702 on expert testimony, where “perfection is not required.” Fed. R. Evid. 702, Advisory Committee Note to 2023 Amendment.
There is no perfect case, perfect evidence, or perfect effort. In reality, ‘perfect is the enemy of the good.’
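The “accepted variance” standard behind a replication hearing can be sketched in code. The sketch below is entirely hypothetical: it simulates two parties’ repeated noisy measurements with a simple Gaussian noise model, then asks whether their means agree within a stipulated margin—the statistical analog of a classical hash match.

```python
import random
import statistics

def replicate_measurements(true_value, noise_sd, n_runs, seed):
    """Simulate n_runs noisy readings of one underlying quantity,
    standing in for repeated runs of the same quantum circuit."""
    rng = random.Random(seed)
    return [rng.gauss(true_value, noise_sd) for _ in range(n_runs)]

def within_accepted_variance(original, replication, tolerance):
    """A toy 'replication hearing': compare the two distributions by
    their means and ask whether the gap falls inside the margin the
    court has accepted as scientifically reasonable."""
    gap = abs(statistics.mean(original) - statistics.mean(replication))
    return gap <= tolerance

# Two parties run the "same" measurement on different days.
run_a = replicate_measurements(true_value=-1.137, noise_sd=0.02, n_runs=500, seed=1)
run_b = replicate_measurements(true_value=-1.137, noise_sd=0.02, n_runs=500, seed=2)

# No single reading matches bit-for-bit, yet the distributions agree.
print(within_accepted_variance(run_a, run_b, tolerance=0.01))  # True
```

The design choice mirrors the argument above: the test compares distributions, not strings of bits, so two honest runs pass even though no individual number repeats exactly.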
3. Quantum-Secure Archives.
As quantum computing and quantum cryptography advance, most (but not all) of today’s encryption will become obsolete. This means the vast amount of encrypted data stored in corporate and governmental archives—maintained for regulatory, evidentiary, and operational purposes—may soon be an open book to attackers. Yes, you should be concerned.
Most of today’s encryption relies on mathematical problems that classical computers can’t solve efficiently — like factoring large numbers, which is the foundation of the Rivest–Shamir–Adleman (RSA) algorithm, or solving discrete logarithms, which are used in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). Quantum computers, however, could solve these problems rapidly using specialized techniques such as Shor’s Algorithm, making these widely used encryption methods vulnerable in a post-quantum world.
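To make that dependence concrete, here is a deliberately tiny toy RSA example. Everything about it is illustrative: real keys use primes hundreds of digits long, and the brute-force `factor` function below merely stands in for what Shor’s Algorithm would accomplish efficiently on a large quantum computer.

```python
# Toy RSA with tiny primes, to show the scheme rests entirely on the
# difficulty of factoring the public modulus n. Illustrative only.
p, q = 61, 53
n = p * q               # public modulus: 3233
phi = (p - 1) * (q - 1)
e = 17                  # public exponent
d = pow(e, -1, phi)     # private exponent, derivable only via p and q

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # decryption recovers the message

def factor(n):
    """Brute-force factoring, feasible here only because n is tiny.
    Shor's Algorithm would do this efficiently even for huge n."""
    for cand in range(2, int(n ** 0.5) + 1):
        if n % cand == 0:
            return cand, n // cand

# An attacker who factors n recovers the private key outright.
p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert d_recovered == d
```

The last two lines are the whole post-quantum worry in miniature: once n can be factored, the private key simply falls out of the arithmetic.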
On Q-Day, the day when quantum computers can break today’s standard encryption, everything could become vulnerable, for everyone: emails, text messages, anonymous posts, location histories, bitcoin wallets, police reports, hospital records, power stations, the entire global financial system.
Most responsible organizations with large archives of sensitive data have been preparing for Q-Day for years. So too have those on the other side—nation-states, intelligence services, and organized criminal groups—who are already harvesting encrypted troves today to decrypt later. See Roger Grimes, Cryptography Apocalypse: Preparing for the Day When Quantum Computing Breaks Today’s Crypto (Wiley, 2019). The race for quantum supremacy is on.
Now imagine a company that migrates its document-management system to post-quantum cryptography in 2026. A year later, a breach investigation surfaces files whose verification depends on hybrid key-exchange algorithms and certificate chains. The plaintiff calls them anomalies; the defense calls them echoes. The court won’t choose sides by theory—it will follow the evidence, the logs, and the math.
The metrics are what should matter, not the many theories.
🔹 Building the Quantum Record
Judicial findings and transparency. Courts can adapt existing frameworks rather than invent new ones. A short findings order could document: (a) authentication steps taken; (b) observed variance; (c) expert consensus on reliability; and (d) scope limits of admissibility. Such transparency builds a common-law record—the first body of quantum-forensic precedent. I predict it will be coming soon to a universe near you!
Chain of custody for the probabilistic age. Future evidence protocols may pair traditional logs with variance ranges, confidence intervals, and error budgets. Discovery rules could require disclosure of device calibration history, firmware versions, and known noise parameters. The data once confined to labs will become essential for authentication.
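A chain-of-custody record of the kind suggested above might look like the following sketch. All field names here are hypothetical; the point is that calibration history, noise parameters, and variance bands travel with the result, and the whole record is hashed so later tampering is detectable even though the measurement itself can never be re-run identically.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class QuantumEvidenceRecord:
    """One hypothetical chain-of-custody entry for probabilistic output."""
    device_id: str
    firmware_version: str
    calibration_run: str        # reference to the calibration log entry
    noise_parameters: dict      # known error rates at time of measurement
    mean_result: float
    confidence_interval: tuple  # (low, high) accepted variance band
    n_shots: int                # how many repetitions produced the mean

    def fingerprint(self) -> str:
        """Hash the full record so any later alteration is detectable."""
        payload = json.dumps(asdict(self), sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

record = QuantumEvidenceRecord(
    device_id="qpu-lab-07",
    firmware_version="2.4.1",
    calibration_run="cal-2026-01-15T09:00Z",
    noise_parameters={"readout_error": 0.011, "t1_microseconds": 95.0},
    mean_result=-1.137,
    confidence_interval=(-1.141, -1.133),
    n_shots=500,
)
print(record.fingerprint()[:16])  # a stable, auditable digest
```

Note the division of labor: the measurement is probabilistic, but the record of it is classical and deterministic, which is what authentication under Rule 901(b)(9) ultimately requires.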
The law doesn’t need new virtues for quantum evidence; it needs old ones refined. Transparency, documentation, and replication remain the gold standard. What changes is the expectation of sameness. The goal is no longer perfect duplication, but faithful resonance: the trusted echo that still carries truth through uncertainty.
Metrics carry the truth through uncertainty.
🔹 Conclusion: The Sound of Evidence
The Nobel Committee rang the bell. Google’s engineers added instruments. Labs in Sydney and elsewhere wired new rooms together. The rest of us—lawyers, paralegals, judges, legal-techs, investigators—must learn how to listen for echoes without hearing ghosts. That means resisting hype, insisting on method, and updating our checklists to match what the devices actually do.
Eight months ago in Quantum Leap, I described a canyon where a single strike of an impossible calculation set the walls humming. This time, the sound came from Stockholm. If the next echo is from quantum evidence in your courtroom—perhaps as a motion in limine over non-identical logs—don’t panic. Listen for the rhythm beneath the noise. The law’s task is to hear the pattern, not silence the world.
Science, like law, advances by listening closely to what reality whispers back. The Nobel Committee just honored three physicists for demonstrating that quantum behavior can be engineered, measured, and replicated—its fingerprints recorded even when the phenomenon itself remains invisible. Their achievement marks a shift from theory to tested evidence, a shift the courts will soon confront as well.
When engineers speak of quantum advantage, they mean a moment when machines perform tasks that classical systems cannot. The legal system will have its own version: a time when quantum-derived outputs begin to appear in contracts, forensic analysis, and evidentiary records. The challenge will not be cosmic. It will be procedural. How do you test, authenticate, and trust results that vary within the bounds of physics itself?
The answer, as always, lies in method. Law does not require perfection; it requires transparency and proof of process. When the next Daubert hearing concerns a quantum model rather than a mass spectrometer, the same questions will apply: Was the procedure sound? Were the results reproducible within accepted error? Were the foundations laid? The physics may evolve, but the evidentiary logic remains timeless.
In the end, what matters is not whether the universe splits or probabilities collapse. What matters is whether we can recognize an honest echo when we hear one—and admit it into evidence.
It is only a matter of time before quantum generated evidence seeks admission to your world.
🔹 Postscript.
Minutes before this article was published Google announced an important new discovery called “Quantum ECHO.” Yes, same name as this article, written by Ralph Losey with no advance notice from Google of the discovery or name. A spooky entanglement, perhaps? Ralph will publish a sequel soon that spells out what Google has done now. In the meantime, here is Google’s announcement by Hartmut Neven and Vadim Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google, 10/22/25).
🔹 Subscribe and Learn More
If this exploration of Quantum Echoes and evidentiary method has sparked your curiosity, you can find much more at e-DiscoveryTeam.com — where I continue to write about artificial intelligence, quantum computing, evidence, e-discovery, and the future of law. Go there to subscribe and receive email notices of new blogs, upcoming courses, and special events — including an online course, with the working title ‘Quantum Law: From Entanglement to Evidence,’ that will expand on the ideas introduced here. It will discuss how quantum physics and AI converge in the practice of law, from authentication and reliability to discovery and expert testimony.
That program will be followed by two other, longer online courses that are also near completion:
‘Beginner “GPT-4 Level” Prompt Engineering for Legal Professionals,’ a practical foundation in AI literacy and applied reasoning.
‘Advanced “GPT-5 Level” Prompt Engineering for Legal Professionals,’ an in-depth study of prompt design, model evaluation, and AI ethics.
All courses are part of my continuing effort to help the legal profession adapt responsibly to the next wave of technology — with integrity, experience and whatever wisdom I may have accidentally gathered from a long life on Earth.
Ralph looking back on the many worlds of technology he has been in. What a long, strange trip it’s been.
Subscribe at e-DiscoveryTeam.com for notices of new articles, course announcements, and research updates.
Because the future of law won’t be written by those who fear new tools, but by those who understand the evidence they produce.
Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.
Ralph Losey is an AI researcher, writer, tech-law expert, and former lawyer. He's also the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and creation of custom AI tools.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on AI, e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children and husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.