Ralph Losey, December 31, 2025
As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

We are learning how to merge with AI while remaining in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal education and skeptical habits of mind. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they bring.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That approach puts human legal minds in control and stops the hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.
We are still way smarter than they are and can keep our own agency and control. But for how long? AI abilities are improving quickly, but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.
I. The Quantum Prelude: Listening for Echoes in the Multiverse
We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).
The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).
For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.
But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.
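One way to see the urgency is Michele Mosca's simple risk inequality from quantum-safe cryptography: if the years your client data must stay confidential, plus the years a migration to post-quantum encryption takes, exceed the years until a cryptographically relevant quantum computer arrives, then interceptors harvesting ciphertext today will eventually read it. A minimal sketch with hypothetical numbers:

```python
def mosca_risk(shelf_life_years: float,
               migration_years: float,
               years_to_quantum: float) -> bool:
    """Mosca's inequality: if the time data must stay confidential plus
    the time needed to migrate to post-quantum encryption exceeds the
    time until a capable quantum computer arrives, then data harvested
    now will be decrypted later."""
    return shelf_life_years + migration_years > years_to_quantum

# Hypothetical numbers: client files sensitive for 15 years, a 5-year
# firm-wide migration, and a quantum computer assumed to be 12 years out.
print(mosca_risk(15, 5, 12))  # True: the secrets outlive the encryption
```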
We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
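To make ‘faithful resonance’ concrete in software terms, here is a minimal sketch of how a replication hearing’s tolerance test might work. Everything in it (the function name, the normalized scores, the five percent bandwidth) is an illustrative assumption of mine, not an established forensic standard.

```python
from statistics import mean

def faithful_resonance(original_score: float,
                       replication_scores: list[float],
                       tolerance: float = 0.05) -> bool:
    """Hypothetical replication-hearing test: rather than demand a
    bit-identical match, ask whether fresh re-runs of the same analysis
    land within an agreed bandwidth of the challenged result."""
    echo = mean(replication_scores)
    return abs(echo - original_score) <= tolerance

# Example: a challenged AI result normalized to 0.72, re-run five
# times by a neutral expert under the same stipulated conditions.
reruns = [0.70, 0.74, 0.71, 0.73, 0.69]
print(faithful_resonance(0.72, reruns))  # True: the echo rings true
```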

II. China Awakens and Kick-Starts Transparency
While the quantum future dangers gestated, the AI industry suffered a massive geopolitical shock in late January 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss (Jan. 2025). The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.
My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

III. Saving Truth from the Memory Hole
Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.
My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

Despite my optimism, I remained troubled in 2025 about our geopolitical situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.
IV. Bench Testing the AI Models of the First Half of 2025
I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. I wrote seven articles in 2025 on benchmark-type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to have been part of the AIs’ training data. I called this my Bar Battle of the Bots series, listed here in sequential order:
- Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
- Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
- Bar Battle of the Bots – Part One. February 26, 2025
- Bar Battle of the Bots – Part Two. March 5, 2025
- New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o. March 13, 2025
- Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
- Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.

The series concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning, the two major contenders still seem to be Gemini and ChatGPT.
Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change someday, but we have not seen it yet.

V. The Shift to Agency: From Prompters to Partners
If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.
Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.
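The orchestration pattern behind such a panel is simple enough to sketch in code. What follows is a minimal illustration of the structure only, not the actual Panel of Experts tool; the persona prompts and the ask_model stand-in are hypothetical placeholders for whatever chat API is used.

```python
def ask_model(system_prompt: str, question: str) -> str:
    """Hypothetical stand-in for a call to any chat-completion API."""
    raise NotImplementedError("wire up your preferred model here")

PANEL = {
    "Litigator":  "You are a seasoned trial lawyer. Argue the strongest case.",
    "Ethicist":   "You are a legal-ethics specialist. Flag professional-conduct issues.",
    "Contrarian": ("You are the devil's advocate. Attack every conclusion the "
                   "other panelists reach; your job is structural dissent."),
}

def panel_of_experts(question: str) -> dict[str, str]:
    # Each persona answers independently; the mandatory Contrarian then
    # critiques the whole set, countering the models' sycophantic drift.
    answers = {role: ask_model(prompt, question) for role, prompt in PANEL.items()}
    answers["Contrarian rebuttal"] = ask_model(
        PANEL["Contrarian"], f"Critique these answers:\n{answers}")
    return answers
```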
By the end of 2025 we were already moving from AI adoption to close entanglement of AI in our everyday lives.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer’—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.
But agency operates at a macro level, too. In June, I analyzed the then-hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.
As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.
While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.
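ALAP is a proposal, not shipped software, but its core idea of tamper-evident AI logs can be sketched with standard cryptographic primitives. The hash-chained log below is a minimal illustration built on my own assumptions; a real protocol would add digital signatures, trusted timestamps, and secure storage.

```python
import hashlib, json, time

def append_log_entry(log: list[dict], prompt: str, output: str) -> list[dict]:
    """Append a tamper-evident record: each entry hashes its predecessor,
    so any later edit breaks the chain and is detectable at a
    'replication hearing.'"""
    prev_hash = log[-1]["hash"] if log else "GENESIS"
    record = {"ts": time.time(), "prompt": prompt, "output": output,
              "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    log.append(record)
    return log

def chain_intact(log: list[dict]) -> bool:
    """Verify every hash and every back-link in the chain."""
    for i, rec in enumerate(log):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        if i and rec["prev"] != log[i - 1]["hash"]:
            return False
    return True
```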
VI. The New Geometry of Justice: Topology and Archetypes
To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.
But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.
I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.
This epiphany led to an investigation of using advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
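For readers who want to see the mechanics, the reachability test at the heart of that analysis can be sketched with an ordinary directed graph. The node names below are hypothetical stand-ins, and the networkx library is my choice for illustration; the point is simply that a party with no directed path to the harm is, mathematically, outside the causal chain.

```python
import networkx as nx  # pip install networkx

# Hypothetical causal map of a multi-party system failure.
G = nx.DiGraph()
G.add_edges_from([
    ("sensor_vendor", "control_signal"),
    ("software_update", "control_signal"),
    ("control_signal", "crash"),
    # The manufacturer's product sits downstream of the merge point
    # and feeds no edge into the chain that produced the crash.
    ("control_signal", "manufacturer_product"),
])

def causally_connected(G: nx.DiGraph, party: str, harm: str) -> bool:
    """A party is exonerated if no directed path leads from it to the harm."""
    return nx.has_path(G, party, harm)

print(causally_connected(G, "sensor_vendor", "crash"))         # True
print(causally_connected(G, "manufacturer_product", "crash"))  # False: exonerated
```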

VII. The Human Edge: The Hybrid Mandate
Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.
This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
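As a thought experiment, the protocol’s letters can even be written down as a checklist in code. The sketch below is purely illustrative; the field names and the ready_to_file gate are my own assumptions, not a prescribed implementation of the standard.

```python
from dataclasses import dataclass, field

@dataclass
class HybridWorkflow:
    """Illustrative checklist tracking the H-Y-B-R-I-D mnemonic.
    Enforcement details are left to each firm's own policy."""
    human_in_charge: bool = True                             # H: a named lawyer owns the output
    yielded_steps: list[str] = field(default_factory=list)   # Y: programmable steps delegated to AI
    boundaries: list[str] = field(default_factory=list)      # B: usage limits (e.g., no client PII)
    reviewed_with_provenance: bool = False                   # R: every cite traced to its source
    instrumented: bool = False                               # I: prompts and outputs logged
    disclosed: bool = False                                  # D: AI use disclosed where required

    def ready_to_file(self) -> bool:
        # Nothing leaves the office until the non-negotiable gates pass.
        return all([self.human_in_charge, self.reviewed_with_provenance,
                    self.instrumented, self.disclosed])
```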
My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.
AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This claim of a human edge is not based on sentimental platitudes; it is a measurable performance metric.
I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.
We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.
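In software terms, the cross-examination is a verification gate: nothing an AI drafts is promoted from ‘consulting’ to ‘testifying’ until its factual anchors check out. The sketch below is hypothetical; the simplified citation regex and the trusted_reporter set stand in for whatever authoritative citation database a firm actually uses.

```python
import re

def cross_examine(draft: str, trusted_reporter: set[str]) -> list[str]:
    """Flag every citation in an AI draft that is absent from a trusted
    source; flagged cites must be pulled and read by the supervising
    lawyer before the draft ever 'takes the stand.'"""
    cites = re.findall(r"\d+ [A-Z][\w.]+ \d+", draft)  # simplified cite pattern
    return [c for c in cites if c not in trusted_reporter]

draft = "As held in 410 U.S. 113 and 999 F.4th 123, the rule applies."
print(cross_examine(draft, {"410 U.S. 113"}))  # ['999 F.4th 123'] -> verify or cut
```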

VIII. Conclusion: Guardians of the Entangled Era
As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.
The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.
We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.
Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Ralph Losey Copyright 2025 — All Rights Reserved