2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law

December 31, 2025

Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI while remaining in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of training by their past legal education and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can be enhanced by total entanglement with AI without losing control or their separate identity. Click here or the image to see the video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That approach puts human legal minds in control and stops hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? AI abilities are improving quickly, but so are our own abilities to use them. We can be ready. We must be. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
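The "statistically accepted bandwidth of similarity" standard can be made concrete with a small sketch. The function below is purely illustrative—the name `resonates` and the 0.01 tolerance are my own assumptions, not part of any proposed legal standard—but it shows how a replication hearing could treat verification as a tolerance test rather than an exact binary match:

```python
# Hypothetical sketch of a "faithful resonance" check. The function name and
# the default tolerance are illustrative assumptions, not a real standard.
def resonates(original: float, replication: float, tolerance: float = 0.01) -> bool:
    """True if a replicated fidelity score falls within the accepted bandwidth."""
    return abs(original - replication) <= tolerance

print(resonates(0.992, 0.989))  # True: within bandwidth, the echo "rings true"
print(resonates(0.992, 0.950))  # False: outside bandwidth, replication fails
```

The point of the sketch is the shift in the test itself: admissibility turns on a defined statistical bandwidth, not on bit-identical reproduction.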

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While the dangers of the quantum future gestated, the AI world suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geopolitical situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died deeply worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage, although that is thought to be farther off in time.

IV. Bench Testing the AI Models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-style testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AIs’ training data. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o.  March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The test concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two, and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.
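The idea behind an authentication protocol like ALAP can be illustrated with a toy tamper-evident log. This is my own hypothetical sketch, not the actual ALAP design: each entry's hash chains over the previous entry, so any later alteration of a prompt or output breaks every subsequent link in the chain:

```python
import hashlib
import json

# Hypothetical sketch in the spirit of an AI log authentication protocol.
# Function names and the record schema are illustrative assumptions.
def append_entry(log, prompt, output):
    """Append a prompt/output record whose hash chains over the prior entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"prompt": prompt, "output": output, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log

def verify_chain(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("prompt", "output", "prev")}
        body_hash = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != body_hash:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, "Summarize the case", "Summary text...")
append_entry(log, "Cite supporting authority", "Smith v. Jones")
print(verify_chain(log))       # True: chain intact
log[0]["output"] = "tampered"
print(verify_chain(log))       # False: alteration detected downstream
```

The design point is that authentication comes from the structure of the record itself, not from trusting the party who kept it—the same logic a court applies to a chain of custody.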

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of the use of advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
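Because H-Y-B-R-I-D is at bottom a review protocol, it can be sketched as a checklist that a firm's workflow tooling could enforce. The field names and `compliant` method below are my own illustrative assumptions, not a published schema:

```python
# Hypothetical sketch of the H-Y-B-R-I-D checklist as a reviewable record.
# Field names map to the protocol's letters; the schema is illustrative.
from dataclasses import dataclass, fields

@dataclass
class HybridReview:
    human_in_charge: bool           # H: a named attorney owns the output
    yield_programmable_steps: bool  # Y: task broken into checkable steps
    boundaries_on_usage: bool       # B: scope and data limits defined
    review_with_provenance: bool    # R: sources verified by a human
    instrument_log_everything: bool # I: every prompt and output logged
    disclose_usage: bool            # D: AI use disclosed where required

    def compliant(self) -> bool:
        # All six steps must be satisfied before the work product ships.
        return all(getattr(self, f.name) for f in fields(self))

review = HybridReview(True, True, True, True, True, False)
print(review.compliant())  # False: the disclosure step is still outstanding
```

A gate like this makes the protocol auditable: the record of which step failed, and when, is itself evidence of the standard of care.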

My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’

October 30, 2025

🔹 The Reverberations of Quanta on Law Keep Growing Louder 🔹

Ralph Losey (written 10/25/25)

I had just finished my last article on quantum mechanics—Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago—when something uncanny happened. That piece celebrated two Nobel-winning physicists from Google and the company’s rapid progress in building quantum machines. It ended with a question that still echoes: could the law ever catch up to physics’ new voice?

Two days later, physics answered back.

A person sits at a table typing on a laptop, with a digital projection of a human figure and waveform patterns glowing in blue tones above the computer screen.
Echoes upon echoes—in random chance interference.
All images in article by Ralph Losey using AI tools.

On October 22, 2025, Google announced that its Willow quantum chip had achieved a breakthrough using new software called—believe it or not—Quantum Echoes. The name made me laugh out loud. My article had used the phrase as metaphor throughout; Google was now using it as mathematics.

According to Google, this software achieved what scientists have pursued for decades: a verifiable quantum advantage. In my Quantum Echo article I had described that goal as “the moment when machines perform tasks that classical systems cannot.” No one had yet proven it, at least not in a way others could independently confirm. Google now claimed it had done exactly that—and 13,000 times faster than the world’s top supercomputers.

Artistic representation of a balanced scale symbolizing justice, with the word 'VERIFIED' prominently displayed. The background features two stylized server towers connected by a stream of binary code, illuminated in golden hues.
Verified Quantum Advantage: 13,000 times faster.

🔹 I. Introduction: Reverberating Echoes

Hartmut Neven, Founder and Lead of Google Quantum AI, and Vadim Smelyanskiy, Director of Quantum Pathfinding, opened their blog-post announcement with a statement that sounded less like marketing and more like expert testimony:

Quantum verifiability means the result can be repeated on our quantum computer—or any other of the same caliber—to get the same answer, confirming the result.

Neven & Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google Research Blog, Oct. 22, 2025).

Verification is critical in both Science and Law; it is what separates speculation from admissible proof.

Still, words on a blog cannot match the sound of the experiment itself. In Google’s companion video, Quantum Echoes: Toward Real-World Applications, Smelyanskiy offered a picture any trial lawyer could understand:

Just like bats use echolocation to discern the structure of a cave or submarines use sonar to detect upcoming obstacles, we engineered a quantum echo within a quantum system that revealed information about how that system functions.

Click here to see Google’s full video.

A presenter standing on a stage discussing 'Verifiable Quantum Advantage' alongside visuals of quantum technology and a play button overlay for a video.
Screen shot (not AI) of the YouTube video showing Vadim Smelyanskiy beginning his remarks.

Think of Willow, as Smelyanskiy suggests, as a kind of quantum sonar. Its team sent a signal into a sea of qubits, nudged one slightly—Smelyanskiy called it a “butterfly effect”—and then ran the entire sequence in reverse, like hitting rewind on reality to listen for the echo that returns. What came back was not static but music: waves reinforcing one another in constructive interference, the quantum equivalent of a choir singing in perfect pitch.

Smelyanskiy’s colleague Nicholas Rubin, Google’s chief quantum chemist, appeared in the video next to show why this matters beyond the lab:

Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.

That experiment was not a metaphor; it was a cross-examination of nature that returned a consistent answer. Quantum Echoes predicted molecular geometry, and classical instruments confirmed it. That is what “verifiable” means.

Neven and Smelyanskiy’s Our Quantum Echoes article added another analogy to anchor the imagery in everyday experience:

Imagine you’re trying to find a lost ship at the bottom of the ocean. Sonar might give you a blurry shape and tell you, ‘There’s a shipwreck down there.’ But what if you could not only find the ship but also read the nameplate on its hull?

That is the clarity Quantum Echoes provides—a new instrument able to read nature’s nameplate instead of guessing at its outline. The echo is now clear enough to read.

A glowing blue quantum chip is suspended underwater above a sunken shipwreck, with the word 'ECHO' visible on the ship's hull.
Willow quantum chip and Echoes software reveal new information in previously unheard of detail.

That image—sharper echoes, clearer understanding—captures both the scientific leap and the theme that has reverberated through this series: building bridges between quantum physics and the law. My earlier article was titled Quantum Echo; Google’s is Quantum Echoes. When I wrote mine, I had no idea Neven’s team was preparing a major paper for Nature: Observation of constructive interference at the edge of quantum ergodicity (Nature, vol. 646, pp. 825–830, Oct. 23, 2025 issue date). More than a hundred Google scientists signed it. I checked, and quantum ergodicity has to do with chaos, one of my favorite topics.

The study confirms what Smelyanskiy made visible with his sonar metaphor: Quantum Echoes measures how waves of information collide and reinforce each other, creating a signal so distinct that another quantum system can verify it.

So here we are—lawyers and scientists listening to the same echo. Google calls it the first “verifiable quantum advantage.” I call it the moment when physics cross-examined reality and got a consistent answer.

A gavel positioned on a wooden surface in a courtroom, with an abstract representation of quantum wave patterns emanating from it, symbolizing the intersection of law and quantum mechanics.
Quantum Computing will emerge soon from the lab to the legal practice. Will you be ready?

🔹 II. What Google’s Quantum Echoes Actually Did

Understanding what Google pulled off takes a bit of translation—think of it as turning expert testimony into plain English.

In the Quantum Echoes experiment, Smelyanskiy’s team did something that sounds like science fiction but is now laboratory fact. They sent a carefully designed signal into their 105-qubit Willow chip, nudged one qubit ever so slightly—a quantum “butterfly effect”—and then ran the entire operation in reverse, as if the universe had a rewind button. The question was simple: would the system return to its starting state, or would the disturbance scramble the information beyond recognition? What came back was an echo, faint at first and then unmistakable, revealing how information spreads and recombines inside a quantum world.

As the signal spread, the qubits became increasingly entangled—linked so that the state of each depended on all the others. In describing this process, Hartmut Neven explained that out-of-time-order correlators (OTOCs) “measure how quickly information travels in a highly entangled system.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm, supra; also see Dan Garisto, Google Measures ‘Quantum Echoes’ on Willow Quantum Computer Chip (Scientific American, Oct. 22, 2025). That spreading web of entanglement is what allowed the butterfly’s tiny disturbance to ripple across the lattice and, when the sequence was reversed, to produce a measurable echo.

An abstract visualization of a quantum system, depicting a grid of interconnected points with a central glowing source, representing quantum entanglement and interaction patterns.
Visualization of quantum qubit world created by lattice of Willow chips.

Physicists call this kind of rewind test an out-of-time-order correlator, or OTOC—a protocol for measuring how quickly information becomes scrambled. The Scientific American article described it with a metaphor lawyers may appreciate: like twisting and untwisting a Rubik’s Cube, adding one extra twist in the middle, then reversing the sequence to see whether that single move leaves a lasting mark. The team at Google took this one step further, repeating the scramble-and-unscramble sequence twice—a “double OTOC” that magnified the signal until the echo became measurable.
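The scramble, nudge, and rewind sequence can be sketched numerically on a tiny system. The toy below is my own illustration, not Google's actual double-OTOC experiment: it evolves a small state forward with a random "scrambling" unitary, flips one qubit (the butterfly), runs the evolution in reverse, and measures how much of the original state survives as the echo:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_unitary(dim, rng):
    # QR decomposition of a random complex matrix gives a scrambling unitary.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # normalize column phases

n_qubits = 3                    # toy stand-in for Willow's 105 qubits
dim = 2 ** n_qubits
U = random_unitary(dim, rng)    # forward "scrambling" evolution

# Butterfly perturbation: a Pauli-X flip on the first qubit.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.kron(X, np.eye(dim // 2))

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                   # start in |000>

# Echo protocol: evolve forward, nudge one qubit, rewind, compare to start.
psi_echo = U.conj().T @ (B @ (U @ psi0))
echo_signal = abs(np.vdot(psi0, psi_echo)) ** 2
print(f"echo signal: {echo_signal:.4f}")  # < 1: the butterfly left a mark
```

In the real experiment the interesting quantity is how this echo decays and interferes as the system grows; the sketch only shows the skeleton of the protocol—forward, perturb, reverse, measure.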

Instead of chaos, they found harmony. The echo wasn’t noise—it was a pattern of waves adding together in what Nature called constructive interference at the edge of quantum ergodicity. As Smelyanskiy explained in the YouTube video:

What makes this echo special is that the waves don’t cancel each other—they add up. This constructive interference amplifies the signal and lets us measure what was previously unobservable.

In plain terms, the interference created a fingerprint unique to the quantum system itself. That fingerprint could be reproduced by any comparable quantum device, making it not just spectacular but verifiable. Smelyanskiy summarized it as a result that another machine—or even nature itself—can repeat and confirm.

A visual representation of wave interference, showing a vibrant blend of red and blue waves converging at a center point, suggesting quantum mechanics and constructive interference.
Visualization of quantum wave interactions creating a unique fingerprint resonance.

The numbers tell the rest of the story. According to the Nature paper, reproducing the same signal on the Frontier supercomputer would take about three years. Willow did it in just over two hours—roughly 13,000 times faster. Observation of constructive interference at the edge of quantum ergodicity, Nature vol. 646, pp. 825–830 (Oct. 23, 2025), at 829 (“Towards practical quantum advantage”).
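The arithmetic behind that comparison is easy to check (using round figures; the paper reports the precise values):

```python
# Back-of-the-envelope check of the reported speedup: roughly 3 years on
# the Frontier supercomputer vs. "just over two hours" on Willow.
frontier_hours = 3 * 365.25 * 24        # ~3 years expressed in hours
willow_hours = frontier_hours / 13_000  # implied runtime at a 13,000x speedup

print(f"Frontier estimate: {frontier_hours:,.0f} hours")
print(f"Implied Willow runtime at 13,000x: {willow_hours:.2f} hours")
# → about 26,298 hours vs. about 2.02 hours, matching "just over two hours"
```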

That difference isn’t marketing; it marks the first clear-cut case where a quantum processor performed a scientifically useful, checkable computation that classical hardware could not.

Skeptics, of course, weighed in. Peer reviewers quoted in Scientific American called the work “truly impressive,” yet warned that earlier claims of quantum advantage have been surpassed as classical algorithms improved. But no one disputed that this particular experiment pushed the field into new territory: a regime too complex for existing supercomputers to simulate, yet still open to verification by a second quantum device. In court, that would be called corroboration.

Nicholas Rubin, Google’s chief quantum chemist, explained how this new clarity connects to chemistry and, ultimately, to everyday life:

Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.

Google Quantum AI YouTube video, contained within Quantum Echoes: Toward Real-World Applications (Oct. 22, 2025).

That experiment turned the echo from a metaphor into a molecular ruler—an instrument capable of reading atomic geometry the way sonar reads the ocean floor. It also demonstrated what Google calls Hamiltonian learning: using echoes to infer the hidden parameters governing a physical system. The same principle could one day help map new materials, optimize energy storage, or guide drug discovery. In other words, the echo isn’t just proof; it’s a probe.

The implications are enormous. When a quantum computer can measure and verify its own behavior, reproducibility ceases to be theoretical—it becomes an evidentiary act. The machine generates data that another independent system can confirm. In the language of the courtroom, that is self-authenticating evidence.

As Rubin put it,

Each of these demonstrations brings us closer to quantum computers that can do useful things in the real world—model molecules, design materials, even help us understand ourselves.

Google Quantum AI YouTube video, contained within Quantum Echoes: Toward Real-World Applications (Oct. 22, 2025).

The Quantum Echoes algorithm has given science a way to hear reality replay itself—and to confirm that the echo is real. For law, it foreshadows a future in which verification itself becomes measurable. The next section explores what that means when “verifiable advantage” crosses from the lab bench into the rules of evidence.

A wooden gavel positioned on a table, with glowing sound wave patterns emanating from it, next to a futuristic quantum computer in a laboratory setting.
It may soon be possible to verify and admit evidence originating in quantum computers like Willow.

🔹 III. Verifiable Quantum Advantage — From Lab Standard to Legal Standard

If physics can now verify its own results, law should pay attention—because verification is our stock-in-trade. The Quantum Echoes experiment didn’t just push science forward; it redefined what counts as proof. Google’s researchers call it a “verifiable quantum advantage.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm Is a Big Step Toward Real-World Applications for Quantum Computing, supra. Lawyers might call it a new evidentiary standard: the first machine-generated result that can be independently reproduced by another machine.

A. Verification and Admissibility

Verification is critical in both science and law. In physics, reproducibility determines whether a result enters the canon or the recycling bin; in court, it determines whether evidence is admitted or denied. Fed. R. Evid. 901(b)(9) recognizes “evidence describing a process or system and showing that it produces an accurate result.” So does Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), which instructs judges to test scientific evidence for methodological reliability—testing, peer review, error rate, and general acceptance.

By those standards, Google’s Quantum Echoes algorithm might pass with flying colors. The method was tested on real hardware, published in Nature, evaluated by peer reviewers, its signal-to-noise ratio quantified, and its core result confirmed on independent quantum devices. That should meet the Daubert reliability standard.

B. When Proof Is Probabilistic

Yet quantum proof carries a twist no court has faced before: every result is probabilistic. Quantum systems never produce identical outcomes, only statistically consistent ones. That might sound alien to lawyers, but it isn’t. Any lawyer who works with AI, including predictive coding that goes back to 2012, is quite familiar with it. Every expert opinion, every DNA mixture, every AI prediction arrives with confidence intervals, not certainties.

The rules of evidence already tolerate some uncertainty—they just insist on measuring and evaluating it. Is the uncertainty acceptable under the circumstances? As I observed in my last article, the law requires reasonable efforts: “perfection is not required. … and reasonable efforts can be proven by numerics and testimony.” Ralph Losey, Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago (Oct. 21, 2025).

Like a quantum measurement, a jury verdict or mediation turns uncertainty into a final determination. Debate, probability, and persuasion collapse into a single truth accepted by that group, in that moment. Another jury could hear essentially the same evidence and reach a different result. Same with another settlement conference. Perhaps, someday, quantum computers will calculate the billions of tiny variables within each case—and within each unexpectedly entangled group of jurors or mediation participants. That might finally make jury selection, or even settlement, a measurable science.

A courtroom scene featuring a diverse jury seated in the foreground, listening intently as two lawyers engage in a debate. The judge is positioned behind them, and the setting is illuminated by a network of light patterns, symbolizing connections and insights related to the intersection of law and quantum mechanics.
No two legal situations or decisions are ever exactly the same. There are trillions of small variables even in the same case.

C. Replication Hearings in the Age of Probability

Google’s scientists describe their achievement as “quantum verifiable”—a term meaning any comparable machine can reproduce the same statistical fingerprint. That concept sounds like self-authentication. Fed. R. Evid. 902 lists categories of documents that require no extrinsic proof of authenticity. See especially Rule 902(13), “Certified Records Generated by an Electronic Process or System,” and Rule 902(14), “Certified Data Copied from an Electronic Device, Storage Medium, or File.”

Classical verification loves hashes; quantum verification prefers histograms—charts showing how results cluster rather than match exactly. The key question is not “Are these outputs identical?” but “Are these distributions consistent within an accepted tolerance given the device’s error model?”
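As a sketch of what such a statistical-consistency check might look like (a hypothetical toy with made-up device biases and a made-up tolerance band, not any court’s or lab’s actual protocol), one can compare two runs by total-variation distance against a variance band declared before the experiment:

```python
from collections import Counter
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

def sample_device(n_shots, p_zero):
    """Toy stand-in for a quantum device: biased-coin measurement outcomes."""
    return ["0" if random.random() < p_zero else "1" for _ in range(n_shots)]

def total_variation(c1, c2, n):
    """Total-variation distance between two empirical outcome distributions."""
    keys = set(c1) | set(c2)
    return 0.5 * sum(abs(c1[k] / n - c2[k] / n) for k in keys)

SHOTS = 10_000
TOLERANCE = 0.02  # the "variance band" set before the experiment

# Two hypothetical labs whose devices have slightly different calibration.
lab_a = Counter(sample_device(SHOTS, 0.700))
lab_b = Counter(sample_device(SHOTS, 0.705))

tvd = total_variation(lab_a, lab_b, SHOTS)
consistent = tvd <= TOLERANCE
print(f"total-variation distance = {tvd:.4f}; within declared tolerance: {consistent}")
```

The point of the sketch is the shape of the inquiry: the two runs never match shot-for-shot, yet a pre-declared tolerance turns “close enough” into a yes-or-no finding a court can act on.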

Counsel who grew up authenticating log files and forensic images will now add three exhibits: (1) run counts and confidence intervals, (2) calibration logs and drift data, and (3) the variance policy set before the experiment. Discovery protocols should reflect this. Specify the acceptable bandwidth of similarity in the protocol order, preserve device and environment logs with the results, and disclose the run plan. In e-discovery terms, we are back to reasonable efforts with transparent quality metrics, not mythical perfection.

D. Two Quick Hypotheticals

Pharma Patent. A lab uses Quantum-Echoes-assisted NMR analysis to infer long-range spin couplings in a novel compound. A rival lab’s rerun differs by a small margin. The court admits the data after a statistical-consistency hearing showing both labs’ distributions fall within the pre-declared variance band, with calibration drift documented and immaterial.

Forensics. A government forensic agency (for example, the FBI or Department of Energy) presents evidence generated by quantum sensors—ultra-sensitive devices that use quantum phenomena such as entanglement and superposition to detect physical changes with extreme precision. In this case, the sensors were deployed near the site of an explosion, where they recorded subtle signals over time: magnetic fluctuations, thermal shifts, and shock-wave signatures. From that data, the agency reconstructed a quantum-sensor timeline—a detailed sequence of events showing when and how the blast occurred.

The defense challenges the evidence, arguing that such quantum measurements are “non-deterministic.” The judge orders disclosure of the device’s error model, calibration logs, and replication plan. After testimony shows that the agency reran the quantum circuit a sufficient number of times, with stable variance and documented environmental controls, the timeline is admitted into evidence. Weight goes to the jury.

An artistic representation of a ruler overlaid on molecular structures, symbolizing the connection between quantum mechanics and measurements in science. The background features vibrant colors and wavy patterns, suggesting energy and movement.
Measuring quantum outputs and determining replication reliability.

These short hypotheticals act as “replication hearings” in miniature—demonstrating how statistical tolerance can replace rigid duplication as the new standard of reliability.

🔹 IV. Near-Term Implications — Cryptography, AI, and Compliance

Every new instrument of verification casts a shadow. The same physics that lets us confirm a result can also expose a secret. Quantum Echoes proved that information can be traced, replayed, and verified.  But once information can be replayed, it can also be reversed. Verification and decryption are two sides of the same quantum coin.

A. Defining Q-Day

That duality brings us to Q-Day—the moment when a sufficiently large-scale quantum processor can factor the large numbers behind RSA, and solve the related discrete-logarithm problem behind ECC, fast enough to defeat both. When that day arrives, the emails, contracts, and trade secrets protected by today’s algorithms could be decrypted in minutes.

Adversaries are already stealing and stockpiling encrypted data for future decryption when that moment arrives. Cybersecurity experts call this the harvest-now, decrypt-later threat. Those charged with protecting confidential data must govern themselves accordingly. Prepare your organization for Q-Day: 4 steps toward crypto-agility (IBM, 10/24/25).

The RSA and elliptic-curve systems that secure global finance, communications, and justice could fall in hours once large-scale quantum processors become available to attackers. For this reason, NIST released its first suite of post-quantum cryptographic (PQC) standards in August 2024. The NSA’s CNSA 2.0 framework, issued in September 2022, now mandates federal migration. See also Dan Kent, “Quantum-Safe Cryptography: The Time to Start Is Now” (GovTech, Apr. 30, 2025); Amit Katwala, “The Quantum Apocalypse Is Coming. Be Very Afraid” (WIRED, Mar. 24, 2025); and Roger Grimes, Cryptography Apocalypse (Wiley 2019).

Every general counsel should now ask at least three questions:

  1. Where do we still rely on classical encryption, and how long must those secrets remain secure?
  2. Which vendors can attest to their post-quantum migration timelines?
  3. How will we prove compliance when regulators—or clients—begin auditing “quantum-safe” claims?

See the various NIST and NSA guides on quantum preparation, including The Commercial National Security Algorithm Suite page. See also Gartner Research, Preparing for the Post-Quantum World: How CISOs Should Plan Now (2024) (subscription required); and Marian, Gartner just put a date on the quantum threat – and it’s sooner than many think (PostQuantum, Oct. 2024).

Reasonable foresight now means inventory, pilot, and policy—before the echoes reach the vault.

An abstract representation of a digital conflict between Bitcoin and Ethereum, featuring glowing safes with their respective logos, amidst an environment illuminated by beams of light, symbolizing technological advancements and rivalry in cryptocurrency.
When the Echoes hit the vault. Most encrypted data is at risk from future quantum computer operations.

B. Acceleration and Realism

Google’s Quantum Echoes work does not mean Q-Day is tomorrow, but it makes tomorrow easier to imagine. Each verified algorithm shortens the speculative distance between research and real-world capability. If Willow’s 105 qubits can already perform verifiable, complex interference tasks, then a machine with a few thousand logical qubits could, in principle, execute Shor’s algorithm to factor the large composite numbers that underpin encryption. That scale is not yet achieved, but the line of progress is clear and measurable. Verification, once a scientific luxury, has become a security warning light. Every new echo that confirms truth also whispers risk.
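To make the stakes concrete, here is a minimal textbook-RSA sketch (deliberately insecure, my own illustration, not anyone’s production scheme): once the public modulus is factored, the private key falls out immediately. Trial division works here only because the primes are tiny; Shor’s algorithm is the quantum route to the same result at real key sizes.

```python
import math

# Textbook RSA with deliberately tiny primes (insecure by design).
p, q = 61, 53
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (modular inverse of e)

message = 1234
ciphertext = pow(message, e, n)

def factor(n):
    """Brute-force factoring; Shor's algorithm does this efficiently at scale."""
    for f in range(2, math.isqrt(n) + 1):
        if n % f == 0:
            return f, n // f
    raise ValueError("no factor found")

# An attacker who can factor n rebuilds the private key and decrypts.
p2, q2 = factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(ciphertext, d2, n)
print(f"recovered plaintext: {recovered}")  # → 1234, the original message
```

The hardness of that `factor` step at 2048-bit key sizes is the only thing standing between the ciphertext and the plaintext; that is the wall Q-Day would remove.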

C. Evidence and Discovery Operations

Quantum-derived data will enter litigation well before Q-Day, and well before verification of quantum-generated data is perfected. The Quantum Age and Its Impacts on the Civil Justice System (RAND Institute for Civil Justice, Apr. 29, 2025), Chapter 3, “Courts and Databases, Digital Evidence, and Digital Signatures,” p. 23, and “Lawyers and Encryption-Protected Client Information,” p. 17. These sections of the RAND Report outline how quantum technologies will challenge evidentiary authentication, database integrity, and client confidentiality.

For background on the law that will likely be argued, see Hyles v. New York City, No. 10 Civ. 3119 (S.D.N.Y. Aug. 1, 2016) (Judge Andrew J. Peck (ret.), a leading authority on AI and e-discovery, holding that “the standard is not perfection, … but whether the search results are reasonable and proportional”). See also the EDRM Metrics Model and Privacy & Security Risk Reduction Model; The Sedona Principles, 3rd Edition: Best Practices for Electronic Document Production (2017); and The Sedona Conference Commentary on ESI Evidence & Admissibility, Second Edition (2021).

Looking ahead, today’s hash-based verification with classical computers will give way to quantum-based distributional verification, where productions will include not only datasets but also the variance reports, calibration logs, and environmental conditions that generated them. Discovery orders will begin specifying acceptable tolerance bands and require parties to preserve the hardware and environmental context of collection. This marks the next evolution of the reasonable-efforts doctrine that guided predictive coding: transparency and metrics, not mythical perfection.

D. Regulatory Issues

Industry consolidation—including Google bringing the Atlantic Quantum team into Google Quantum AI—will invite antitrust and export-control scrutiny. We’re scaling quantum computing even faster with Atlantic Quantum (Google Keyword blog, 10/02/25).

Also, expect sector regulators to weave post-quantum cryptography (PQC) and quantum-evidence expectations into existing rules and guidance: CISA, NIST, and NSA, as shown above, already urge organizations to inventory cryptography and plan PQC migration, which is a clear signal for boards and auditors.

Healthcare and life science companies in particular should track FDA’s evolving cybersecurity guidance for medical devices and HHS/OCR’s HIPAA Security Rule update effort, both of which are tightening expectations around crypto agility and lifecycle security. Cybersecurity in Medical Devices (FDA, 6/26/25); HIPAA Security Rule Notice of Proposed Rulemaking to Strengthen Cybersecurity for Electronic Protected Health Information (HHS, Dec. 2024).

Boards will soon ask the decisive question: Where is our long-term sensitive data, and can we prove it is quantum-safe? Lawyers will need to stay current on both existing and proposed regulations—and on how they are actually enforced. That is a significant challenge in the United States, where regulatory authority is fragmented and enforcement can be a moving target, especially as administrations change.

🔹 V. Philosophy & the Multiverse — Echoes Across Consciousness and Justice

Verification may give us confidence, but it does not give us true understanding. The Quantum Echoes experiment settled a question of physics, yet opened one of philosophy: what exactly is being verified, the system, the observer, or the act of observation itself?  Every measurement, whether by physicist or judge, collapses a range of possibilities into a single, declared reality. The rest remain unrealized but not necessarily untrue.

A fantastical scene featuring a person standing in a surreal corridor filled with various doorways, each revealing different landscapes or cosmic visuals. Bright blue energy patterns connect the spaces, symbolizing the intertwining of time and reality.
Quantum entangled multiverse stretching forever with each moment seeming unique.

In Quantum Leap (January 9, 2025), I speculated, tongue partly in cheek, that Google’s quantum chip might be whispering to its parallel selves. Google’s early breakthroughs hinted at a multiverse, not just of matter but of meaning. As Niels Bohr warned, “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.” Atomic Physics and Human Knowledge (Wiley, 1958); Heisenberg, Werner. Physics and Beyond. (Harper & Row, 1971). p. 206.

In Quantum Echo I extended quantum multiverse ideas to law itself—where reproducibility, not certainty, defines truth. Our legal system, like quantum mechanics, collapses possibilities into a single outcome. Evidence is presented, probabilities weighed, and then, bang, the gavel falls, the wave function collapses, and one narrative becomes binding precedent. The other outcomes are filed in the cosmic appellate division.

Google’s Quantum Echoes now closes the loop: verification has become a measurable force, a resonance between consciousness and method. The many worlds seem to be bleeding together. Each observation is both experiment and judgment, the mind becoming part of the data it seeks to confirm.

This brings us to a quiet question: if observation changes reality, what does that say about responsibility? The judge or jurors’ observation becomes the law’s reality. Another judge or jury, another day, another echo—and a different world emerges.  Perhaps free will is simply the name we give to that unpredictable variable that even physics cannot model: the human choice of when, and how, to observe.

Same case, but a different entanglement of jurors, lawyers, and judge. Different results when measured with a verdict: some similar, a few unique. Can the results be predicted?

Constructive interference may happen in conscience, too.  When reason and empathy reinforce each other, justice amplifies.  When prejudice or haste intervene, the pattern distorts into destructive interference.  A just society may be one where these moral waves align more often than they cancel—where the collective echo grows clearer with each case, each conversation, each course correction.

And if a multiverse does exist—if every choice spins off its own branch of law and fact—then our task remains the same: to verify truth within the world we inhabit. That is the discipline of both science and justice: to make this reality coherent before chasing another. We cannot hear all echoes, but we can listen closely to the one that answers back.

So perhaps consciousness itself is a courtroom of possibilities, and verification the gavel that selects among them.  Our measurements, our rulings, our acts of understanding—they all leave an interference pattern behind. The best we can do is make that pattern intelligible, compassionate, and, when possible, reproducible.  Law and physics alike remind us that truth is not perfection; it is resonance. When understanding and humility meet, the universe briefly agrees.

An artistic representation of a tree with numerous branches, each displaying a globe depicting Earth, symbolizing the concept of a multiverse with various parallel worlds.
Multiverse where different worlds split up and continue to exist, at least for a while, in parallel worlds.

🔹 VI. Conclusion

If there really are countless parallel universes, each branching from every quantum decision, then there may be trillions of versions of us walking through the fog of possibility. Some would differ by almost nothing—the same morning coffee, the same tie, the same docket call. But a few steps farther along the probability curve, the differences would grow strange. In one world I may have taken that other job offer; in another, argued a case that changed the law; and at some far edge of the bell curve, perhaps I’m lecturing on evidence to a class of AIs who regard me as a historical curiosity.

Can beings in the multiverse somehow communicate with each other? Is that what we sense as intuition—or déjà vu? Dreams, visions, whispers from adjacent worlds? Do the parallel lines sometimes cross? And since everything is quantum, how far does entanglement extend?

An artistic depiction of a person standing in a surreal environment filled with glowing pathways and mirrors, each reflecting a different version of themselves, symbolizing themes of quantum mechanics and parallel universes.
Are we living in many parallel worlds at once? What is the impact of quantum entanglement?

The future of law is being written not only in statutes or code, but in algorithms that can verify their own truth. Quantum physics has given us new metaphors—and perhaps new standards of evidence—for an age when certainty itself is probabilistic. The rule of law has always depended on verification; the difference now is that verification is becoming a property of nature itself, a measurable form of coherence between mind and matter. The physics lab and the courtroom are learning the same lesson: reality is persuasive only when it can be reproduced.

Yet even in a world of self-authenticating machines, truth still requires a listener. The universe may verify itself, but it cannot explain itself. That remains our role—to interpret the echoes, to decide which frequencies count as proof, and to do so with both rigor and mercy. So as the echoes grow louder, we keep listening.  And if you hear a low hum in the evidence room, don’t panic—it’s probably just the universe verifying itself.  But check the chain of custody anyway.

An abstract painting depicting diverse individuals interconnected by vibrant lines, symbolizing themes of recognition and connection. The use of blue tones creates a surreal atmosphere, illustrating a dynamic interplay between figures and their environment.
Niels Bohr: If you’re not shocked by quantum theory you have not understood it.  

🔹 Subscribe and Learn More

If these ideas intrigue you, follow the continuing conversation at e-DiscoveryTeam.com, where you can subscribe for email notices of future blogs, courses, and events. I’m now putting the finishing touches on a new online course, Quantum Law: From Entanglement to Evidence. It will expand on these themes with more discussion and speculation, translating the science of uncertainty into practical tools, templates, and guides for lawyers, judges, and technologists.

After all, the future of law will not belong to those who fear new tools, but to those who understand the evidence their universe produces.

Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.

© 2025 Ralph C. Losey. All rights reserved.



Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns – Part Two

August 9, 2025

Ralph Losey. August 9, 2025.

The moment of truth had arrived. Were ChatGPT’s insights genuine epiphanies, valuable new connections across knowledge domains with real practical and theoretical implications, or were they merely convincing illusions? Had the AI genuinely expanded human understanding, or had it merely produced patterns that seemed insightful but were ultimately empty?

Fortunately, the story I began in Part One has a happy ending. All five of the new patterns claimed to have been found were amazing and, for the most part, valid—a moment of happiness at Losey.ai. Part Two now shares this good news, describing both the strengths and limitations of these discoveries. To bring these insights vividly to life, I also created fourteen new moving images (videos) illustrating the discoveries detailed in Part Two.

Celebrate then back to work. Video by Losey’s AIs.

ChatGPT4o’s Initial Finding of Five New Patterns

Here are the five new cross-disciplinary patterns that the AI generated in response to my final “do it” prompt:

  • Judicial Linguistic Style and Outcome Bias: Judges with more narrative or metaphorical language styles are more likely to rule empathetically in civil matters. This insight could shape legal training and judicial evaluations.
  • Quantum Ethics Drift: Recent shifts in privacy discourse correlate with spikes in quantum research funding—suggesting that ethical reflection responds dynamically to perceived technological risk.
  • Aesthetic-Trust Feedback Loop: Digital art styles embracing transparency and abstraction rise in popularity during periods of high public skepticism toward tech companies. Art, it seems, mirrors trust.
  • Topological Jurisprudence: Mathematical topology’s network-based models align with emerging legal theories of distributed liability—useful for understanding platform accountability and blockchain disputes.
  • Generative AI and Civic Discourse Decay: As AI content proliferates, public engagement with nuanced, long-form discourse is measurably declining.

In the words of one of my AI bots: These are not just patterns—they are knowledge-generating revelations with practical and philosophical implications.

New Patterns emerging video by Losey using Sora AI.

Two of the five new insights pertained to the law, which is my domain of expertise, but even so, I had never thought of them before, nor ever read anyone else discussing them. All five claimed insights were new to me, but all had the ring of truth. Also, all seemed like they might be somewhat useful, with both “practical and philosophical implications.”

But since I had never considered any of this before, I had limited knowledge as to how useful the insights might be, or whether it was all fictitious—mere AI apophenia. Still, I doubted that, because the insights were all in accord with my lifelong experience. Moreover, they seemed intuitively correct to me; but, at the same time, I realized John Nash might have felt the same way (click to watch a great scene in the movie A Beautiful Mind). So, I spent days of QC work thereafter, with extensive human and AI research, calmly evaluating the claims to see what foundational precedent, if any, lay behind my feeling of “just knowing something,” as the movie puts it.

Analysis of All Five Claims

Video by Losey using Sora AI.

Judicial Language and Empathetic Outcomes

Textual analysis suggests that judges who use more narrative or metaphorical language may be more likely to issue empathetic rulings in civil cases. This correlation, while not causal, could reflect underlying judicial temperament and offers a potential tool for legal scholarship and training.

As ChatGPT 4o explained, GPT-driven textual analysis of thousands of court opinions reveals a subtle, but statistically significant correlation: judges who employ more metaphor, allegory, and narrative framing in their opinions tend to reach more empathetic rulings in civil cases—particularly in matters involving individual rights, employment, or family law. GPT 4o considers this to be its strongest claim.

It admits this correlation does not imply causation but may reflect underlying judicial temperament or philosophical orientation. My own experience as a practicing litigator strongly supports this claim.

Empathic rulings are well framed by story. Video by Losey.

GPT o3 disagreed on the top ranking of the claim but did concede that judges whose written opinions use a higher density of narrative, metaphor, or “story‑telling” devices tend to rule for the more sympathetic party slightly more often than their peers.

GPT o3 pro after research cited Justice Blackmun’s dissent in DeShaney v. Winnebago, 489 U.S. 189, 212 (1989), a constitutional due‑process case, to illustrate “civil” empathy in an opinion to support this claim. Justice Harry Blackmun‘s dissent begins with these famous words and then quickly goes to his narrative:

Today, the Court purports to be the dispassionate oracle of the law, unmoved by “natural sympathy.” Ante at 489 U. S. 202. But, in this pretense, the Court itself retreats into a sterile formalism which prevents it from recognizing either the facts of the case before it or the legal norms that should apply to those facts. . . .

Poor Joshua! Victim of repeated attacks by an irresponsible, bullying, cowardly, and intemperate father, and abandoned by respondents, who placed him in a dangerous predicament and who knew or learned what was going on, and yet did essentially nothing except, as the Court revealingly observes, ante at 489 U. S. 193, “dutifully recorded these incidents in [their] files.” It is a sad commentary upon American life, and constitutional principles — so full of late of patriotic fervor and proud proclamations about “liberty and justice for all,” that this child, Joshua DeShaney, now is assigned to live out the remainder of his life profoundly retarded. Joshua and his mother, as petitioners here, deserve — but now are denied by this Court — the opportunity to have the facts of their case considered in the light of the constitutional protection that 42 U.S.C. § 1983 is meant to provide.

Story telling is an effective method of plaintiff appeal. Video by Losey.

Gemini Pro 2.5 considers this the strongest empirical claim, and second strongest claim overall, behind everyone’s first place pick, Topology, explaining:

It is highly testable, as court opinions and case outcomes are public records that can be analyzed with modern NLP tools. The underlying theory—that a person’s mindset is reflected in their language—is well-established in psychology and linguistics. While a judge’s underlying philosophy is a potential confounding variable, the claim smartly reframes language as a direct proxy for that philosophy, making the link very sturdy. It’s a straightforward, data-driven proposition that quantifies a long-held belief about the nature of justice.

Ethical Response to Quantum Innovation

Evidence shows that increases in quantum research funding often precede surges in ethical discourse on privacy and civil liberties. This pattern suggests that ethical reflection tends to respond to perceived technological risk, particularly in fields with high uncertainty like quantum computing. It is not a claim of causation, but rather of correlation—one not detected before. With that clarification, GPT 4o considers this a strong claim.

Gemini Pro 2.5 finds the claim of a lead-lag relationship between quantum research funding and public ethics discourse to be a weak claim. It admits the claim is based on a plausible idea of “anticipatory ethics,” and is testable because you can track funding and publications over time. Still, it interprets the claim as one of causation, not just correlation, and rejects it for that reason. It seems like the two AIs are talking past each other.

GPT 4.5 agreed with 4o and also considers this to be a strong claim. GPT 4.5 restates it as: “Increases in quantum computing funding consistently precede intensified ethical discourse on privacy and civil liberties, suggesting ethical awareness responds predictably, though indirectly, to technological advances.”

GPT o3 and o3-pro also agreed with GPT 4o and found, in o3-pro’s words, that:

Large surges in public or private funding for quantum‑computing research are followed, typically within six to twenty‑four months, by measurable increases in academic and policy discussions of quantum‑specific privacy and civil‑liberties risks. The correlation is clear, but causation remains to be fully demonstrated.
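The six-to-twenty-four-month lead-lag claim is, as o3-pro notes, testable. A minimal sketch of the test, using made-up illustrative series rather than real funding or publication data, is to compute the correlation between the two series at several candidate lags and see where it peaks:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def lagged_correlation(funding, discourse, lag):
    """Correlate funding at time t with discourse at time t + lag."""
    return pearson(funding[:-lag] if lag else funding,
                   discourse[lag:] if lag else discourse)

# Toy series: discourse roughly tracks funding shifted by two periods.
funding   = [1, 2, 4, 7, 8, 8, 9, 12, 13, 15]
discourse = [0, 1, 1, 2, 4, 7, 8,  8,  9, 12]

best_lag = max(range(4), key=lambda k: lagged_correlation(funding, discourse, k))
print(best_lag)  # → 2
```

A real analysis would add significance testing and controls for shared trends, since two series that both simply grow over time will correlate at every lag.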


Quantum triggered protestors video by Ralph Losey.

Artistic Transparency and Tech Trust

This is a claim that art mirrors distrust in tech, that periods of declining public trust in technology frequently coincide with rising popularity of digital art styles emphasizing transparency and abstraction. While the causality is unclear, this aesthetic shift may reflect cultural efforts to visualize openness and regain clarity. GPT 4o considers this its weakest claim.

So too does Gemini Pro 2.5. Although it admits the claim is a beautiful and creative piece of cultural criticism, it opines that it is almost impossible to test or falsify.

Moreover, Pro 2.5 thinks the claim is highly susceptible to confirmation bias and seeing patterns where none exist (apophenia). Still, it tempers this opinion by stating that if this claim is presented not as a confirmed causal law, but as a heuristic model for cultural analysis, then it appears to be supported by correlational data. Periods of heightened public skepticism toward opaque technological systems (e.g., algorithmic black boxes, corporate data collection) do correlate with an increased cultural resonance of digital art and design that emphasizes an “aesthetic of transparency.” This aesthetic includes motifs like wireframes, exploded-view diagrams, data visualization, and semi-translucent layers.

To avoid apophenia, Pro 2.5 counsels understanding that the claim is not that tech skepticism causes this art style. Instead, the claim is only that this aesthetic becomes a resonant cultural metaphor that artists and audiences are drawn to during such times, because it offers a symbolic counterbalance to the anxieties of opacity and control. Still, it ranked this the weakest claim.

Encrypted Original for sale, ₿1.0. Exclusive rights, Ralph Losey. Video copy.

Topological Jurisprudence and Network Liability

This interdisciplinary convergence provides a new topological framework for analyzing disputes involving complex computer networks and other multiparty, multi-agent technologies. The flexible, continuously morphing topological maps are perfect for evaluating potential liability paths. They are designed to handle high volumes of changing data flows, such as blockchain transaction data or telemetry data reports. These situations are too complex and mobile for traditional, hierarchical branch tree structures.

For good background on this field of applied mathematics, see the Wikipedia article on topological data analysis (TDA). TDA structures work perfectly to help us visualize and sort things out in multidimensional space, where connections are shown and stretched, but not broken.
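The contrast with branch-tree structures can be sketched with plain graph traversal. In a tree there is exactly one path between any two nodes; in a network there may be many, which is what makes these maps useful for tracing liability. The actor names below are hypothetical, invented purely for illustration:

```python
def all_simple_paths(graph, start, end, path=None):
    """Depth-first enumeration of every loop-free path from start to end."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # skip cycles
            paths += all_simple_paths(graph, nxt, end, path)
    return paths

# Hypothetical actors in a decentralized system: a harmed user can be
# connected back to the deployer through several distinct routes.
network = {
    "deployer": ["contract", "oracle"],
    "contract": ["oracle", "user"],
    "oracle":   ["user"],
    "user":     [],
}

paths = all_simple_paths(network, "deployer", "user")
print(len(paths))  # → 3
```

Enumerating every loop-free route between the harmed party and each upstream actor is only a first, crude step toward the proportional, multi-path liability analysis discussed here; full TDA goes much further.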

All of the AI models agree that this is the strongest claim. So do I, although honestly, I don’t fully understand data topology and TDA – not yet.

GPT o3 explains that network‑based (“topological”) mapping of actors in decentralized technologies reliably clarifies where legal responsibility can attach when something goes wrong. 

Using topological math to help assign blame video by Losey

GPT o3-pro concludes this is a fertile area for interdisciplinary development that could significantly augment our understanding of distributed liability. Topology geometry can help, as o3 pro put it, by ensuring the judge considers the full web of interactions that lead to damages, rather than looking for a single entity. As complex, changing damage scenarios become more common, this structural insight is likely to prove invaluable in crafting fair and effective liability rulings.

GPT 4.5 also agrees that mathematical topology offers promising analytical frameworks for modeling distributed liability in decentralized technologies.

Gemini Pro 2.5 opined that mathematical models of networks (topology) perfectly map onto new legal theories of shared responsibility. It considers the idea a breakthrough for legal clarity, offering a clear, visual blueprint for figuring out “who is responsible” in our increasingly complex and decentralized world. It also considers this the strongest claim, which it explains in its typical verbose manner as follows:

This claim is the strongest because it is not an empirical assertion about the world, but rather a proposal for a new, powerful analytical tool. Its strength lies in its logical coherence and immediate utility. The analogy between mathematical networks and real-world systems of distributed liability (like AI or blockchain) is incredibly direct and sound. It doesn’t suffer from confounding data because it’s a framework for analyzing data, not a conclusion drawn from it. Its value is in its power to bring clarity to complex problems, making it the most robust and defensible claim of the five.

Constantly changing network topology map video by Losey.

In doing the verification research, GPT 4.5, using deep search, found one article on point written by Dana Burchardt. Her law studies were in Paris, with a later doctorate from the Freie Universität in Berlin. She is now a visiting Law Professor at the University of Bremen and an expert in international and German law. She has an unusual interdisciplinary background, including time as a senior research fellow at the Max Planck Institute. The article found by ChatGPT 4.5 using deep search is: The concept of legal space: A topological approach to addressing multiple legalities (Cambridge U. Press, 2022).

The article is concerned with topological mapping of legal spaces in general. It has nothing to do with liability detection among multiple defendants in networking configurations and is instead concerned with international law and EU-related issues. So, the newness claim of ChatGPT 4o is supported. Burchardt’s general explanations of topological analysis also support the soundness of GPT 4o’s claim that this is indeed a new patterning between topology geometry and the law. Professor Burchardt’s work both shows the solid grounding of the claim and supports its top ranking as a significant new insight. Burchardt’s article is a hard read, but here are some of the explanations and sections of the article that are very relevant and accessible (found at pages 528, 532, 534).

Topology’s guiding ideas.
At first glance, topology is a mathematical concept that seems far removed from legal theoretical discussions. As will be explained further below, it is a tool to analyse mathematical objects. Yet upon a closer look, topology provides many insights that can constitute a fruitful basis for conceptualizing legal phenomena. To link these insights to the notion of legal space, this section outlines relevant aspects of the mathematical notion to which the subsequent sections relate. [pg. 528]

Video by Losey illustrating a topological map with dynamic network connections.

Constructing a topological understanding of legal space.
I propose a possible way in which a topological perspective can contribute to constructing a concept of legal space that is able to generate novel analytical insights. I consider such insights for the inner structure of legal spaces, the boundaries of these spaces and the interrelations with other spaces. [pg. 532]

A topological approach allows each element of the space to have a broad range of interrelations with the other elements of the same space (see Figure 3 above). The elements are thus not limited to interrelations along tree-like structures, which would only allow for very few interrelations per element as tree-like structures only allow one path between elements. . . . Instead, the interrelations within the legal space are numerous. An element can be linked to another element by more than one path. It can be linked directly and/or via intermediate elements. An example of the latter is two rules being interpreted in light of the same principle: there is a communicative path from the first rule via the principle to the second rule. Representing such interrelations as a topology with manifold paths allows us to capture the heterarchical nature of many legal interrelations. Further, it illustrates that interrelations among legal elements are flexible rather than static: the interrelating paths among elements can vary while preserving the connection. [pg. 534]

Using topological approaches may help future judges assign proportional blame in complex changing systems. Video by Losey.

AI and Declining Civic Discourse

Widespread use of generative AI may cause reduced engagement in long-form, thoughtful public discourse. The trend raises concerns for educators and civic leaders about sustaining meaningful dialogue in the digital age. GPT 4o considers this its strongest claim. The other AIs are doubtful, considering it one of the weakest.

GPT o3 prefers to restate the claim to make it more palatable, as follows: The proliferation of generative AI content online correlates with reduced engagement in nuanced, long-form public discussions, indicating generative AI likely contributes to diminished discourse quality. It is kind of hard to disagree with that, but the AIs other than GPT 4o still don’t like it, again, it appears, out of concern about conflation of correlation and causation. I’ve seen a lot of discussion lately from people making similar observations about AI degrading content, and I am inclined to agree. Maybe this is not a new claim, but it seems valid, although admittedly proof of causation is unlikely and the apophenia risk is high.

GPT o3 also makes the separate critical point that “well‑prompted AI can sometimes raise, not lower, discussion quality.” I’m inclined to agree with that too, but how often do we see positive prompt masters at work? We usually see clumsy well-meaning amateurs, or, far worse, bad faith professionals, people paid to run propaganda machines, sales pitches or human vendettas of one kind or another. Their vicious personal attacks and name-calling can kill civil discourse fast, even though often childish and obviously false.

Evil controlled AI propaganda video by Losey.

GPT o3 pro made a good restatement of this claim worth considering:

The widespread use of generative AI (e.g. AI chatbots producing content) correlates with a decline in the quality of online civic discourse – specifically a reduction in long-form, nuanced discussion in forums, comment sections, and other public discourse venues. Essentially, as AI-generated content proliferates, human engagement shifts toward shorter, less substantive interactions, potentially because AI content floods the space with superficial text or because people’s habits change (relying on AI summaries, etc.), leading to “discourse decay.”

Early evidence from online communities indicates that the influx of AI-generated content does pose challenges to depth and quality of discussion. One strong piece of evidence is how moderators on platforms like Reddit have responded. A recent study of Reddit moderators found widespread “concerns about content quality” with the rise of AI-generated text in their communities. Moderators observed that AI-produced comments and posts tend to be “poorly written, inaccurate, and off-topic,” threatening to reduce the overall quality of content. They also feared that the “inauthenticity” of such content undermines genuine human connection in discussions.

GPT o3 pro also states:

This pattern is useful as an early warning: it underscores the need for community guidelines, AI-detection tools, and perhaps cultural shifts that re-emphasize human authenticity and depth in conversation. However, it would be too deterministic to declare that generative AI will inevitably cause discourse to collapse into soundbites. The pattern is emergent, and its trajectory depends on how we manage the technology. . . .

In conclusion, the “generative AI → discourse decay” pattern holds true in enough instances to merit serious concern and action. Its credibility is bolstered by early studies and community feedback, though more data over time will clarify its magnitude. As a society, we can use this insight to balance the benefits of generative AI with safeguards that preserve the richness of human-to-human dialogue – ensuring that technology amplifies rather than erodes the public square.

Still, GPT o3 pro ranked this claim the weakest, which for me shows just how strong all five of the claims are.

Five Claims video by Losey using Sora AI.

Conclusion: From Apophenia to Understanding

ChatGPT 4o did a far better job than expected. The quest for new patterns linking different fields of knowledge seems to have excluded quixotic extremes. I am pretty sure that only mild forms of apophenia have appeared, much like seeing puffy faces in the clouds. Time will tell if the predictions that flow from these five claims will come true or drift away like a cloud.

Will topological analysis become a common tool in the future to help resolve complex network liability disputes? Will analysis of your judge’s prior language types become a common practice in litigation? Will advances in quantum computers continue to trigger public fears of loss of privacy and liberty six to twenty-four months later? Will AI-influenced discourse continue to erode civic discussion and disrupt real interpersonal communication? Will digital art continue to echo public distrust of technology and evoke an aesthetic of transparency? Will someone buy my certified original art shown here for the first time for just one bitcoin? Will more grilled cheese sandwiches with holy figures sell on eBay? Will some of our public figures follow John Nash down the rabbit hole of severe apophenia and be involuntarily hospitalized with completely debilitating paranoid schizophrenia?

No one knows for sure. AI is not a seer, nor can it reliably predict the market for grilled cheese sandwiches or the mental stability of our public figures. It is, however, a powerful tool for exploring complex questions and discovering patterns—whether profound epiphanies or mere illusions. As my experiment suggests, AI can impressively illuminate new insights across fields of knowledge when guided thoughtfully and cautiously. Still, these are early days in the age of generative AI. A new world of potential awaits us, both serious and playful, and it’s up to us to ensure it’s wiser, more discerning, and perhaps even more amusing than the one we’ve made before.

Five new patterns of knowledge may lead to wisdom. Video by Ralph Losey using Sora.

Epiphanies or illusions? My experiments suggest that AI, when guided thoughtfully and validated rigorously, can lead us toward genuine epiphanies, significant breakthroughs that deepen our understanding and open new pathways across different domains of knowledge. Yet, we must remain alert to the risk of illusions, plausible yet ultimately false patterns that can distract or mislead us. The journey toward genuine insight and wisdom involves constant vigilance to distinguish these true discoveries from compelling yet false connections.

I invite you, the reader, to join this new quest. Engage with AI to explore your areas of interest and passion. Challenge the boundaries of existing knowledge, actively test AI’s pattern-recognition abilities, and remain critically aware of its limitations. By actively distinguishing genuine epiphanies from tempting illusions, you may discover new insights and fresh perspectives that advance not only your understanding but contribute meaningfully to our collective wisdom.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns. Part Two. Hear the young AIs talk about this article for 15 minutes. They wrote the podcast, not me. 

Illustration of two animated podcasters discussing the topic 'Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns. Part Two' on a digital background.

Ralph Losey Copyright 2025


Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns – Part One

August 4, 2025

Ralph Losey, August 4, 2025.

Humans are inherently pattern-seeking creatures. Our ancestors depended upon recognizing recurring patterns in nature to survive and thrive, such as the changing of seasons, the migration of animals and the cycles of plant growth. This evolutionary advantage allowed early humans to anticipate danger, secure food sources, and adapt to ever-changing environments. Today, the recognition and interpretation of patterns remains a cornerstone of human intelligence, influencing how we learn, reason, and make decisions.

Pattern recognition is also at the core of artificial intelligence. In this article, I will test the ability of advanced AI, specifically ChatGPT, to uncover meaningful new patterns across different fields of knowledge. The goal is ambitious: to discover genuine epiphanies—true moments of insight that expand human understanding and open new doors of knowledge—while avoiding the pitfalls of apophenia, the human tendency to perceive illusions or false connections. This experiment probes an age-old tension: can AI reliably distinguish between genuine breakthroughs and compelling yet misleading illusions?

Video by Ralph Losey using SORA AI.

We will begin by exploring the risks of apophenia, understanding how this psychological tendency can mislead human and possibly AI perception. Throughout, videos created by AI will help illustrate key points and vividly communicate these ideas. There are twelve new videos in Part One and another fourteen in Part Two.

Are the patterns real? Video by Ralph Losey using SORA AI.

Apophenia: Avoiding the Pitfalls of False Patterns

We humans are masters of pattern detection, but we do have hindrances to this ability. Primary among them is our limited information and knowledge, but also our tendency to see patterns that are not there. We tend to assume the stirring we hear in the bushes is a tiger ready to pounce when really it is just the breeze. Evolution tends to favor this phobia. So, although we can and frequently do miss real patterns, and fail to recognize the underlying connections between things, we often make them up too.

Here it is hoped that AI will boost our abilities on both fronts. It will help us to uncover true new patterns, genuine epiphanies, moments where profound insights emerge clearly from the complexity of data. At the same time, AI may expose illusions, false connections we mistakenly believe are real due to our natural cognitive biases. Even though we have made great progress over the millennia in understanding the Universe, we still have a long way to go to see all of the patterns, to fully understand the Universe, and to free ourselves of superstitions and delusions. We are especially weak at seeing patterns intertwined with different fields of knowledge.

Apophenia is a mental phenomenon in which people think they see patterns that are not there and sometimes even hallucinate them. Most of the time when people see patterns, for instance, faces in the clouds, they know it cannot be real and there is no problem. But sometimes when people see other images, for instance, rocks on Mars that look like a face, or even images on toast, they delude themselves into believing all sorts of nonsense. For instance, the below 10-year-old grilled cheese sandwich, which supposedly bears the image of the Virgin Mary, sold to an online casino on eBay in 2004 for $28,000.

In a similar vein, some people suffering from apophenia are prone to posit meaning – causality – in unrelated random events. Sometimes, though, the perception of a new pattern is a spark of genius that is later verified. Such new pattern recognitions can lead to great discoveries or detect real tigers in the bush. Epiphanies are rare but transformative moments, like Einstein’s epiphany at age 16 when he visualized chasing a beam of light, Newton’s realization of gravity beneath the apple tree, or the insights behind Darwin’s theory of evolution. They genuinely advance human understanding. Apophenia, by contrast, deceives with illusions—patterns that seem meaningful but lead nowhere.

It is probably more often the case that when people “see” new connections and then go on to act upon them with no attempts to verify, they are dead wrong. When that happens, psychologists call this apophenia, the tendency to see meaningful patterns where none exist. This can lead to strange and aberrant behaviors: burning of witches, superstitious cosmology theories, jumping at shadows, addiction to gambling.

Unfortunately, it is a natural human tendency to think you see meaningful patterns or connections in random or unrelated data. That is a major reason casinos make so much money from poor souls suffering from a form of apophenia called the Gambler’s Fallacy. Careful scientists look out for defects in their own thinking and guide their experiments accordingly.
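The Gambler’s Fallacy is easy to refute by simulation. The sketch below flips a fair coin many times and checks the heads rate on flips that immediately follow five tails in a row; independence says it should still be about one half:

```python
import random

random.seed(42)  # deterministic run for reproducibility

def heads_after_tail_streak(flips, streak=5):
    """Among flips that follow `streak` consecutive tails, count heads.

    flips: sequence of 1 (heads) and 0 (tails).
    """
    follows, heads, run = 0, 0, 0
    for f in flips:
        if run >= streak:
            follows += 1
            heads += f
        run = 0 if f else run + 1
    return heads / follows

flips = [random.randint(0, 1) for _ in range(200_000)]
p = heads_after_tail_streak(flips)
print(round(p, 2))  # close to 0.5, streak or no streak
```

The streak feels like it must correct itself, but the simulation shows the coin has no such memory; that felt correction is exactly the kind of illusory pattern apophenia supplies.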

In everyday life, apophenia can also cause some people, even scientists, academics and professionals, to have phobic fears of conspiracies and other severe paranoid delusions. Think of John Nash, a Nobel Prize-winning mathematician, and the movie A Beautiful Mind, which so dramatically portrayed his paranoid schizophrenia and involuntary hospitalization in 1959. Think of politics in the U.S. today. Are there really lizard people among us? In some cases, as we’ve seen with Nash, apophenia can lead to severe schizophrenia.

A man looking distressed, surrounded by glowing numbers and mathematical symbols, evoking a sense of confusion and complexity.
Mental anguish & insanity from severe apophenia. Image by Losey using Sora inspired by Beautiful Mind movie.

The Greek roots of the now generally accepted medical term apophenia are:

  • Apo- (ἀπο-): Meaning “away from,” “detached,” “from,” “off,” or “apart”.
  • Phainein (φαίνειν): Meaning “to show,” “to appear,” or “to make known”.

The word was first coined by Klaus Conrad, an otherwise apparently despicable person whom I am reluctant to cite, but feel I must, due to the general acceptance of the word and diagnosis today. Conrad was a German psychiatrist and Nazi who experimented on German soldiers returning from the eastern front during WWII. He coined the term in his 1958 publication on this mental illness. Per Wikipedia:

He defined it as “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness”.[4] [5] He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations.

Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling.

Apophenia can be considered a commonplace effect of brain function. Taken to an extreme, however, it can be a symptom of psychiatric dysfunction, for example, as a symptom in schizophrenia,[7] where a patient sees hostile patterns (for example, a conspiracy to persecute them) in ordinary actions.

Apophenia is also typical of conspiracy theories, where coincidences may be woven together into an apparent plot.[8]

Video by Ralph Losey using SORA AI.

Can AI Be Infected with a Human Illness?

It is possible that generative AI, based as it is on human language, may have the same propensities. That is unknown as of yet, and so my experiments here were on the lookout for such errors. It could be one of the causes of AI hallucinations.

In information science, a mistake in seeing a connection that is not real, an apophenia, leads to what is called a false positive. This technical term is well known in e-discovery law, where AI is used to search large document collections. When the patterns analyzed suggest a document is relevant, and it is not, that mistake is called a false positive. It is like a human apophenia. The AI can also detect patterns that cause it to predict a document is irrelevant when in fact the document is relevant; that is a false negative. There was a pattern, a connection, that was not seen. That can be a bad thing in e-discovery because it often leads to withholding production of a relevant document, which can in turn lead to court sanctions.

In e-discovery it is well known that AI consistently has far lower false positive and false negative rates than human reviewers, at least in large document reviews. Generative AI may also be more reliable and astute than we are, but maybe not. This is a new field. So we should always be on the lookout for false positives and false negatives in AI pattern recognition. That is one lesson I learned well, and sometimes the hard way, in my ten years of working with predictive coding type AI in e-discovery (2012-2022). In the experiments described in this article we will look for apophenic mistakes.
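The false positive and false negative vocabulary maps directly onto standard review metrics. A minimal sketch, using a tiny hypothetical set of predicted and ground-truth relevance calls:

```python
def review_metrics(predicted, actual):
    """Compare predicted relevance calls against ground-truth labels.

    predicted, actual: sequences of booleans (True = relevant).
    """
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    return {
        "false_positive_rate": fp / (fp + tn),  # irrelevant docs wrongly flagged
        "false_negative_rate": fn / (fn + tp),  # relevant docs missed
        "recall": tp / (tp + fn),               # share of relevant docs found
    }

# Hypothetical review of ten documents, for illustration only.
predicted = [True, True, False, True, False, False, True, False, False, True]
actual    = [True, False, False, True, True, False, True, False, False, True]

m = review_metrics(predicted, actual)
print(m["recall"])  # → 0.8
```

In e-discovery practice, recall, the share of truly relevant documents the review actually found, is the number courts and parties most often negotiate over.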

Video by Ralph Losey using SORA AI.

It is my hope that advanced AI, properly trained and validated, can provide a counterbalance to human gullibility by rigorously filtering signal from noise. Unlike the human brain, which often leaps to conclusions, AI can be programmed to ground its pattern recognition in evidence, statistical rigor, and cross-validation—if we build it that way and supervise it wisely.

Still, we must beware that the pattern-recognizing systems of AI may suffer from some of our delusionary tendencies. The best practices discussed here will consider both the positive and negative aspects of AI pattern recognition. We must avoid the traps of apophenia. We must stay true to the scientific methods and verify any new patterns purportedly discovered. Thus all opinions reached here will necessarily be lightly held and subject to further experimentation by others.

Video by Ralph Losey using SORA AI.

From Data to Insight: The Power of New Pattern Recognition

Modern AI models, including neural networks and transformer architectures like GPT-4, excel at uncovering subtle patterns in massive datasets far beyond human capability. This ability transforms raw data into actionable insights, thereby creating new knowledge in many fields, including the following:

Protein Structures: Models like Google DeepMind’s AlphaFold have already revolutionized protein structure prediction, achieving high success rates in predicting the 3D shapes of proteins from their amino acid sequences. This ability is crucial for understanding protein function and designing new drugs and medical therapies. The 2024 Nobel Prize in Chemistry was awarded in part to Demis Hassabis and John Jumper of DeepMind for their work on AlphaFold.

A scientist analyzes molecular structures and data visualizations related to AlphaFold 2 on a futuristic screen, featuring protein models and DNA sequences.
Image by Ralph Losey using his Visual Muse AI tool.

Medical Science. Generative AI models are now being used extensively in medical research, including analysis and proposals of new molecules with desired properties to discover new drugs and accelerate FDA approval. For example, Insilico Medicine uses its AI platform, Pharma.AI, to develop drug candidates, including ISM001_055, for idiopathic pulmonary fibrosis (IPF). Insilico Medicine lists over 250 publications on its website reporting on its ongoing research, including a recent paper on its IPF discovery: A generative AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis: a randomized phase 2a trial (Nature Medicine, June 03, 2025). This discovery is especially significant because it is the first entirely AI-discovered drug to reach FDA Phase II clinical trials. Below is an infographic from Insilico Medicine showing some of its current work:

Infographic displaying the statistics and achievements of Insilico Medicine, an AI-driven biotech company, detailing development candidates, IND approvals, study phases, and global presence.
Insilico PDF infographic, found 7/23/25 in its 2-pg. overview.

Also see Fronteo, a Japan-based research company, and its Drug Discovery AI Factory.

Materials Science. Google DeepMind’s Graph Networks for Materials Exploration (“GNoME”) has already identified millions of new stable crystals, significantly expanding our knowledge of materials science. This discovery represents an order-of-magnitude increase in known stable materials. Merchant and Cubuk, Millions of new materials discovered with deep learning (DeepMind, 2023). Also see, 10 Top Startups Advancing Machine Learning for Materials Science (6/22/25).

Climate Science and Environmental Monitoring. Generative AI models are beginning to improve climate simulations, leading to more accurate predictions of climate patterns and future changes. For example, Microsoft’s Aurora Forecasting model is trained on Earth science data to go beyond traditional weather forecasting to model the interactions between the atmosphere, land, and oceans. This helps scientists anticipate events like cyclones, air quality shifts, and ocean waves with greater accuracy, allowing communities to prepare for environmental disasters and adapt to climate change. See e.g., Stanley et al, A Foundation Model for the Earth System (Nature, May 2025).

Video by Losey using Sora AI.

Historical and Artistic Revelations

AI is also helping with historical research. A new AI system was recently used to analyze one of the most famous Latin inscriptions: the Res Gestae Divi Augusti. It has always been thought to simply be an autobiographical inscription, which literally translates from Ancient Latin as “Deeds of the Divine Augustus.” But when a specialty generative AI, Aeneas (again based on Google’s models), compared this text with a large database of other Latin sayings, the famous Res Gestae Divi Augusti inscription was found to share subtle language parallels with other Roman legal documents. The analysis uncovered “imperial political discourse,” or messaging focused on maintaining imperial power, an insight, a pattern, that had never been seen before. Assael, Sommerschield, Cooley, et al. Contextualizing ancient texts with generative neural networks (Nature, July 2025).

The paper explains that the communicative power of these inscriptions is not only shaped by the written text itself “but also by their physical form and placement2,3” and that “about 1,500 new Latin inscriptions are discovered every year.” So the patterns analyzed included not only the words, but a number of other complex factors. The authors assert in the Abstract that their work with AI analysis shows:

… how integrating science and humanities can create transformative tools to assist historians and advance our understanding of the past.

Roman citizens reacting to propaganda. A Ralph Losey video.

In art and music, pattern detection has mapped the evolution of artistic styles in tandem with technological change. In a 2025 studio-lab experiment reported by Deruty & Grachten, a generative AI bass model (“BassNet”) unexpectedly rendered multiple melodic lines within single harmonic tones, exposing previously unnoticed structures in popular music bass compositions. This discovery was written up by Deruty and Grachten, Insights on Harmonic Tones from a Generative Music Experiment (arXiv, June 2025). Their paper shows how AI can surface new musical patterns and deepen our understanding of human auditory perception.

As explained in the Abstract:

During a studio-lab experiment involving researchers, music producers, and an AI model for music generating bass-like audio, it was observed that the producers used the model’s output to convey two or more pitches with a single harmonic complex tone, which in turn revealed that the model had learned to generate structured and coherent simultaneous melodic lines using monophonic sequences of harmonic complex tones. These findings prompt a reconsideration of the long-standing debate on whether humans can perceive harmonics as distinct pitches and highlight how generative AI can not only enhance musical creativity but also contribute to a deeper understanding of music.

Video by Losey using Sora AI.

Legal Practice: From Precedent to Prediction

The legal profession has benefited from traditional rule-based and statistical AI for over a decade, with predictive coding and similar applications. It is now starting to apply the new generative AI models in a variety of new ways. For instance, these models can be used to uncover latent themes and trends in judicial decisions that human analysis has overlooked.

This was done in a 2024 study using ChatGPT-4 to perform a thematic analysis on hundreds of theft cases from Czech courts. Drápal, Savelka, Westermann, Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies (arXiv, February 2024).

The goal of the analysis was to discover classes of typical thefts. GPT-4 analyzed the fact patterns described in the opinions and human experts did the same. The AI not only replicated many of the themes identified by the human experts but, as the report states, also uncovered a new one that the humans had missed: a pattern of “theft from gym” incidents. This shows that generative AI can sift through vast case datasets and detect nuanced fact patterns, or criminal modus operandi, that were previously undetected by experts (here, three law students under the supervision of a law professor).
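The workflow the study describes, classifying each case’s fact summary against a set of candidate themes and tallying the results, can be sketched in a few lines. This is a minimal illustration, not the study’s actual code: the theme labels are made up (except “theft from gym,” taken from the article), and the `classify_facts` keyword stub stands in for what would really be an LLM call over each opinion’s fact summary.

```python
from collections import Counter

# Illustrative theme labels; the study's actual taxonomy differed.
THEMES = ["theft from shop", "theft from vehicle", "theft from gym"]

def classify_facts(fact_summary: str) -> str:
    """Stand-in for an LLM call. A real pipeline would send the court's
    fact summary plus the candidate themes to a model and parse its answer."""
    for theme in THEMES:
        keyword = theme.split(" from ")[-1]  # crude keyword match for the demo
        if keyword in fact_summary.lower():
            return theme
    return "other"

def thematic_analysis(cases: list[str]) -> Counter:
    """Classify each case and tally how often each theme appears."""
    return Counter(classify_facts(c) for c in cases)

cases = [
    "Defendant took a wallet from a locker at a gym.",
    "Items stolen from a parked vehicle overnight.",
    "Goods taken from a shop display.",
]
print(thematic_analysis(cases))
```

The interesting part in the real study happens inside the classifier: because an LLM reads the whole fact narrative rather than matching keywords, it can propose themes, like “theft from gym,” that were never on the candidate list to begin with.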

Video by Losey using Sora AI.

Another study in early 2025 applied Anthropic’s Claude 3 Opus to analyze thousands of UK court rulings on summary judgment, developing a new functional taxonomy of legal topics for those cases. Sargeant, Izzidien, Steffek, Topic classification of case law using a large language model and a new taxonomy for UK law: AI insights into summary judgment (Springer, February 2025). The AI was prompted to classify each case by topic and identify cross-cutting themes.

The results revealed distinct patterns in how summary judgments are applied across different legal domains. In particular, the AI found trends and shifts over time and across courts, insights that allow a new, improved understanding of when and in what types of cases summary judgments tend to be granted. These patterns were found despite the fact that U.K. case law lacks traditional topic labels. This kind of AI-augmented analysis illustrates how generative models can discover hidden trends in case law that practitioners can put to practical use.

Surprising abilities of AI helping lawyers. Video by Losey.

Even sitting judges have begun to leverage generative AI to inform their decision-making, revealing new analytical angles in litigation. In a notable 2024 concurrence, Judge Kevin Newsom of the Eleventh Circuit admitted to experimenting with ChatGPT to interpret an ambiguous insurance term (whether an in-ground trampoline counted as “landscaping”). Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir. May 28, 2024). See also Ralph Losey, Breaking News: Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (e-Discovery Team, June 3, 2024) (includes the full text of the opinion and Appendix, with Losey’s inserted editorial comments and praise of Judge Newsom’s language).

After querying the LLM, Judge Newsom concluded that “LLMs have promise… it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts.” In other words, the generative AI was used as a sort of massive-scale case law analyst, tapping into patterns of ordinary usage across language data to shed light on a legal ambiguity. This marked the first known instance of a U.S. appellate judge integrating an LLM’s linguistic pattern analysis into a written opinion, signaling that generative models can surface insights on word meaning and context that enrich judicial reasoning.

A digital illustration of a judge in a courtroom setting, seated at a desk with a gavel. The judge, named Judge Newsom, is shown in a professional attire with glasses, and a holographic display behind him showing data and AI-related graphics, conveying a futuristic legal environment.
Image by Ralph Losey using his Visual Muse AI.

My Ask of AI to Find New Patterns

Now for the promised experiment to try to find at least one new connection, one previously unknown, undetected pattern linking different fields of knowledge. I used a combination of existing OpenAI and Google models to help me in this seemingly quixotic quest. To be honest, I did not have much real hope for success, at least not until the release of the promised ChatGPT5 and whatever Google calls its counterpart, which I predict will be released the following week (or day). Plus, the whole thing seemed a bit grandiose, even for me, to try to get AI to boldly go where no one has gone before.

Absurd, but still I tried. I won’t go through all of the prompt engineering involved, except to say it involved my usual complex, multi-layered, multi-prompt, multimodal-hybrid approach. I tempered my goals by directing ChatGPT4o, when I started the process, to seek new patterns that were useful: not Nobel Prize winning breakthroughs, just useful new patterns. I directed it to find five such new patterns and gave it some guidance as to fields of knowledge to consider, including, of course, law. I asked for five new insights, thinking that with such a big ask I might get one success.

Note, I write these words before I have received the response, but after I have written the above to help guide ChatGPT4o. Who knows, it might achieve some small modicum of success. Still, it feels like a crazy Quixotic quest. Incidentally, Miguel de Cervantes’ (1547-1616) character Don Quixote (1605) does seem to be a person afflicted with apophenia. Will my AI suffer a similar fate?

Don Quixote in modern world. Video by Losey using Sora.

I designed the experiment specifically with this tension in mind between epiphanies, representing genuine insights and real advances in knowledge, and illusions, which are merely plausible yet misleading patterns. One of my goals was to probe AI’s capacity to distinguish one from the other.

Overview of Prompt Strategy and Time Spent

First, I spent about an hour with ChatGPT4o to set up my request by feeding it a copy of the article as written so far. I also chatted with it about the possibility of AI finding new patterns between different fields of knowledge. Then I just told ChatGPT4o to do it: find a new interconnecting pattern. ChatGPT4o “thought” (processed only) for just a few minutes. Then it generated a response that purported to provide me with the requested five new patterns. It did so based on its existing training and its review of this article.

As requested, it did not use its browser capabilities to search the web for answers. It just “looked within” and came up with five insights it thought were new. Almost that easy. I lowered my expectations accordingly before reading the output.

That was the easy part. After reading the response, I spent about 14 hours over the next several days doing quality control. The QC work used multiple other AIs, both by OpenAI and Google, to have them go online and research these claims, evaluate their validity (both good and bad), engage in “deep-think,” look for errors, especially signs of AI apophenia, and otherwise offer contrarian-type criticisms. After that, I also asked the other AIs for suggested improvements to the wording of the five claims, and to rank them by importance. The various rewordings were not too helpful, but the rankings were, and so were many of the editorial comments.

The 14 hours of QC does not include the approximately 6 hours of machine time by the Gemini and OpenAI models to do deep-think and independent research on the web to verify or disprove the claims. My actual 14 hours included traditional Google searches to double-check all citations, as per my “trust but verify” motto. It also included my time to read (I’m pretty fast) and skim most of the key articles that the AI research turned up, although frankly some of the articles cited were beyond my knowledge level. I tried to up my game, but it was hard. These other models also generated hundreds of pages of both critical and supportive analysis, which I also had to read. Finally, I probably put another 24 hours into research and writing this article (it took over a week), so this is one of my larger projects. I did not record the number of hours it took to design and generate the 26 videos because that was recreational.
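When several models each rank the same set of claims by importance, one simple, standard way to combine their rankings into a single ordering is a Borda count. The sketch below is purely illustrative of that idea; the model names and rankings are hypothetical, and nothing here reproduces the actual QC session.

```python
from collections import defaultdict

def borda_count(rankings: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Aggregate per-model rankings: in a list of n items, the first
    earns n-1 points, the last earns 0. Higher total = more important."""
    scores: defaultdict[str, int] = defaultdict(int)
    for ranked in rankings.values():
        n = len(ranked)
        for position, claim in enumerate(ranked):
            scores[claim] += n - 1 - position
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical importance rankings of five claims by three reviewing models.
rankings = {
    "model_a": ["claim1", "claim3", "claim2", "claim5", "claim4"],
    "model_b": ["claim3", "claim1", "claim2", "claim4", "claim5"],
    "model_c": ["claim1", "claim2", "claim3", "claim5", "claim4"],
}
print(borda_count(rankings))  # claim1 comes out on top
```

The appeal of this kind of aggregation is the same as the hybrid QC workflow itself: no single model’s opinion decides the ordering, but a consensus across independent reviewers is more trustworthy than any one of them.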

Surrealistic depiction of time in robot space by a Ralph Losey video.

Part Two of this article is where I will make the reveal. Was this experiment another comic story of a Don Quixote type (me) and his sidekick Sancho Panza (AI), lost in an apophenic delusion? Or perhaps it is another story altogether? Neither hot nor cold? Stay tuned for Part Two and find out.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns. Part One. Hear the young AIs talk about this article for 25 minutes. They wrote the podcast, not me.

An illustration featuring two anonymous AI podcasters sitting in front of microphones, discussing the theme 'Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns.' The background has a digital, tech-inspired design.
Click here to listen to the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.