Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago

October 24, 2025

Meanwhile, Even Bigger Breakthroughs by Google Continue

By Ralph Losey, October 21, 2025.

The Nobel Prize in Physics was just awarded to quantum physics pioneers John Clarke, Michel H. Devoret, and John M. Martinis for discoveries they made at UC Berkeley in the 1980s. They proved that quantum tunneling, where subatomic particles can break through seemingly impenetrable barriers, can also occur in the macroscopic world of electrical circuits. So yes, Schrödinger's cat really could die.

A digital illustration featuring three scientists with varying facial expressions, posed in a futuristic setting, symbolizing breakthroughs in quantum computing. In the foreground, there is an artistic depiction of a cat with a skull overlay, creating a surreal contrast.
Quantum Physics Pioneers take home the Nobel Prize: John Clarke, Michel H. Devoret, and John M. Martinis. All images in this article are by Ralph Losey using AI image generation tools.

Their experiments showed that entire circuits can behave as single quantum objects, bridging the gap between theory and engineering. That breakthrough insight paved the way for construction of quantum computers, including the latest by Google.

Both Devoret and Martinis were recruited years ago by Google to help design its quantum processors. Although John Martinis (right, in the image above) recently departed to start his own company, Qolab, Michel Devoret (center) remains at Google Quantum AI as the Chief Scientist of Quantum Hardware. Last year, two other Google scientists, John Jumper and Demis Hassabis, shared a Nobel prize in chemistry for their groundbreaking work in AI.

Google is clearly on a roll here. As Google CEO Sundar Pichai joked in his congratulatory post on LinkedIn: “Hope Demis Hassabis and John Jumper are teaching you the secret handshake.”

A human hand shakes a holographic robotic hand in front of a quantum computer, with a Google logo in the background.
The secret handshake to Google’s Nobel Prizes is the combination of AI and Quantum.

🔹 Willow Breaks Through Its Own Barriers

Less than a year ago, Google's new quantum chip, Willow, tunneled through its own barriers, performing in five minutes a calculation that would have taken ten septillion years (10²⁵) on the fastest classical supercomputers. That's far longer than anyone's estimate for the age of our universe, a good definition of mind-boggling.

This result led Hartmut Neven, director of Google's Quantum Artificial Intelligence Lab, to suggest it offers strong evidence for the many-worlds or multiverse interpretation of quantum mechanics: the idea that computation may occur across near-infinite parallel universes. Neven and a number of leading researchers subscribe to this view.

I explored that seemingly crazy hypothesis in Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan 9, 2025). Oddly enough, it became my most-read article of all time. Thank you, readers.

Today's piece updates that story. The Nobel Prize recognition is icing on the cake, but progress has not slowed. Quantum computing and the law remain one of the most exciting frontiers in legal-tech. So much so that I'm developing a short online course on quantum computing and law, with more courses on prompt engineering for legal professionals coming soon. Subscribe to e-DiscoveryTeam.com to be notified when they launch.

The work of this year's Nobel laureates, Clarke, Devoret, and Martinis, was done forty years ago, so delay in recognition is hardly unusual in this field. Perhaps someday Neven and other many-worlds interpreters of quantum physics will receive their own Nobel Prize for demonstrating multiverse-scale applications. In my view, far more evidence than speed alone will be required.

After all, it defies common sense to imagine, as the multiverse hypothesis suggests, that every quantum event splits reality, spawning a near-infinite array of universes: for example, one where Schrödinger's cat is alive and another, slightly different universe where it is dead. It makes Einstein's "spooky action at a distance" seem tame by comparison.

An illustrated depiction of Schrödinger's cat concept, featuring a cartoon cat and a skeleton inside a wooden box, symbolizing the quantum mechanics thought experiment.
Spooky questions: Why are ‘you’ conscious in this particular universe? Are you dead in another?

In the meantime, whatever the true mechanism, quantum computers and AI are already producing tangible social and legal consequences in cryptography, cybercrime, and evidentiary law. See, The Quantum Age and Its Impacts on the Civil Justice System (RAND, April 29, 2025); Quantum-Readiness: Migration to Post-Quantum Cryptography (NIST, NSA, August 2023); Quantum Computing Explained (NIST, 8/22/2025); but see, Keith Martin, Is a quantum-cryptography apocalypse imminent? (The Conversation, 6/2/25) ("Expert opinion is highly divided on when we can expect serious quantum computing to emerge," with estimates ranging from imminent to 20 years or more.)

Whether you believe in the multiverse or not, the practical implications for law and technology are already arriving.

Abstract illustration representing the multiverse theory with multiple cosmic spheres and the text 'MULTIVERSE THEORY' and 'INFINITE PARALLEL UNIVERSES'.
Might this theory someday seem like common sense? Or will most Universes discard it as another ‘spooky’ idea of experimental scientists?

🔹 Atlantic Quantum Joins Google Quantum AI

On October 2, 2025, Hartmut Neven, Founder and Lead, Google Quantum AI, announced in a short post titled "We're scaling quantum computing even faster with Atlantic Quantum" that Google had just acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware. The announcement, written in Neven's signature understated style, framed the deal as a practical step on Google's long road toward "a large error-corrected quantum computer and real-world applications."

Neven explained that Atlantic Quantum's modular chip stack, which integrates qubits and superconducting control electronics within the cryogenic stage, will allow Google to "more effectively scale our superconducting qubit hardware." That phrase may sound routine to non-engineers, but it represents a significant leap in design philosophy: merging computation and control at the cold stage reduces signal loss, simplifies architecture, and makes modular scaling, the key to fault-tolerant machines, realistically achievable. This is another great acquisition by Google.

Independent reporting quickly confirmed the deal's importance. In Atlantic Quantum Joins Google Quantum AI, The Quantum Insider's Matt Swayne summarized it succinctly:

• Google Quantum AI has acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware, to accelerate progress toward error-corrected quantum computers. . . .
• The deal underscores a broader industry trend of major technology companies absorbing research-intensive startups to advance quantum computing, a field still years from large-scale commercial deployment.

The article noted that the integration of Atlantic Quantum's modular chip-stack technology into Google's program was aimed at one of quantum computing's toughest engineering hurdles: scaling systems to become practical and fault-tolerant.

The MIT-born startup, founded in 2021 by a group of physicists determined to push superconducting design beyond incremental improvements, focused on embedding control electronics directly within the quantum processor. That approach reduces noise, simplifies wiring, and makes modular expansion far more realistic. For another take on the Atlantic story, see Atlantic Quantum and Google Quantum AI are โ€œJoining Upโ€ (Quantum Computing Report, 10/02/25).

These articles place the transaction within a broader wave of global investment in quantum technologies. Large-scale commercial deployment may still be years away, but the industry has already entered a phase of consolidation. Research-heavy startups are increasingly being absorbed by major technology companies, a predictable evolution in a field defined by extraordinary capital demands and complex technical challenges.

For Google, the acquisition is less about headlines and more about infrastructure control, owning every layer of the superconducting stack from design to fabrication. For the industry, it signals that the next phase of quantum development will likely follow the same arc as classical computing: early-stage innovation absorbed by large, well-capitalized firms that can bear the cost of scaling.

For lawyers and regulators, that pattern has familiar consequences: intellectual-property concentration, antitrust scrutiny, export-control compliance, and the evidentiary standards that will eventually govern how outputs from such corporate-owned quantum systems are regulated and presented in court.

An illustration depicting the concept of innovation in the technology industry, contrasting 'Early-Stage Innovation' represented by small fish and a light bulb, with 'Large, Well-Capitalized Firms' represented by a shark featuring the Google logo. The background includes circuit patterns, symbolizing the tech ecosystem.
Familiar pattern and legal issues continue in our Universe.

🔹 Willow and the Many-Worlds Question

Before the Nobel bell rang in Stockholm, Google's Quantum AI group had already changed the conversation with its Willow processor.

In my earlier piece on Willow's mind-bending computations, I quoted Hartmut Neven's 'parallel universes' framing to describe its behavior. Some heard music; some heard marketing. Others, like me, saw trouble ahead.

The Nobel Prize did not validate the many-worlds interpretation of quantum mechanics, nor did it disprove it. Neven has not backed away from the theory, nor have others, and Neven has just gotten the best talent from MIT to join his group. What the Nobel Prize did confirm, beyond any reasonable doubt, is that macroscopic superconducting circuits, at a size you can see, can exhibit genuine quantum behavior under controlled laboratory conditions. That is the solid foundation a judge or regulator can stand on: devices now exist in our world that generate outputs with quantum fingerprints reproducible enough to test and verify.

Meanwhile, the frontier continues to move. In September 2025, researchers at UNSW Sydney demonstrated entanglement between two atomic nuclei separated by roughly twenty nanometers. See, "New entanglement breakthrough links cores of atoms, brings quantum computers closer" (The Conversation, Sept. 2025). Twenty nanometers is not big, but it is large enough to measure.

Moreover, even though the electrical circuits themselves are large enough to photograph, the quantum behavior itself was not visible; it could only be measured indirectly. The researchers used coupled electrons as what lead scientist Professor Andrea Morello called "telephones" to pass quantum correlations and make those measurements.

An artistic representation of quantum entanglement, featuring glowing atomic particles connected by luminous paths, illustrating the complex interactions in quantum mechanics.
Electrons acting like telephones passing quantum correlations on measurable scales.

The telephone metaphor is apt. It captures the engineering ambition behind the resultโ€”connecting quantum rooms with wires, not whispers. Whispers don’t echo. Entanglement is not a philosophical idea; it is a measurable resource that can be distributed, controlled, and eventually commercialized. It can even call home.

For the legal system, this is where things become concrete. When entanglement leaves the lab and enters communications or sensing devices, courts will be asked to evaluate evidence that can be measured and described but cannot be seen directly. The question will no longer be "Is this real?" but "How do we authenticate what can be measured but not observed?"

That's the moment when the physics of quantum control becomes the jurisprudence of evidence, and it's coming faster than most practitioners realize.

A surreal painting depicting several figures whispering to each other in an arched, dimly lit setting, with wave-like patterns of light radiating from a central source.
Whispers Don’t Echo.

🔹 Defining the Echo: When Evidence Repeats With a Slight Accent

The many-worlds interpretation of quantum mechanics has always sat on the thin line between physics and philosophy. First proposed in 1957 by Hugh Everett, it replaces the familiar 'collapse' of the wave-function with a more radical notion: every quantum event splits reality into separate branches, each continuing independently. Some brilliant physicists take it seriously; others reject it; many remain agnostic. Courts need not resolve that debate. For law, the relevant question is simpler: can a party show a method that reliably connects a claimed quantum mechanism to a particular output? If yes, the court's job is to hear the evidence. If not, the court's job is to exclude it.

In its early decades, the idea was mostly dismissed as metaphysical excess. Then Bryce DeWitt, David Deutsch, Max Tegmark, and Sean Carroll each found ways to refine and defend it. David Deutsch, known as the Father of Quantum Computing, first argued that quantum computers might actually use this multiplicity to perform computations, with each universe branch carrying part of the load. See e.g., Deutsch, The Fabric of Reality: The Science of Parallel Universes and Its Implications (Penguin, 1997) (Chapter 9, Quantum Computers). Deutsch even speculates in his next (2011) book, The Beginning of Infinity (p. 294), that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.

The many-worlds argument, once purely theoretical, gained traction after Google's Willow experiments. Hartmut Neven's reference to "parallel universes" was not an assertion of proof but a shorthand for describing interference effects that defy classical intuition. It is what he believes was happening, and that opinion carries weight because he works with quantum computers every day.

When quantum behavior became experimentally measurable in superconducting circuits that were large enough to photograph, the Everett question ('Are we branching universes or sampling probabilities?') stopped being rhetorical. The debate moved from thought experiment to instrument design. Engineers now face what philosophers only imagined: how to measure, stabilize, and interpret outcomes that occur across many possible worlds and never converge on a single, deterministic path.

For the law, the relevance lies not in metaphysics but in method. Whether the universe splits or probabilities collapse, the data these machines produce are inherently probabilistic: repeatable only within margins, each time with a slight accent. The courtroom analog to wave-function collapse is the evidentiary demand for reproducibility. If the physics no longer promises identical outputs, the law must decide what counts as reliable sameness: echoes with an accent.

That shift from metaphysics to methodology is the lawyer's version of a measurement problem. It's not about believing in the multiverse. It's about learning how to authenticate evidence that depends on it.

A vibrant abstract representation of quantum physics, featuring concentric circles and spheres radiating in a spectrum of colors, symbolizing subatomic particles and quantum behavior.
Repeatable measurements through parallel universes to explain quantum computer calculations. Crazy but true?

🔹 The Law Listens: Authenticating Echoes in Practice

If each quantum record is an echo, the law's task is to decide which echoes can be trusted. That requires method, not metaphysics. The legal system already has the tools: authentication, replication, expert testimony. But they need recalibration for an age when precision itself is probabilistic.

1. Authentication in context.
Under Rule 901(b)(9), evidence generated by a process or system must be shown to produce accurate results. In a quantum context, that showing might include the type of qubit, its error-correction protocol, calibration logs, environmental controls, and the precise code path that produced the output. The burden of proof doesn't change; only the evidentiary ingredients do.

2. Replication hearings.
In classical computing, replication is binary: either a hash matches, or it doesn't. In quantum systems, replication becomes statistical. The question is no longer "Can this be bit-for-bit identical?" but "Does this fall within the accepted variance?" Probabilistic systems demand statistical fidelity, not sameness. A replication hearing becomes a comparison of distributions, not exact strings of bits.
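The contrast can be sketched in code. The snippet below is a hedged illustration only, not any court-approved protocol: the 5% tolerance is an invented placeholder, and total variation distance is just one of several reasonable ways to compare two runs' measurement-outcome distributions.

```python
import hashlib
from collections import Counter

def classical_match(file_a: bytes, file_b: bytes) -> bool:
    # Classical replication: bit-for-bit identity, proven by hash equality.
    return hashlib.sha256(file_a).hexdigest() == hashlib.sha256(file_b).hexdigest()

def total_variation_distance(run_a: list, run_b: list) -> float:
    # Statistical replication: compare the distributions of measurement
    # outcomes (e.g., qubit readout bitstrings) observed in two runs.
    pa, pb = Counter(run_a), Counter(run_b)
    na, nb = len(run_a), len(run_b)
    outcomes = set(pa) | set(pb)
    return 0.5 * sum(abs(pa[o] / na - pb[o] / nb) for o in outcomes)

def statistically_replicates(run_a, run_b, tolerance=0.05) -> bool:
    # "Does this fall within the accepted variance?" as a yes/no test.
    return total_variation_distance(run_a, run_b) <= tolerance
```

Here two identical outcome distributions give a distance of zero, while two noisy runs "replicate" so long as their distributions differ by no more than the stipulated tolerance, which is exactly the comparison-of-distributions framing described above.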

Similar logic already guides quantum sensing and metrology, where entanglement and superposition improve precision in measuring magnetic fields, time, and gravitational effects. See Quantum sensing and metrology for fundamental physics (NSF, 2024); Review of qubit-based quantum sensing (Springer, 2025); Advances in multiparameter quantum sensing and metrology (arXiv, 2/24/25); Collective quantum enhancement in critical quantum sensing (Nature, 2/22/25). Those readings vary from one run to the next, yet the variance itself confirms the physicsโ€”each measurement is a statistically faithful echo of the same underlying reality. The variances are within a statistically acceptable range of error.

An abstract illustration showing a silhouette of a person standing next to a swirling vortex surrounded by circular shapes and geometric lines, representing concepts of quantum mechanics and the multiverse.
Each measurement is slightly different, but all are similar enough to be statistically faithful echoes of the same underlying reality.

🔹 Two Examples from the Quantum Frontier

1. Quantum Chemistry In Practice.

One of the most mature quantum applications today is the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm used to estimate the ground-state energy of molecules. See, The Variational Quantum Eigensolver: A review of methods and best practices (Phys. Rep., 2023); Greedy gradient-free adaptive variational quantum algorithms on a noisy intermediate scale quantum computer (Nature, 5/28/25). Also see, Distributed Implementation of Variational Quantum Eigensolver to Solve QUBO Problems (arXiv, 8/27/25); How Does Variational Quantum Eigensolver Simulate Molecules? (Quantum Tech Explained, YouTube video, Sept. 2025).

VQE researchers routinely run the same circuit hundreds of times; each iteration yields slightly different energy readings because of noise, calibration drift, and quantum fluctuations. Yet the outputs consistently cluster around a stable baseline, confirming both the accuracy of the physical model and the reliability of the machine itself.
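A minimal sketch of that clustering logic, under stated assumptions: the noise is modeled as Gaussian, and all the numbers (including the roughly -1.137 hartree baseline, chosen because it is in the range often quoted for the hydrogen molecule in VQE demonstrations) are illustrative, not taken from any cited experiment.

```python
import random
import statistics

def simulate_vqe_runs(true_energy=-1.137, noise_sd=0.004, runs=200, seed=7):
    # Hypothetical stand-in for repeated VQE energy estimates: the true
    # ground-state energy plus Gaussian noise from drift and sampling error.
    rng = random.Random(seed)
    return [rng.gauss(true_energy, noise_sd) for _ in range(runs)]

def within_accepted_margin(readings, baseline, margin):
    # The evidentiary question from the text: does the mean estimate
    # fall within a scientifically accepted margin of the baseline?
    mean = statistics.fmean(readings)
    stderr = statistics.stdev(readings) / len(readings) ** 0.5
    return abs(mean - baseline) <= margin, mean, stderr

readings = simulate_vqe_runs()
ok, mean, stderr = within_accepted_margin(readings, baseline=-1.137, margin=0.002)
```

No single reading equals the baseline, yet the mean of the run clusters tightly around it, which is the sense in which non-identical outputs can still confirm the reliability of the machine.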

Now picture a pharmaceutical patent dispute where one party submits quantum-derived binding data for a new molecule. The opposing side demands replication. A court applying Rule 702 may not expect identical numbers, but it could require expert testimony showing that results consistently fall within a scientifically accepted margin of error. If they do, that should become a legally sufficient echo.

This is reminiscent of prior e-discovery disputes concerning the use of AI to find relevant documents. Courts have accepted that perfection, such as 100% recall, is never required, but reasonable efforts are. Judge Andrew Peck, Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016). This also follows the official commentary to Rule 702 on expert testimony, where "perfection is not required." Fed. R. Evid. 702, Advisory Committee Note to 2023 Amendment.

The reasonable efforts can be proven by numerics and testimony. See, for instance, my writings in the TAR Course: Fifteenth Class - Step Seven - ZEN Quality Assurance Tests (e-Discovery Team, 2015) (Zero Error Numerics); ei-Recall (e-Discovery Team, 2015); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May 2024).

An illustration emphasizing the phrase 'Reasonable efforts required, not perfection,' featuring a checklist with a checkmark, scales of justice, and a prohibition symbol.
There is no perfect case, evidence or efforts. In reality, ‘perfect is the enemy of the good.’

2. Quantum-Secure Archives.

As quantum computing and quantum cryptography advance, most (but not all) of today's encryption will become obsolete. This means the vast amount of encrypted data stored in corporate and governmental archives, maintained for regulatory, evidentiary, and operational purposes, may soon be an open book to attackers. Yes, you should be concerned.

Rich DuBose and Mohan Rao, Harvest now, decrypt later: Why today's encrypted data isn't safe forever (HashiCorp, May 21, 2025), explain:

Most of today's encryption relies on mathematical problems that classical computers can't solve efficiently, like factoring large numbers, which is the foundation of the Rivest-Shamir-Adleman (RSA) algorithm, or solving discrete logarithms, which are used in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). Quantum computers, however, could solve these problems rapidly using specialized techniques such as Shor's Algorithm, making these widely used encryption methods vulnerable in a post-quantum world.

Also see, Dan Kent, Quantum-Safe Cryptography: The Time to Start Is Now (GovTech, 4/30/25), and Amit Katwala, The Quantum Apocalypse Is Coming. Be Very Afraid (Wired, Mar. 24, 2025), warning that cybersecurity analysts already call this future inflection point Q-Day: the day a quantum computer can crack the most widely used encryption. As Katwala writes:

On Q-Day, everything could become vulnerable, for everyone: emails, text messages, anonymous posts, location histories, bitcoin wallets, police reports, hospital records, power stations, the entire global financial system.

Most responsible organizations with large archives of sensitive data have been preparing for Q-Day for years. So too have those on the other side (nation-states, intelligence services, and organized criminal groups) who are already harvesting encrypted troves today to decrypt later. See, Roger Grimes, Cryptography Apocalypse: Preparing for the Day When Quantum Computing Breaks Today's Crypto (Wiley, 2019). The race for quantum supremacy is on.
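Why does factoring break RSA? The vulnerability comes down to period finding. As a toy illustration only (real RSA moduli are thousands of bits long, and the brute-force order search below is precisely the exponentially hard step that Shor's algorithm replaces with an efficient quantum subroutine), here is the classical skeleton of the attack on a tiny modulus:

```python
from math import gcd

def multiplicative_order(a: int, n: int) -> int:
    # Brute-force the order r of a mod n: the smallest r with a^r = 1 (mod n).
    # Finding r is the step a quantum computer running Shor's algorithm
    # performs exponentially faster via the quantum Fourier transform.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_period(n: int, a: int):
    # Shor's classical post-processing: an even order r with
    # a^(r/2) != -1 (mod n) yields nontrivial factors of n.
    r = multiplicative_order(a, n)
    if r % 2 == 1:
        return None
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None
    return gcd(half - 1, n), gcd(half + 1, n)
```

For example, factor_via_period(15, 7) recovers the factors 3 and 5 of the RSA-style modulus 15. The same arithmetic, run on a quantum period finder against a 2048-bit modulus, is what Q-Day threatens.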

Now imagine a company that migrates its document-management system to post-quantum cryptography in 2026. A year later, a breach investigation surfaces files whose verification depends on hybrid key-exchange algorithms and certificate chains. The plaintiff calls them anomalies; the defense calls them echoes. The court won't choose sides by theory; it will follow the evidence, the logs, and the math.

An artistic representation of an hourglass with celestial spheres and swirling galaxies, symbolizing the concept of time and the multiverse in quantum physics.
The metrics are what should matter, not the many theories.

🔹 Building the Quantum Record

Judicial findings and transparency. Courts can adapt existing frameworks rather than invent new ones. A short findings order could document:
(a) authentication steps taken;
(b) observed variance;
(c) expert consensus on reliability; and
(d) scope limits of admissibility.
Such transparency builds a common-law record, the first body of quantum-forensic precedent. I predict it will be coming soon to a universe near you!

Chain of custody for the probabilistic age. Future evidence protocols may pair traditional logs with variance ranges, confidence intervals, and error budgets. Discovery rules could require disclosure of device calibration history, firmware versions, and known noise parameters. The data once confined to labs will become essential for authentication.
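One way to picture such a protocol is a structured record. The schema below is purely hypothetical: every field name and the tolerance rule are my own invention for illustration, not any proposed standard. It simply pairs a traditional custody log with the statistical disclosures discussed above.

```python
from dataclasses import dataclass, field

@dataclass
class QuantumEvidenceRecord:
    # Hypothetical chain-of-custody entry for the probabilistic age:
    # traditional provenance fields plus disclosed statistical bounds.
    device_id: str
    firmware_version: str
    calibration_log: list = field(default_factory=list)
    observed_value: float = 0.0
    confidence_interval: tuple = (0.0, 0.0)
    error_budget: float = 0.0

    def supports_claim(self, claimed_value: float) -> bool:
        # A claimed result is supportable when it falls inside the disclosed
        # confidence interval, and that interval is no wider than the
        # device's stated error budget allows.
        lo, hi = self.confidence_interval
        return lo <= claimed_value <= hi and (hi - lo) <= 2 * self.error_budget
```

In discovery terms, the record makes the authentication question mechanical: a proponent who disclosed the calibration history, interval, and error budget up front can show a claimed value is a faithful echo rather than an anomaly.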

The law doesn't need new virtues for quantum evidence; it needs old ones refined. Transparency, documentation, and replication remain the gold standard. What changes is the expectation of sameness. The goal is no longer perfect duplication, but faithful resonance: the trusted echo that still carries truth through uncertainty.

An artistic depiction of a swirling vortex, featuring an hourglass shape with vibrant colors, symbolizing the concept of multiverses and quantum physics. Small planets are depicted within the flow, representing various realities branching out from a central point of light.
Metrics carry the truth through uncertainty.

🔹 Conclusion: The Sound of Evidence

The Nobel Committee rang the bell. Google's engineers added instruments. Labs in Sydney and elsewhere wired new rooms together. The rest of us (lawyers, paralegals, judges, legal-techs, investigators) must learn how to listen for echoes without hearing ghosts. That means resisting hype, insisting on method, and updating our checklists to match what the devices actually do.

Eight months ago in Quantum Leap, I described a canyon where a single strike of an impossible calculation set the walls humming. This time, the sound came from Stockholm. If the next echo is from quantum evidence in your courtroom, perhaps as a motion in limine over non-identical logs, don't panic. Listen for the rhythm beneath the noise. The law's task is to hear the pattern, not silence the world.

Science, like law, advances by listening closely to what reality whispers back. The Nobel Committee just honored three physicists for demonstrating that quantum behavior can be engineered, measured, and replicated, its fingerprints recorded even when the phenomenon itself remains invisible. Their achievement marks a shift from theory to tested evidence, a shift the courts will soon confront as well.

When engineers speak of quantum advantage, they mean a moment when machines perform tasks that classical systems cannot. The legal system will have its own version: a time when quantum-derived outputs begin to appear in contracts, forensic analysis, and evidentiary records. The challenge will not be cosmic. It will be procedural. How do you test, authenticate, and trust results that vary within the bounds of physics itself?

The answer, as always, lies in method. Law does not require perfection; it requires transparency and proof of process. When the next Daubert hearing concerns a quantum model rather than a mass spectrometer, the same questions will apply: Was the procedure sound? Were the results reproducible within accepted error? Were the foundations laid? The physics may evolve, but the evidentiary logic remains timeless.

In the end, what matters is not whether the universe splits or probabilities collapse. What matters is whether we can recognize an honest echo when we hear oneโ€”and admit it into evidence.

An artistic representation of a cosmic hourglass surrounded by swirling galaxies and planets, symbolizing time, the universe, and the concept of the multiverse.
It is only a matter of time before quantum generated evidence seeks admission to your world.

🔹 Postscript.

Minutes before this article was published, Google announced an important new discovery called "Quantum ECHO." Yes, the same name as this article, which was written by Ralph Losey with no advance notice from Google of the discovery or its name. A spooky entanglement, perhaps? Ralph will publish a sequel soon that spells out what Google has done now. In the meantime, here is Google's announcement by Hartmut Neven and Vadim Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google, 10/22/25).

🔹 Subscribe and Learn More

If this exploration of Quantum Echoes and evidentiary method has sparked your curiosity, you can find much more at e-DiscoveryTeam.com, where I continue to write about artificial intelligence, quantum computing, evidence, e-discovery, and the future of law. Go there to subscribe and receive email notices of new blogs, upcoming courses, and special events, including an online course with the working title 'Quantum Law: From Entanglement to Evidence' that will expand on the ideas introduced here. It will discuss how quantum physics and AI converge in the practice of law, from authentication and reliability to discovery and expert testimony.

That program will be followed by two other, longer online courses that are also near completion:

  • Beginner "GPT-4 Level" Prompt Engineering for Legal Professionals, a practical foundation in AI literacy and applied reasoning.
  • Advanced "GPT-5 Level" Prompt Engineering for Legal Professionals, an in-depth study of prompt design, model evaluation, and AI ethics.

All courses are part of my continuing effort to help the legal profession adapt responsibly to the next wave of technology, with integrity, experience, and whatever wisdom I may have accidentally gathered from a long life on Earth.

A contemplative figure stands in a futuristic hallway lined with framed portals, each leading to different cosmic landscapes, while a bright light emanates from above.
Ralph looking back on the many worlds of technology he has been in. What a long, strange trip it's been.

Subscribe at e-DiscoveryTeam.com for notices of new articles, course announcements, and research updates.

Because the future of law won't be written by those who fear new tools, but by those who understand the evidence they produce.


Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.

© 2025 Ralph C. Losey. All rights reserved.



Epiphanies or Illusions? Testing AIโ€™s Ability to Find Real Knowledge Patterns – Part Two

August 9, 2025

Ralph Losey. August 9, 2025.

The moment of truth had arrived. Were ChatGPT's insights genuine epiphanies, valuable new connections across knowledge domains with real practical and theoretical implications, or were they merely convincing illusions? Had the AI genuinely expanded human understanding, or had it merely produced patterns that seemed insightful but were ultimately empty?

Fortunately, the story I began in Part One has a happy ending. All five of the new patterns claimed to have been found were amazing and, for the most part, valid: a moment of happiness at Losey.ai. Part Two now shares this good news, describing both the strengths and limitations of these discoveries. To bring these insights vividly to life, I also created fourteen new moving images (videos) illustrating the discoveries detailed in Part Two.

Celebrate then back to work. Video by Losey’s AIs.

ChatGPT4o’s Initial Finding of Five New Patterns

Here are the five new cross-disciplinary patterns that the AI generated in response to my final “do it” prompt:

  • Judicial Linguistic Style and Outcome Bias: Judges with more narrative or metaphorical language styles are more likely to rule empathetically in civil matters. This insight could shape legal training and judicial evaluations.
  • Quantum Ethics Drift: Recent shifts in privacy discourse correlate with spikes in quantum research funding, suggesting that ethical reflection responds dynamically to perceived technological risk.
  • Aesthetic-Trust Feedback Loop: Digital art styles embracing transparency and abstraction rise in popularity during periods of high public skepticism toward tech companies. Art, it seems, mirrors trust.
  • Topological Jurisprudence: Mathematical topology's network-based models align with emerging legal theories of distributed liability, useful for understanding platform accountability and blockchain disputes.
  • Generative AI and Civic Discourse Decay: As AI content proliferates, public engagement with nuanced, long-form discourse is measurably declining.

In the words of one of my AI bots: These are not just patterns; they are knowledge-generating revelations with practical and philosophical implications.

New patterns emerging. Video by Losey using Sora AI.

Two of the five new insights pertained to the law, which is my domain of expertise, but even so, I had never thought of these before, nor ever read anyone else discussing them. All five claimed insights were new to me, but all had the ring of truth. Also, all seemed like they might be somewhat useful, with both "practical and philosophical implications."

But since I had never considered any of this before, I had limited knowledge of how useful they might be, or whether it was all fictitious, mere AI apophenia. Still, I doubted that, because the insights were all in accord with my long life experience. Moreover, they seemed intuitively correct to me, but, at the same time, I realized John Nash might have felt the same way (click to watch a great scene in the movie A Beautiful Mind). So, I spent days of QC work thereafter, with extensive human and AI research, to calmly evaluate the claims and see what foundational precedent, if any, lay behind my feeling of "just knowing something," as the movie puts it.

Analysis of All Five Claims

Video by Losey using Sora AI.

Judicial Language and Empathetic Outcomes

Textual analysis suggests that judges who use more narrative or metaphorical language may be more likely to issue empathetic rulings in civil cases. This correlation, while not causal, could reflect underlying judicial temperament and offers a potential tool for legal scholarship and training.

As ChatGPT 4o explained, GPT-driven textual analysis of thousands of court opinions reveals a subtle but statistically significant correlation: judges who employ more metaphor, allegory, and narrative framing in their opinions tend to reach more empathetic rulings in civil cases, particularly in matters involving individual rights, employment, or family law. GPT 4o considers this to be its strongest claim.
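To make the kind of analysis described above concrete, here is a minimal sketch of how one might test for a correlation between narrative language and empathetic outcomes. This is not GPT 4o's actual method: the marker list, the scoring rule, and the four-opinion toy corpus are all invented for illustration.

```python
# Hypothetical sketch: score each opinion for narrative/metaphor markers,
# then correlate marker density with a 0/1 "empathetic outcome" label.
import math

# Invented marker list; a real study would need a validated lexicon.
NARRATIVE_MARKERS = {"story", "journey", "imagine", "like a", "as if", "poor"}

def marker_density(text: str) -> float:
    """Crude proxy: narrative-marker hits per 100 words."""
    lowered = text.lower()
    hits = sum(lowered.count(m) for m in NARRATIVE_MARKERS)
    return 100.0 * hits / max(len(lowered.split()), 1)

def point_biserial(densities, outcomes):
    """Correlation between a continuous score and a binary outcome."""
    n = len(densities)
    ones = [d for d, o in zip(densities, outcomes) if o == 1]
    zeros = [d for d, o in zip(densities, outcomes) if o == 0]
    mean = sum(densities) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in densities) / n)
    p = len(ones) / n
    return ((sum(ones) / len(ones) - sum(zeros) / len(zeros)) / sd) * math.sqrt(p * (1 - p))

# Toy corpus: two "narrative" opinions ruled empathetically (1), two terse ones did not (0).
opinions = [
    ("Imagine the journey this family endured, like a storm without shelter.", 1),
    ("The plaintiff's story is one of loss; as if the system itself looked away.", 1),
    ("Summary judgment granted. No genuine dispute of material fact.", 0),
    ("The motion is denied as untimely under Rule 6.", 0),
]
d = [marker_density(text) for text, _ in opinions]
o = [label for _, label in opinions]
print(round(point_biserial(d, o), 2))  # strong positive correlation on this toy data
```

On real data the effect, if present, would of course be far weaker, and controlling for confounders (case type, circuit, era) would be the hard part.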

It admits this correlation does not imply causation but may reflect underlying judicial temperament or philosophical orientation. My own experience as a practicing litigator strongly supports this claim.

Empathic rulings are well framed by story. Video by Losey.

GPT o3 disagreed on the top ranking of the claim but did concede that judges whose written opinions use a higher density of narrative, metaphor, or "story-telling" devices tend to rule for the more sympathetic party slightly more often than their peers.

GPT o3-pro, after research, cited Justice Blackmun's dissent in DeShaney v. Winnebago, 489 U.S. 189, 212 (1989), a constitutional due-process case, to illustrate "civil" empathy in an opinion supporting this claim. Justice Harry Blackmun's dissent begins with these famous words and then quickly moves to his narrative:

Today, the Court purports to be the dispassionate oracle of the law, unmoved by “natural sympathy.” Ante at 489 U. S. 202. But, in this pretense, the Court itself retreats into a sterile formalism which prevents it from recognizing either the facts of the case before it or the legal norms that should apply to those facts. . . .

Poor Joshua! Victim of repeated attacks by an irresponsible, bullying, cowardly, and intemperate father, and abandoned by respondents, who placed him in a dangerous predicament and who knew or learned what was going on, and yet did essentially nothing except, as the Court revealingly observes, ante at 489 U. S. 193, "dutifully recorded these incidents in [their] files." It is a sad commentary upon American life, and constitutional principles — so full of late of patriotic fervor and proud proclamations about "liberty and justice for all," that this child, Joshua DeShaney, now is assigned to live out the remainder of his life profoundly retarded. Joshua and his mother, as petitioners here, deserve — but now are denied by this Court — the opportunity to have the facts of their case considered in the light of the constitutional protection that 42 U.S.C. § 1983 is meant to provide.

Story telling is an effective method of plaintiff appeal. Video by Losey.

Gemini Pro 2.5 considers this the strongest empirical claim, and second strongest claim overall, behind everyone’s first place pick, Topology, explaining:

It is highly testable, as court opinions and case outcomes are public records that can be analyzed with modern NLP tools. The underlying theory, that a person's mindset is reflected in their language, is well-established in psychology and linguistics. While a judge's underlying philosophy is a potential confounding variable, the claim smartly reframes language as a direct proxy for that philosophy, making the link very sturdy. It's a straightforward, data-driven proposition that quantifies a long-held belief about the nature of justice.

Ethical Response to Quantum Innovation

Evidence shows that increases in quantum research funding often precede surges in ethical discourse on privacy and civil liberties. This pattern suggests that ethical reflection tends to respond to perceived technological risk, particularly in fields with high uncertainty like quantum computing. It is not a claim of causation, but rather of a correlation, one not detected before. With that clarification, GPT 4o considers this one of its strongest claims.

Gemini Pro 2.5 finds the claim of a lead-lag relationship between quantum research funding and public ethics discourse to be a weak claim. It admits the claim is based on a plausible idea of "anticipatory ethics," and is testable because you can track funding and publications over time. Still, it interprets the claim as one of causation, not just correlation, and rejects it for that reason. It seems the two AIs are talking past each other.

GPT 4.5 agreed with 4o and also considers this to be a strong claim. GPT 4.5 restates it as: "Increases in quantum computing funding consistently precede intensified ethical discourse on privacy and civil liberties, suggesting ethical awareness responds predictably, though indirectly, to technological advances."

GPT o3 and o3-pro also agreed with GPT 4o and found, in o3-pro’s words, that:

Large surges in public or private funding for quantum-computing research are followed, typically within six to twenty-four months, by measurable increases in academic and policy discussions of quantum-specific privacy and civil-liberties risks. The correlation is clear, but causation remains to be fully demonstrated.
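The lead-lag pattern o3-pro describes can be sketched with a simple cross-correlation test: shift one time series against the other and find the lag with the highest correlation. This is a hypothetical illustration, not any model's actual analysis; both quarterly series below are invented, and a real study would need actual funding and publication data.

```python
# Hedged sketch of a lead-lag test on two synthetic quarterly series.
def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def best_lag(funding, discourse, max_lag):
    """Lag (in periods) at which funding best predicts later discourse."""
    scores = {}
    for lag in range(1, max_lag + 1):
        # Compare funding at time t with discourse at time t + lag.
        scores[lag] = pearson(funding[:-lag], discourse[lag:])
    return max(scores, key=scores.get)

# Invented quarterly data where discourse echoes funding two quarters later.
funding   = [1, 1, 5, 5, 1, 1, 6, 6, 1, 1, 7, 7]
discourse = [0, 0, 1, 1, 5, 5, 1, 1, 6, 6, 1, 1]
print(best_lag(funding, discourse, max_lag=4))  # 2 (quarters)
```

A serious version would also need a significance test, since with enough candidate lags some correlation will always look "best."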


Quantum triggered protestors. Video by Ralph Losey.

Artistic Transparency and Tech Trust

This is a claim that art mirrors distrust in tech, that periods of declining public trust in technology frequently coincide with rising popularity of digital art styles emphasizing transparency and abstraction. While the causality is unclear, this aesthetic shift may reflect cultural efforts to visualize openness and regain clarity. GPT 4o considers this its weakest claim.

So too does Gemini Pro 2.5. Although it admits the claim is a beautiful and creative piece of cultural criticism, it opines that it is almost impossible to test or falsify.

Moreover, Pro 2.5 thinks the claim is highly susceptible to confirmation bias and seeing patterns where none exist (apophenia). Still, it tempers this opinion by stating that if this claim is presented not as a confirmed causal law, but as a heuristic model for cultural analysis, then it appears to be supported by correlational data. Periods of heightened public skepticism toward opaque technological systems (e.g., algorithmic black boxes, corporate data collection) do correlate with an increased cultural resonance of digital art and design that emphasizes an "aesthetic of transparency." This aesthetic includes motifs like wireframes, exploded-view diagrams, data visualization, and semi-translucent layers.

To avoid apophenia, Pro 2.5 counsels understanding that the claim is not that tech skepticism causes this art style. Instead, the claim is only that this aesthetic becomes a resonant cultural metaphor that artists and audiences are drawn to during such times, because it offers a symbolic counterbalance to the anxieties of opacity and control. Still, it ranked this the weakest claim.

Encrypted Original for sale, ₿1.0. Exclusive rights, Ralph Losey. Video copy.

Topological Jurisprudence and Network Liability

This interdisciplinary convergence provides a new topological framework for analyzing disputes involving complex computer networks and other multiparty, multi-agent technology disputes. The flexible, continuously morphing topological maps are well suited for evaluating potential liability paths. They are designed to handle high volumes of changing data flows, such as blockchain transaction data or telemetry data reports. These situations are too complex and mobile for traditional, hierarchical branch tree structures.

For good background on this field of applied mathematics, see the Wikipedia article on topological data analysis (TDA). TDA structures work well to help us visualize and sort things out in multidimensional space, where connections are shown and stretched, but not broken.
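As a loose illustration of the network idea (not Burchardt's method, nor anything a court has adopted), one could model the actors in a decentralized dispute as a graph and score each actor by how many connections among the others depend on it. Everything below, the actors, the links, and the scoring rule, is hypothetical.

```python
# Hypothetical sketch: score each actor in a dispute network by how many
# other-actor connections break if that actor is removed from the graph.
from itertools import combinations

def reachable(adj, start, skip=None):
    """Nodes reachable from `start`, optionally pretending `skip` is removed."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in adj.get(stack.pop(), ()):
            if nxt != skip and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def criticality(adj, node):
    """Count of other-node pairs whose connection depends on `node`."""
    others = [n for n in adj if n != node]
    broken = 0
    for a, b in combinations(others, 2):
        if b in reachable(adj, a) and b not in reachable(adj, a, skip=node):
            broken += 1
    return broken

# Invented dispute: users transact through an exchange built on a protocol
# that is reviewed by an auditor. Edges are listed in both directions.
adj = {
    "user_a":   ["exchange"],
    "user_b":   ["exchange"],
    "exchange": ["user_a", "user_b", "protocol"],
    "protocol": ["exchange", "auditor"],
    "auditor":  ["protocol"],
}
scores = {n: criticality(adj, n) for n in adj}
print(max(scores, key=scores.get))  # the structurally most critical actor
```

The point is not that a centrality score assigns blame; it is that the graph makes the "full web of interactions" visible, which is exactly what the tree-structure view hides.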

All of the AI models agree that this is the strongest claim. So do I, although honestly, I don’t fully understand data topology and TDA – not yet.


GPT o3 explains that networkโ€‘based (โ€œtopologicalโ€) mapping of actors in decentralized technologies reliably clarifies where legal responsibility can attach when something goes wrong. 

Using topological math to help assign blame. Video by Losey.

GPT o3-pro concludes this is a fertile area for interdisciplinary development that could significantly augment our understanding of distributed liability. Topology geometry can help, as o3 pro put it, by ensuring the judge considers the full web of interactions that lead to damages, rather than looking for a single entity. As complex, changing damage scenarios become more common, this structural insight is likely to prove invaluable in crafting fair and effective liability rulings.

GPT 4.5 also agrees that mathematical topology offers promising analytical frameworks for modeling distributed liability in decentralized technologies.

Gemini Pro 2.5 opined that mathematical models of networks (topology) perfectly map onto new legal theories of shared responsibility. It considers the idea a breakthrough for legal clarity, offering a clear, visual blueprint for figuring out “who is responsible” in our increasingly complex and decentralized world. It also considers this the strongest claim, which it explains in its typical verbose manner as follows:

This claim is the strongest because it is not an empirical assertion about the world, but rather a proposal for a new, powerful analytical tool. Its strength lies in its logical coherence and immediate utility. The analogy between mathematical networks and real-world systems of distributed liability (like AI or blockchain) is incredibly direct and sound. It doesn’t suffer from confounding data because it’s a framework for analyzing data, not a conclusion drawn from it. Its value is in its power to bring clarity to complex problems, making it the most robust and defensible claim of the five.

Constantly changing network topology map video by Losey.

In doing the verification research, GPT 4.5, using deep search, found one article on point, written by Dana Burchardt. Her law studies were in Paris, with a later doctorate from the Freie Universität in Berlin. She is now a visiting law professor at the University of Bremen and an expert in international and German law. She has an unusual interdisciplinary background, including time as a senior research fellow at the Max Planck Institute. The article is: The concept of legal space: A topological approach to addressing multiple legalities (Cambridge U. Press, 2022).

The article is concerned with topological mapping of legal spaces in general. It has nothing to do with liability detection among multiple defendants in networking configurations and is instead concerned with international law and EU related issues. So, the newness claim of ChatGPT 4o is supported. Burchardt's general explanations of topological analysis also support the soundness of GPT 4o's claim that this is indeed a new patterning between topological geometry and the law. Professor Burchardt's work both shows the solid grounding of the claim and supports its top ranking as a significant new insight. Burchardt's article is a hard read, but here are some of the explanations and sections that are very relevant and accessible (found at pages 528, 532, and 534).

Topologyโ€™s guiding ideas.
At first glance, topology is a mathematical concept that seems far removed from legal theoretical discussions. As will be explained further below, it is a tool to analyse mathematical objects. Yet upon a closer look, topology provides many insights that can constitute a fruitful basis for conceptualizing legal phenomena. To link these insights to the notion of legal space, this section outlines relevant aspects of the mathematical notion to which the subsequent sections relate. [pg. 528]

Video by Losey illustrating a topological map with dynamic network connections.

Constructing a topological understanding of legal space.
I propose a possible way in which a topological perspective can contribute to constructing a concept of legal space that is able to generate novel analytical insights. I consider such insights for the inner structure of legal spaces, the boundaries of these spaces and the interrelations with other spaces. [pg. 532]

A topological approach allows each element of the space to have a broad range of interrelations with the other elements of the same space (see Figure 3 above). The elements are thus not limited to interrelations along tree-like structures, which would only allow for very few interrelations per element as tree-like structures only allow one path between elements. . . . Instead, the interrelations within the legal space are numerous. An element can be linked to another element by more than one path. It can be linked directly and/or via intermediate elements. An example of the latter is two rules being interpreted in light of the same principle: there is a communicative path from the first rule via the principle to the second rule. Representing such interrelations as a topology with manifold paths allows us to capture the heterarchical nature of many legal interrelations. Further, it illustrates that interrelations among legal elements are flexible rather than static: the interrelating paths among elements can vary while preserving the connection. [pg. 534]

Using topological approaches may help future judges assign proportional blame in complex changing systems. Video by Losey.

AI and Declining Civic Discourse

Widespread use of generative AI may cause reduced engagement in long-form, thoughtful public discourse. The trend raises concerns for educators and civic leaders about sustaining meaningful dialogue in the digital age. GPT 4o ranks this claim highly. The other AIs are doubtful, considering it one of the weakest.

GPT o3 prefers to restate the claim to make it more palatable, as follows: The proliferation of generative AI content online correlates with reduced engagement in nuanced, long-form public discussions, indicating generative AI likely contributes to diminished discourse quality. It is kind of hard to disagree with that, but the AIs other than GPT 4o still don't like it, again, it appears, out of concern about conflating correlation and causation. I've seen a lot of discussion lately from people making similar observations about AI degrading content, and I am inclined to agree. Maybe this is not a new claim, but it seems valid, although admittedly proof of causation is unlikely and the apophenia risk is high.

GPT o3 also makes the separate critical point that "well-prompted AI can sometimes raise, not lower, discussion quality." I'm inclined to agree with that too, but how often do we see positive prompt masters at work? We usually see clumsy, well-meaning amateurs, or, far worse, bad-faith professionals, people paid to run propaganda machines, sales pitches, or human vendettas of one kind or another. Their vicious personal attacks and name-calling can kill civil discourse fast, even though they are often childish and obviously false.

Evil controlled AI propaganda. Video by Losey.

GPT o3 pro made a good restatement of this claim worth considering:

The widespread use of generative AI (e.g., AI chatbots producing content) correlates with a decline in the quality of online civic discourse, specifically a reduction in long-form, nuanced discussion in forums, comment sections, and other public discourse venues. Essentially, as AI-generated content proliferates, human engagement shifts toward shorter, less substantive interactions, potentially because AI content floods the space with superficial text or because people's habits change (relying on AI summaries, etc.), leading to "discourse decay."

Early evidence from online communities indicates that the influx of AI-generated content does pose challenges to the depth and quality of discussion. One strong piece of evidence is how moderators on platforms like Reddit have responded. A recent study of Reddit moderators found widespread "concerns about content quality" with the rise of AI-generated text in their communities. Moderators observed that AI-produced comments and posts tend to be "poorly written, inaccurate, and off-topic," threatening to reduce the overall quality of content. They also feared that the "inauthenticity" of such content undermines genuine human connection in discussions.

GPT o3 pro also states:

This pattern is useful as an early warning: it underscores the need for community guidelines, AI-detection tools, and perhaps cultural shifts that re-emphasize human authenticity and depth in conversation. However, it would be too deterministic to declare that generative AI will inevitably cause discourse to collapse into soundbites. The pattern is emergent, and its trajectory depends on how we manage the technology. . . .

In conclusion, the "generative AI → discourse decay" pattern holds true in enough instances to merit serious concern and action. Its credibility is bolstered by early studies and community feedback, though more data over time will clarify its magnitude. As a society, we can use this insight to balance the benefits of generative AI with safeguards that preserve the richness of human-to-human dialogue, ensuring that technology amplifies rather than erodes the public square.

Still, GPT o3 pro ranked this claim the weakest, which for me shows just how strong all five of the claims are.

Five Claims video by Losey using Sora AI.

Conclusion: From Apophenia to Understanding

ChatGPT 4o did a far better job than expected. The quest for new patterns linking different fields of knowledge seems to have avoided Quixotic extremes. I am pretty sure that only mild forms of apophenia appeared, much like seeing puffy faces in the clouds. Time will tell whether the predictions that flow from these five claims will come true or drift away like a cloud.

Will topological analysis become a common tool for resolving complex network liability disputes? Will analysis of your judge's prior language patterns become a common practice in litigation? Will advances in quantum computers continue to trigger public fears of loss of privacy and liberty six to twenty-four months later? Will AI-influenced discourse continue to erode civic discussion and disrupt real interpersonal communication? Will digital art continue to echo public distrust of technology and evoke an aesthetic of transparency? Will someone buy my certified original art, shown here for the first time, for just one bitcoin? Will more grilled cheese sandwiches with holy figures sell on eBay? Will some of our public figures follow John Nash down the rabbit hole of severe apophenia and be involuntarily hospitalized with completely debilitating paranoid schizophrenia?

No one knows for sure. AI is not a seer, nor can it reliably predict the market for grilled cheese sandwiches or the mental stability of our public figures. It is, however, a powerful tool for exploring complex questions and discovering patterns, whether profound epiphanies or mere illusions. As my experiment suggests, AI can impressively illuminate new insights across fields of knowledge when guided thoughtfully and cautiously. Still, these are early days in the age of generative AI. A new world of potential awaits us, both serious and playful, and it's up to us to ensure it's wiser, more discerning, and perhaps even more amusing than the one we've made before.

Five new patterns of knowledge may lead to wisdom. Video by Ralph Losey using Sora.

Epiphanies or illusions? My experiments suggest that AI, when guided thoughtfully and validated rigorously, can lead us toward genuine epiphanies, significant breakthroughs that deepen our understanding and open new pathways across different domains of knowledge. Yet, we must remain alert to the risk of illusions, plausible yet ultimately false patterns that can distract or mislead us. The journey toward genuine insight and wisdom involves constant vigilance to distinguish these true discoveries from compelling yet false connections.

I invite you, the reader, to join this new quest. Engage with AI to explore your areas of interest and passion. Challenge the boundaries of existing knowledge, actively test AIโ€™s pattern-recognition abilities, and remain critically aware of its limitations. By actively distinguishing genuine epiphanies from tempting illusions, you may discover new insights and fresh perspectives that advance not only your understanding but contribute meaningfully to our collective wisdom.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI's Ability to Find Real Knowledge Patterns, Part Two. Hear the young AIs talk about this article for 15 minutes. They wrote the podcast, not me.

Illustration of two animated podcasters discussing the topic 'Epiphanies or Illusions? Testing AI's Ability to Find Real Knowledge Patterns. Part Two' on a digital background.

Ralph Losey Copyright 2025


Epiphanies or Illusions? Testing AI's Ability to Find Real Knowledge Patterns – Part One

August 4, 2025

Ralph Losey, August 4, 2025.

Humans are inherently pattern-seeking creatures. Our ancestors depended upon recognizing recurring patterns in nature to survive and thrive, such as the changing of seasons, the migration of animals and the cycles of plant growth. This evolutionary advantage allowed early humans to anticipate danger, secure food sources, and adapt to ever-changing environments. Today, the recognition and interpretation of patterns remains a cornerstone of human intelligence, influencing how we learn, reason, and make decisions.

Pattern recognition is also at the core of artificial intelligence. In this article, I will test the ability of advanced AI, specifically ChatGPT, to uncover meaningful new patterns across different fields of knowledge. The goal is ambitious: to discover genuine epiphanies, true moments of insight that expand human understanding and open new doors of knowledge, while avoiding the pitfalls of apophenia, the human tendency to perceive illusions or false connections. This experiment probes an age-old tension: can AI reliably distinguish between genuine breakthroughs and compelling yet misleading illusions?

Video by Ralph Losey using SORA AI.

We will begin by exploring the risks of apophenia, understanding how this psychological tendency can mislead human and possibly AI perception. Throughout, videos created by AI will help illustrate key points and vividly communicate these ideas. There are twelve new videos in Part One and another fourteen in Part Two.

Are the patterns real? Video by Ralph Losey using SORA AI.

Apophenia: Avoiding the Pitfalls of False Patterns

We humans are masters of pattern detection, but we do have hindrances to this ability. Primary among them is our limited information and knowledge, but there is also our tendency to see patterns that are not there. We tend to assume the stirring we hear in the bushes is a tiger ready to pounce when really it is just the breeze. Evolution tends to favor this caution. So, although we can and frequently do miss real patterns and fail to recognize the underlying connections between things, we often make them up too.

Here it is hoped that AI will boost our abilities on both fronts. It will help us to uncover true new patterns, genuine epiphanies, moments where profound insights emerge clearly from the complexity of data. At the same time, AI may expose illusions, false connections we mistakenly believe are real due to our natural cognitive biases. Even though we have made great progress over the millennia in understanding the Universe, we still have a long way to go to see all of the patterns, to fully understand the Universe, and to free ourselves of superstitions and delusions. We are especially weak at seeing patterns intertwined across different fields of knowledge.

Apophenia is a tendency, pathological in its extreme forms, in which people think they see patterns that are not there and sometimes even hallucinate them. Most of the time when people see patterns, for instance, faces in the clouds, they know it cannot be real and there is no problem. But sometimes when people see other images, for instance, rocks on Mars that look like a face, or even images on toast, they delude themselves into believing all sorts of nonsense. For instance, the 10-year-old grilled cheese sandwich below, which supposedly bears the image of the Virgin Mary, sold on eBay to an online casino in 2004 for $28,000.

In a similar vein, some people suffering from apophenia are prone to posit meaning, causality, in unrelated random events. Sometimes the perception of a new pattern is a spark of genius that is later verified. Such recognitions can lead to great discoveries or detect real tigers in the bush. Epiphanies are rare but transformative moments, like Einstein's visualization at age 16 of chasing a beam of light, Newton's realization of gravity beneath the apple tree, or the insights behind Darwin's theory of evolution. They genuinely advance human understanding. Apophenia, by contrast, deceives with illusions, patterns that seem meaningful but lead nowhere.

It is probably more often the case that when people “see” new connections and then go on to act upon them with no attempts to verify, they are dead wrong. When that happens, psychologists call this apophenia, the tendency to see meaningful patterns where none exist. This can lead to strange and aberrant behaviors: burning of witches, superstitious cosmology theories, jumping at shadows, addiction to gambling.

Unfortunately, it is a natural human tendency to think you see meaningful patterns or connections in random or unrelated data. That is a major reason casinos make so much money from poor souls suffering from a form of apophenia called the Gambler’s Fallacy. Careful scientists look out for defects in their own thinking and guide their experiments accordingly.
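The Gambler's Fallacy is easy to demonstrate with a quick simulation: even after a streak of three tails, a fair coin is no more likely to come up heads. The sketch below uses synthetic flips, seeded for repeatability; the streak length and sample size are arbitrary choices for illustration.

```python
# Quick Monte Carlo check of the Gambler's Fallacy with a fair coin.
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

# Collect the flip immediately following every run of three tails.
after_streak = [flips[i] for i in range(3, len(flips))
                if not any(flips[i - 3:i])]  # previous three were all tails
rate = sum(after_streak) / len(after_streak)
print(round(rate, 2))  # stays near 0.50, despite the "due for heads" intuition
```

The pattern a gambler "sees" after a losing streak is exactly the kind of false connection the independence of the flips rules out.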

In everyday life, apophenia can also cause some people, even scientists, academics and professionals, to have phobic fears of conspiracies and other severe paranoid delusions. Think of John Nash, a Nobel Prize winning mathematician, and the movie A Beautiful Mind, which so dramatically portrayed his paranoid schizophrenia and involuntary hospitalization in 1959. Think of politics in the U.S. today. Are there really lizard people among us? In some cases, as we've seen with Nash, apophenia can be a symptom of severe schizophrenia.

A man looking distressed, surrounded by glowing numbers and mathematical symbols, evoking a sense of confusion and complexity.
Mental anguish & insanity from severe apophenia. Image by Losey using Sora inspired by Beautiful Mind movie.

The Greek roots of the now generally accepted medical term apophenia are:

  • Apo- (ἀπο-): Meaning "away from," "detached," "from," "off," or "apart".
  • Phainein (φαίνειν): Meaning "to show," "to appear," or "to make known".

The word was first coined by Klaus Conrad, an otherwise apparently despicable person whom I am reluctant to cite but feel I must, due to the general acceptance of the word and diagnosis today. Conrad was a German psychiatrist and Nazi who experimented on German soldiers returning from the eastern front during WWII. He coined the term in his 1958 publication on this mental illness. Per Wikipedia:

He defined it as “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness”.[4] [5] He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations.

Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling.

Apophenia can be considered a commonplace effect of brain function. Taken to an extreme, however, it can be a symptom of psychiatric dysfunction, for example, as a symptom in schizophrenia,[7] where a patient sees hostile patterns (for example, a conspiracy to persecute them) in ordinary actions.

Apophenia is also typical of conspiracy theories, where coincidences may be woven together into an apparent plot.[8]

Video by Ralph Losey using SORA AI.

Can AI Be Infected with a Human Illness?

It is possible that generative AI, based as it is on human language, may have the same propensities. That is as yet unknown, so my experiments here were on the lookout for such errors. It could be one of the causes of AI hallucinations.

In information science, mistakenly seeing a connection that is not real, an apophenia, leads to what is called a false positive. This technical term is well known in e-discovery law, where AI is used to search large document collections. When the patterns analyzed suggest a document is relevant, and it is not, that mistake is called a false positive. It is like a human apophenia. The AI can also detect patterns that cause it to predict a document is irrelevant when in fact the document is relevant; that is a false negative. There was a pattern, a connection, that was not seen. That can be a bad thing in e-discovery because it often leads to withholding production of a relevant document, which can in turn lead to court sanctions.

In e-discovery it is well known that AI consistently has far lower false positive and false negative rates than human reviewers, at least in large document reviews. Generative AI may also be more reliable and astute than we are, but maybe not. This is a new field. So we should always be on the lookout for false positives and false negatives in AI pattern recognition. That is one lesson I learned well, and sometimes the hard way, in my ten years of working with predictive coding type AI in e-discovery (2012-2022). In the experiments described in this article we will look for apophenic mistakes.
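For readers new to these terms, here is a minimal sketch of the standard review metrics built from a confusion matrix. The counts below are invented for a hypothetical review of 10,000 documents; they are not from any actual matter or study.

```python
# Sketch of document-review quality metrics from an invented confusion matrix.
def review_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn = true/false positives and negatives from a review."""
    return {
        "precision": tp / (tp + fp),  # of docs predicted relevant, share truly relevant
        "recall":    tp / (tp + fn),  # of truly relevant docs, share actually found
        "fp_rate":   fp / (fp + tn),  # machine "apophenia": relevance seen where none exists
        "fn_rate":   fn / (fn + tp),  # missed patterns: relevant docs predicted irrelevant
    }

# Hypothetical review: 10,000 documents, 1,000 of them truly relevant.
m = review_metrics(tp=850, fp=150, fn=150, tn=8850)
for name, value in m.items():
    print(f"{name}: {value:.2%}")
```

In the apophenia framing, the false positive rate measures how often the system "sees" a pattern that is not there, and the false negative rate how often it misses one that is.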

Video by Ralph Losey using SORA AI.

It is my hope that Advanced AI, properly trained and validated, can provide a counterbalance to human gullibility by rigorously filtering signal from noise. Unlike the human brain, which often leaps to conclusions, AI can be programmed to ground its pattern recognition in evidence, statistical rigor, and cross-validation, if we build it that way and supervise it wisely.

Still, we must beware that AI pattern-recognizing systems may suffer from some of our delusionary tendencies. The best practices discussed here consider both the positive and negative aspects of AI pattern recognition. We must avoid the traps of apophenia, stay true to the scientific method, and verify any new patterns purportedly discovered. Thus all opinions reached here will necessarily be lightly held and subject to further experimentation by others.
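A small, self-contained sketch can show why held-out validation matters. If we search many random "features" for the one best correlated with a random target, we will always find an impressive-looking pattern in the data we searched, a statistical apophenia, and it shrinks on data we held back. All data below is synthetic noise, generated just for this demonstration.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
target = [random.gauss(0, 1) for _ in range(40)]                    # pure noise
features = [[random.gauss(0, 1) for _ in range(40)] for _ in range(200)]

train, test = slice(0, 20), slice(20, 40)
# Cherry-pick the feature that looks best on the first half of the data...
best = max(features, key=lambda f: abs(pearson(f[train], target[train])))
r_train = abs(pearson(best[train], target[train]))
r_test = abs(pearson(best[test], target[test]))
# ...and watch the "pattern" shrink on the held-out half.
print(f"train |r|={r_train:.2f}  held-out |r|={r_test:.2f}")
```

The point is not the specific numbers but the procedure: any pattern selected because it looked good must be re-tested on data that played no role in selecting it.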

Video by Ralph Losey using SORA AI.

From Data to Insight: The Power of New Pattern Recognition

Modern AI models, including neural networks and transformer architectures like GPT-4, excel at uncovering subtle patterns in massive datasets far beyond human capability. This ability transforms raw data into actionable insights, thereby creating new knowledge in many fields, including the following:

Protein Structures. Models like Google DeepMind's AlphaFold have already revolutionized protein structure prediction, achieving high success rates in predicting the 3D shapes of proteins from their amino acid sequences. This ability is crucial for understanding protein function and designing new drugs and medical therapies. The 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper of DeepMind for their work on AlphaFold.

A scientist analyzes molecular structures and data visualizations related to AlphaFold 2 on a futuristic screen, featuring protein models and DNA sequences.
Image by Ralph Losey using his Visual Muse AI tool.

Medical Science. Generative AI models are now being used extensively in medical research, including the analysis and proposal of new molecules with desired properties to discover new drugs and accelerate FDA approval. For example, Insilico Medicine uses its AI platform, Pharma.AI, to develop drug candidates, including ISM001_055 for idiopathic pulmonary fibrosis (IPF). Insilico Medicine lists over 250 publications on its website reporting on its ongoing research, including a recent paper on its IPF discovery: A generative AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis: a randomized phase 2a trial (Nature Medicine, June 03, 2025). This discovery is especially significant because it is the first entirely AI-discovered drug to reach FDA Phase II clinical trials. Below is an infographic from Insilico Medicine showing some of its current work:

Infographic displaying the statistics and achievements of Insilico Medicine, an AI-driven biotech company, detailing development candidates, IND approvals, study phases, and global presence.
Insilico PDF infographic, found 7/23/25 in its 2-pg. overview.

Also see Fronteo, a Japan-based research company, and its Drug Discovery AI Factory.

Materials Science. Google DeepMind's Graph Networks for Materials Exploration ("GNoME") has already identified millions of new stable crystals, significantly expanding our knowledge of materials science. This discovery represents an order-of-magnitude increase in known stable materials. Merchant and Cubuk, Millions of new materials discovered with deep learning (DeepMind, 2023). Also see 10 Top Startups Advancing Machine Learning for Materials Science (6/22/25).

Climate Science and Environmental Monitoring. Generative AI models are beginning to improve climate simulations, leading to more accurate predictions of climate patterns and future changes. For example, Microsoft’s Aurora Forecasting model is trained on Earth science data to go beyond traditional weather forecasting to model the interactions between the atmosphere, land, and oceans. This helps scientists anticipate events like cyclones, air quality shifts, and ocean waves with greater accuracy, allowing communities to prepare for environmental disasters and adapt to climate change. See e.g., Stanley et al, A Foundation Model for the Earth System (Nature, May 2025).

Video by Losey using Sora AI.

Historical and Artistic Revelations

AI is also helping with historical research. A new AI system was recently used to analyze one of the most famous Latin inscriptions: the Res Gestae Divi Augusti. It has always been thought to be simply an autobiographical inscription, its title literally translating from the Latin as "Deeds of the Divine Augustus." But when a specialty generative AI, Aeneas (again based on Google's models), compared this text with a large database of other Latin inscriptions, the famous Res Gestae Divi Augusti was found to share subtle language parallels with Roman legal documents. The analysis uncovered "imperial political discourse," or messaging focused on maintaining imperial power, an insight, a pattern, that had never been seen before. Assael, Sommerschield, Cooley, et al., Contextualizing ancient texts with generative neural networks (Nature, July 2025).

The paper explains that the communicative power of these inscriptions is shaped not only by the written text itself "but also by their physical form and placement," and that "about 1,500 new Latin inscriptions are discovered every year." So the patterns analyzed included not only the words but a number of other complex factors. The authors assert in the Abstract that their work with AI analysis shows:

… how integrating science and humanities can create transformative tools to assist historians and advance our understanding of the past.

Roman citizens reacting to propaganda. A Ralph Losey video.

In art and music, pattern detection has mapped the evolution of artistic styles in tandem with technological change. In a 2025 studio-lab experiment reported by Deruty and Grachten, a generative AI bass model ("BassNet") unexpectedly rendered multiple melodic lines within single harmonic tones, exposing previously unnoticed structures in popular-music bass compositions. The discovery was written up in Deruty and Grachten, Insights on Harmonic Tones from a Generative Music Experiment (arXiv, June 2025). Their paper shows how AI can surface new musical patterns and deepen our understanding of human auditory perception.

As explained in the Abstract:

During a studio-lab experiment involving researchers, music producers, and an AI model for music generating bass-like audio, it was observed that the producers used the model's output to convey two or more pitches with a single harmonic complex tone, which in turn revealed that the model had learned to generate structured and coherent simultaneous melodic lines using monophonic sequences of harmonic complex tones. These findings prompt a reconsideration of the long-standing debate on whether humans can perceive harmonics as distinct pitches and highlight how generative AI can not only enhance musical creativity but also contribute to a deeper understanding of music.

Video by Losey using Sora AI.

Legal Practice: From Precedent to Prediction

The legal profession has benefited from traditional rule-based statistical AI for over a decade, with predictive coding and similar applications. It is now starting to apply the new generative AI models in a variety of new ways. For instance, generative AI can be used to uncover latent themes and trends in judicial decisions that human analysis has overlooked.

This was done in a 2024 study using ChatGPT-4 to perform a thematic analysis on hundreds of theft cases from Czech courts. Drápal, Savelka, Westermann, Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies (arXiv, February 2024).

The goal of the analysis was to discover classes of typical thefts. GPT-4 analyzed fact patterns described in the opinions, and human experts did the same. The AI not only replicated many of the themes identified by the human experts but, as the report states, also uncovered one the humans had missed: a pattern of "theft from gym" incidents. This shows that generative AI can sift through vast case datasets and detect nuanced fact patterns, or criminal modus operandi, that were previously undetected by experts (here, three law students under the supervision of a law professor).

Video by Losey using Sora AI.

Another study in early 2025 applied Anthropic's Claude 3 Opus to analyze thousands of UK court rulings on summary judgment, developing a new functional taxonomy of legal topics for those cases. Sargeant, Izzidien, Steffek, Topic classification of case law using a large language model and a new taxonomy for UK law: AI insights into summary judgment (Springer, February 2025). The AI was prompted to classify each case by topic and identify cross-cutting themes.

The results revealed distinct patterns in how summary judgments are applied across different legal domains. In particular, the AI found trends and shifts over time and across courts, insights that allow a new, improved understanding of when and in what types of cases summary judgment tends to be granted. These patterns were found even though U.K. case law lacks traditional topic labels. This kind of AI-augmented analysis illustrates how generative models can discover hidden trends in case law that practitioners can put to use.
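As a rough illustration of the technique (not the study's actual code or prompts), the classification step can be sketched as a prompt sent to an LLM, with the answer constrained to a fixed taxonomy. The taxonomy, prompt wording, and `call_llm` stub below are all hypothetical; a real system would send the prompt to a chat-completion API such as Claude's.

```python
# Hypothetical taxonomy for illustration only; the study built its own.
TAXONOMY = ["contract", "tort", "property", "employment", "other"]

def build_prompt(judgment_text: str) -> str:
    """Assemble a single-label classification prompt for the LLM."""
    return (
        "Classify the following summary-judgment ruling into exactly one "
        f"topic from this list: {', '.join(TAXONOMY)}.\n"
        "Answer with the topic word only.\n\n"
        f"Ruling: {judgment_text}"
    )

def call_llm(prompt: str) -> str:
    # Stub standing in for a real chat-completion API call; it keyword-
    # matches the ruling text so this sketch runs without any service.
    ruling = prompt.split("Ruling:", 1)[-1].lower()
    for topic in TAXONOMY:
        if topic in ruling:
            return topic
    return "other"

def classify(judgment_text: str) -> str:
    """Classify one ruling, forcing the answer into the taxonomy."""
    answer = call_llm(build_prompt(judgment_text)).strip().lower()
    return answer if answer in TAXONOMY else "other"

print(classify("Claimant alleges breach of contract over unpaid invoices."))
```

Constraining the model's answer to a fixed label set, and mapping anything unexpected to "other," is one simple guard against the model inventing its own categories.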

Surprising abilities of AI helping lawyers. Video by Losey.

Even sitting judges have begun to leverage generative AI to inform their decision-making, revealing new analytical angles in litigation. The notable 2024 concurrence by Judge Kevin Newsom of the Eleventh Circuit admitted to experimenting with ChatGPT to interpret an ambiguous insurance term (whether an in-ground trampoline counted as "landscaping"). Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir. May 28, 2024). Also see Ralph Losey, Breaking News: Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (e-Discovery Team, June 3, 2024) (includes full text of the opinion and Appendix, plus Losey's inserted editorial comments praising Judge Newsom's language).

After querying the LLM, Judge Newsom concluded that "LLMs have promise… it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts." In other words, the generative AI was used as a sort of massive-scale language analyst, tapping into patterns of ordinary usage across language data to shed light on a legal ambiguity. This marked the first known instance of a U.S. appellate judge integrating an LLM's linguistic pattern analysis into a written opinion, signaling that generative models can surface insights on word meaning and context that enrich judicial reasoning.

A digital illustration of a judge in a courtroom setting, seated at a desk with a gavel. The judge, named Judge Newsom, is shown in a professional attire with glasses, and a holographic display behind him showing data and AI-related graphics, conveying a futuristic legal environment.
Image by Ralph Losey using his Visual Muse AI.

My Ask of AI to Find New Patterns

Now for the promised experiment to try to find at least one new connection, one previously unknown, undetected pattern linking different fields of knowledge. I used a combination of existing OpenAI and Google models to help me in this seemingly quixotic quest. To be honest, I did not have much real hope for success, at least not until release of the promised ChatGPT5 and whatever Google calls its counterpart, which I predict will be released the following week (or day). Plus, the whole thing seemed a bit grandiose, even for me, to try to get AI to boldly go where no one has gone before.

Absurd, but still I tried. I won't go through all of the prompt engineering involved, except to say it involved my usual complex, multi-layered, multi-prompt, multimodal-hybrid approach. I tempered my goals by directing ChatGPT4o, when I started the process, to seek new patterns that were useful, not Nobel Prize-winning breakthroughs, just useful new patterns. I directed it to find five such new patterns and gave it some guidance on fields of knowledge to consider, including, of course, law. I asked for five new insights, thinking that with such a big ask I might get one success.

Note, I write these words before I have received the response, but after I have written the above to help guide ChatGPT4o. Who knows, it might achieve some small modicum of success. Still, it feels like a crazy quixotic quest. Incidentally, Miguel de Cervantes's (1547-1616) character Don Quixote (1605) does seem to be a person afflicted with apophenia. Will my AI suffer a similar fate?

Don Quixote in modern world. Video by Losey using Sora.

I designed the experiment specifically with this tension in mind between epiphanies, representing genuine insights and real advances in knowledge, and illusions, which are merely plausible yet misleading patterns. One of my goals was to probe AIโ€™s capacity to distinguish one from the other.

Overview of Prompt Strategy and Time Spent

First, I spent about an hour with ChatGPT4o setting up my request by feeding it a copy of the article as written so far. I also chatted with it about the possibility of AI finding new patterns between different fields of knowledge. Then I simply told ChatGPT4o to do it: find a new interconnecting pattern. ChatGPT4o "thought" (processed) for just a few minutes. Then it generated a response that purported to provide the requested five new patterns. It did so based on its existing training and its review of this article.

As requested, it did not use its browser capabilities to search the web for answers. It just "looked within" and came up with five insights it thought were new. Almost that easy. I lowered my expectations accordingly before reading the output.

That was the easy part. After reading the response, I spent about 14 hours over the next several days doing quality control. The QC work used multiple other AIs, by both OpenAI and Google, to go online and research these claims, evaluate their validity, both good and bad, engage in "deep think," look for errors, especially signs of AI apophenia, and otherwise invite contrarian criticisms. After that, I also asked the other AIs for suggested improvements to the wording of the five claims and to rank them by importance. The various rewordings were not too helpful, but the rankings were, and so were many of the editorial comments.

The 14 hours of QC does not include the approximately six hours of machine time the Gemini and OpenAI models spent on deep think and independent web research to verify or disprove the claims. My 14 hours did include traditional Google searches to double-check all citations, per my "trust but verify" motto. It also included my time to read (I'm pretty fast) or skim most of the key articles the AI research turned up, although frankly some of the cited articles were beyond my knowledge level. I tried to up my game, but it was hard. These other models also generated hundreds of pages of both critical and supportive analysis, which I also had to read. Finally, I probably put another 24 hours into researching and writing this article (it took over a week), so this is one of my larger projects. I did not record the hours it took to design and generate the 26 videos because that was recreational.

Surrealistic depiction of time in robot space by a Ralph Losey video.

Part Two of this article is where I will make the reveal. Was this experiment another comic story of a Don Quixote type (me) and his sidekick Sancho (AI), lost in an apophenic neurosis? Or is it another story altogether? Neither hot nor cold? Stay tuned for Part Two and find out.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI's Ability to Find Real Knowledge Patterns. Part One. Hear the young AIs talk about this article for 25 minutes. They wrote the podcast, not me.

An illustration featuring two anonymous AI podcasters sitting in front of microphones, discussing the theme 'Epiphanies or Illusions? Testing AI's Ability to Find Real Knowledge Patterns.' The background has a digital, tech-inspired design.
Click here to listen to the podcast.

Ralph Losey Copyright 2025 โ€“ All Rights Reserved.