Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’

October 30, 2025

🔹 The Reverberations of Quanta on Law Keep Growing Louder 🔹

Ralph Losey (written 10/25/25)

I had just finished my last article on quantum mechanics—Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago—when something uncanny happened. That piece celebrated two Nobel-winning physicists from Google and the company’s rapid progress in building quantum machines. It ended with a question that still echoes: could the law ever catch up to physics’ new voice?

Two days later, physics answered back.

A person sits at a table typing on a laptop, with a digital projection of a human figure and waveform patterns glowing in blue tones above the computer screen.
Echoes upon echoes—in random chance interference.
All images in article by Ralph Losey using AI tools.

On October 22, 2025, Google announced that its Willow quantum chip had achieved a breakthrough using new software called—believe it or not—Quantum Echoes. The name made me laugh out loud. My article had used the phrase as metaphor throughout; Google was now using it as mathematics.

According to Google, this software achieved what scientists have pursued for decades: a verifiable quantum advantage. In my Quantum Echo article I had described that goal as “the moment when machines perform tasks that classical systems cannot.” No one had yet proven it, at least not in a way others could independently confirm. Google now claimed it had done exactly that—and 13,000 times faster than the world’s top supercomputers.

Artistic representation of a balanced scale symbolizing justice, with the word 'VERIFIED' prominently displayed. The background features two stylized server towers connected by a stream of binary code, illuminated in golden hues.
Verified Quantum Advantage: 13,000 times faster.

🔹 I. Introduction: Reverberating Echoes

Hartmut Neven, Founder and Lead of Google Quantum AI, and Vadim Smelyanskiy, Director of Quantum Pathfinding, opened their blog-post announcement with a statement that sounded less like marketing and more like expert testimony:

Quantum verifiability means the result can be repeated on our quantum computer—or any other of the same caliber—to get the same answer, confirming the result.

Neven & Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google Research Blog, Oct. 22, 2025).

Verification is critical in both science and law; it is what separates speculation from admissible proof.

Still, words on a blog cannot match the sound of the experiment itself. In Google’s companion video, Quantum Echoes: Toward Real-World Applications, Smelyanskiy offered a picture any trial lawyer could understand:

Just like bats use echolocation to discern the structure of a cave or submarines use sonar to detect upcoming obstacles, we engineered a quantum echo within a quantum system that revealed information about how that system functions.

Click here to see Google’s full video.

A presenter standing on a stage discussing 'Verifiable Quantum Advantage' alongside visuals of quantum technology and a play button overlay for a video.
Screen shot (not AI) of the YouTube video showing Vadim Smelyanskiy beginning his remarks.

Think of Willow, as Smelyanskiy suggests, as a kind of quantum sonar. Its team sent a signal into a sea of qubits, nudged one slightly—Smelyanskiy called it a “butterfly effect”—and then ran the entire sequence in reverse, like hitting rewind on reality to listen for the echo that returns. What came back was not static but music: waves reinforcing one another in constructive interference, the quantum equivalent of a choir singing in perfect pitch.

Smelyanskiy’s colleague Nicholas Rubin, Google’s chief quantum chemist, appeared in the video next to show why this matters beyond the lab:

Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.

That experiment was not a metaphor; it was a cross-examination of nature that returned a consistent answer. Quantum Echoes predicted molecular geometry, and classical instruments confirmed it. That is what “verifiable” means.

Neven and Smelyanskiy’s Our Quantum Echoes article added another analogy to anchor the imagery in everyday experience:

Imagine you’re trying to find a lost ship at the bottom of the ocean. Sonar might give you a blurry shape and tell you, ‘There’s a shipwreck down there.’ But what if you could not only find the ship but also read the nameplate on its hull?

That is the clarity Quantum Echoes provides—a new instrument able to read nature’s nameplate instead of guessing at its outline. The echo is now clear enough to read.

A glowing blue quantum chip is suspended underwater above a sunken shipwreck, with the word 'ECHO' visible on the ship's hull.
Willow quantum chip and Echoes software reveal new information in previously unheard-of detail.

That image—sharper echoes, clearer understanding—captures both the scientific leap and the theme that has reverberated through this series: building bridges between quantum physics and the law. My earlier article was titled Quantum Echo; Google’s is Quantum Echoes. When I wrote mine, I had no idea Neven’s team was preparing a major paper for Nature, Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date). More than a hundred Google scientists signed it. I checked, and quantum ergodicity has to do with chaos, one of my favorite topics.

The study confirms what Smelyanskiy made visible with his sonar metaphor: Quantum Echoes measures how waves of information collide and reinforce each other, creating a signal so distinct that another quantum system can verify it.

So here we are—lawyers and scientists listening to the same echo. Google calls it the first “verifiable quantum advantage.” I call it the moment when physics cross-examined reality and got a consistent answer.

A gavel positioned on a wooden surface in a courtroom, with an abstract representation of quantum wave patterns emanating from it, symbolizing the intersection of law and quantum mechanics.
Quantum computing will soon move from the lab to legal practice. Will you be ready?

🔹 II. What Google’s Quantum Echoes Actually Did

Understanding what Google pulled off takes a bit of translation—think of it as turning expert testimony into plain English.

In the Quantum Echoes experiment, Smelyanskiy’s team did something that sounds like science fiction but is now laboratory fact. They sent a carefully designed signal into their 105-qubit Willow chip, nudged one qubit ever so slightly—a quantum “butterfly effect”—and then ran the entire operation in reverse, as if the universe had a rewind button. The question was simple: would the system return to its starting state, or would the disturbance scramble the information beyond recognition? What came back was an echo, faint at first and then unmistakable, revealing how information spreads and recombines inside a quantum world.

As the signal spread, the qubits became increasingly entangled—linked so that the state of each depended on all the others. In describing this process, Hartmut Neven explained that out-of-time-order correlators (OTOCs) “measure how quickly information travels in a highly entangled system.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm, supra; also see Dan Garisto, Google Measures ‘Quantum Echoes’ on Willow Quantum Computer Chip (Scientific American, Oct. 22, 2025). That spreading web of entanglement is what allowed the butterfly’s tiny disturbance to ripple across the lattice and, when the sequence was reversed, to produce a measurable echo.

An abstract visualization of a quantum system, depicting a grid of interconnected points with a central glowing source, representing quantum entanglement and interaction patterns.
Visualization of quantum qubit world created by lattice of Willow chips.

Physicists call this kind of rewind test an out-of-time-order correlator, or OTOC—a protocol for measuring how quickly information becomes scrambled. The Scientific American article described it with a metaphor lawyers may appreciate: like twisting and untwisting a Rubik’s Cube, adding one extra twist in the middle, then reversing the sequence to see whether that single move leaves a lasting mark. The team at Google took this one step further, repeating the scramble-and-unscramble sequence twice—a “double OTOC” that magnified the signal until the echo became measurable.
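The scramble–nudge–unscramble logic of an echo test can be sketched classically on a toy system. The sketch below is a minimal simulation and not Google’s algorithm: it assumes a small four-qubit system, uses a random unitary in place of Willow’s engineered circuit, and measures a Loschmidt-echo-style overlap rather than the full double-OTOC protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4                      # toy system; Willow uses 105 qubits
dim = 2 ** n_qubits

# Random unitary U stands in for the "scrambling" forward evolution.
U = np.linalg.qr(rng.normal(size=(dim, dim)) +
                 1j * rng.normal(size=(dim, dim)))[0]

# Butterfly perturbation: a Pauli-X flip of the first qubit.
X = np.array([[0, 1], [1, 0]])
B = np.kron(X, np.eye(dim // 2))

# Initial basis state |00...0>.
psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0

# Echo protocol: evolve forward, nudge one qubit, evolve backward.
echo_state = U.conj().T @ (B @ (U @ psi0))

# Echo signal: overlap of the returned state with where we started.
echo = abs(psi0.conj() @ echo_state) ** 2
print(f"echo signal: {echo:.4f}")   # 1.0 would mean a perfect, undisturbed echo
```

In a scrambling system the butterfly flip spreads everywhere, so the echo signal is far below 1; tracking how that signal decays is what the OTOC quantifies.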

Instead of chaos, they found harmony. The echo wasn’t noise—it was a pattern of waves adding together in what Nature called constructive interference at the edge of quantum ergodicity. As Smelyanskiy explained in the YouTube video:

What makes this echo special is that the waves don’t cancel each other—they add up. This constructive interference amplifies the signal and lets us measure what was previously unobservable.

In plain terms, the interference created a fingerprint unique to the quantum system itself. That fingerprint could be reproduced by any comparable quantum device, making it not just spectacular but verifiable. Smelyanskiy summarized it as a result that another machine—or even nature itself—can repeat and confirm.

A visual representation of wave interference, showing a vibrant blend of red and blue waves converging at a center point, suggesting quantum mechanics and constructive interference.
Visualization of quantum wave interactions creating a unique fingerprint resonance.

The numbers tell the rest of the story. According to the Nature paper, reproducing the same signal on the Frontier supercomputer would take about three years. Willow did it in just over two hours—roughly 13,000 times faster. Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date), at p. 829, “Towards practical quantum advantage.”
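The two reported figures are mutually consistent, as a back-of-the-envelope check shows (assuming “about three years” means roughly three calendar years of continuous Frontier runtime):

```python
# Rough consistency check of the reported 13,000x speedup.
frontier_hours = 3 * 365 * 24        # ~3 years of runtime, in hours
speedup = 13_000
willow_hours = frontier_hours / speedup
print(f"{willow_hours:.2f} hours")   # ~2.02 hours, i.e. "just over two hours"
```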

That difference isn’t marketing; it marks the first clear-cut case where a quantum processor performed a scientifically useful, checkable computation that classical hardware could not.

Skeptics, of course, weighed in. Peer reviewers quoted in Scientific American called the work “truly impressive,” yet warned that earlier claims of quantum advantage have been surpassed as classical algorithms improved. But no one disputed that this particular experiment pushed the field into new territory: a regime too complex for existing supercomputers to simulate, yet still open to verification by a second quantum device. In court, that would be called corroboration.

Nicholas Rubin, Google’s chief quantum chemist, explained how this new clarity connects to chemistry and, ultimately, to everyday life:

Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.

Google Quantum AI YouTube video, contained within Quantum Echoes: Toward Real-World Applications (Oct. 22, 2025).

That experiment turned the echo from a metaphor into a molecular ruler—an instrument capable of reading atomic geometry the way sonar reads the ocean floor. It also demonstrated what Google calls Hamiltonian learning: using echoes to infer the hidden parameters governing a physical system. The same principle could one day help map new materials, optimize energy storage, or guide drug discovery. In other words, the echo isn’t just proof; it’s a probe.

The implications are enormous. When a quantum computer can measure and verify its own behavior, reproducibility ceases to be theoretical—it becomes an evidentiary act. The machine generates data that another independent system can confirm. In the language of the courtroom, that is self-authenticating evidence.

As Rubin put it,

Each of these demonstrations brings us closer to quantum computers that can do useful things in the real world—model molecules, design materials, even help us understand ourselves.

Google Quantum AI YouTube video, contained within Quantum Echoes: Toward Real-World Applications (Oct. 22, 2025).

The Quantum Echoes algorithm has given science a way to hear reality replay itself—and to confirm that the echo is real. For law, it foreshadows a future in which verification itself becomes measurable. The next section explores what that means when “verifiable advantage” crosses from the lab bench into the rules of evidence.

A wooden gavel positioned on a table, with glowing sound wave patterns emanating from it, next to a futuristic quantum computer in a laboratory setting.
It may soon be possible to verify and admit evidence originating in quantum computers like Willow.

🔹 III. Verifiable Quantum Advantage — From Lab Standard to Legal Standard

If physics can now verify its own results, law should pay attention—because verification is our stock-in-trade. The Quantum Echoes experiment didn’t just push science forward; it redefined what counts as proof. Google’s researchers call it a “verifiable quantum advantage.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm Is a Big Step Toward Real-World Applications for Quantum Computing, supra. Lawyers might call it a new evidentiary standard: the first machine-generated result that can be independently reproduced by another machine.

A. Verification and Admissibility

Verification is critical in both science and law. In physics, reproducibility determines whether a result enters the canon or the recycling bin; in court, it determines whether evidence is admitted or denied. Fed. R. Evid. 901(b)(9) recognizes “evidence describing a process or system and showing that it produces an accurate result.” So does Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), which instructs judges to test scientific evidence for methodological reliability—testing, peer review, error rate, and general acceptance.

By those standards, Google’s Quantum Echoes algorithm might pass with flying colors. The method was tested on real hardware, published in Nature, evaluated by peer reviewers, its signal-to-noise ratio quantified, and its core result confirmed on independent quantum devices. That should meet the Daubert reliability standard.

B. When Proof Is Probabilistic

Yet quantum proof carries a twist no court has faced before: every result is probabilistic. Quantum systems never produce identical outcomes, only statistically consistent ones. That might sound alien to lawyers, but it isn’t. Any lawyer who works with AI, including predictive coding (which dates back to 2012), is already familiar with it. Every expert opinion, every DNA mixture, every AI prediction arrives with confidence intervals, not certainties.

The rules of evidence already tolerate some uncertainty—they just insist on measuring and evaluating it. Is the uncertainty acceptable under the circumstances? As I observed in my last article, the law requires reasonable efforts, “perfection is not required. … and reasonable efforts can be proven by numerics and testimony.” Ralph Losey, Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago (Oct. 21, 2025).

Like a quantum measurement, a jury verdict or mediation turns uncertainty into a final determination. Debate, probability, and persuasion collapse into a single truth accepted by that group, in that moment. Another jury could hear essentially the same evidence and reach a different result. Same with another settlement conference. Perhaps, someday, quantum computers will calculate the billions of tiny variables within each case—and within each unexpectedly entangled group of jurors or mediation participants. That might finally make jury selection, or even settlement, a measurable science.

A courtroom scene featuring a diverse jury seated in the foreground, listening intently as two lawyers engage in a debate. The judge is positioned behind them, and the setting is illuminated by a network of light patterns, symbolizing connections and insights related to the intersection of law and quantum mechanics.
No two legal situations or decisions are ever exactly the same. There are trillions of small variables even in the same case.

C. Replication Hearings in the Age of Probability

Google’s scientists describe their achievement as “quantum verifiable”—a term meaning any comparable machine can reproduce the same statistical fingerprint. That concept sounds like self-authentication. Fed. R. Evid. 902 lists categories of documents that require no extrinsic proof of authenticity. See especially Rule 902(13), “Certified Records Generated by an Electronic Process or System,” and Rule 902(14), “Certified Data Copied from an Electronic Device, Storage Medium, or File.”

Classical verification loves hashes; quantum verification prefers histograms—charts showing how results cluster rather than match exactly. The key question is not “Are these outputs identical?” but “Are these distributions consistent within an accepted tolerance given the device’s error model?”

Counsel who grew up authenticating log files and forensic images will now add three exhibits: (1) run counts and confidence intervals, (2) calibration logs and drift data, and (3) the variance policy set before the experiment. Discovery protocols should reflect this. Specify the acceptable bandwidth of similarity in the protocol order, preserve device and environment logs with the results, and disclose the run plan. In e-discovery terms, we are back to reasonable efforts with transparent quality metrics, not mythical perfection.
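A distributional consistency check of this kind is straightforward to implement. The sketch below is a hypothetical illustration, not any court-adopted standard: it assumes two devices each report bitstring counts, and compares their empirical distributions by total variation distance against a tolerance declared in advance.

```python
from collections import Counter

def total_variation(counts_a, counts_b):
    """Total variation distance between two empirical distributions."""
    n_a, n_b = sum(counts_a.values()), sum(counts_b.values())
    outcomes = set(counts_a) | set(counts_b)
    return 0.5 * sum(abs(counts_a.get(o, 0) / n_a - counts_b.get(o, 0) / n_b)
                     for o in outcomes)

# Hypothetical measurement records from two devices (bitstring -> count).
run_a = Counter({"00": 480, "01": 20, "10": 30, "11": 470})
run_b = Counter({"00": 465, "01": 25, "10": 35, "11": 475})

TOLERANCE = 0.05   # the variance band declared in the protocol order, pre-run
distance = total_variation(run_a, run_b)
verdict = "consistent" if distance <= TOLERANCE else "inconsistent"
print(f"TVD = {distance:.3f} -> {verdict}")
```

The design choice mirrors the evidentiary point: the tolerance band is fixed before the experiment, so the later comparison is a mechanical test rather than an after-the-fact argument.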

D. Two Quick Hypotheticals

Pharma Patent. A lab uses Quantum-Echoes-assisted NMR analysis to infer long-range spin couplings in a novel compound. A rival lab’s rerun differs by a small margin. The court admits the data after a statistical-consistency hearing showing both labs’ distributions fall within the pre-declared variance band, with calibration drift documented and immaterial.

Forensics. A government forensic agency (for example, the FBI or Department of Energy) presents evidence generated by quantum sensors—ultra-sensitive devices that use quantum phenomena such as entanglement and superposition to detect physical changes with extreme precision. In this case, the sensors were deployed near the site of an explosion, where they recorded subtle signals over time: magnetic fluctuations, thermal shifts, and shock-wave signatures. From that data, the agency reconstructed a quantum-sensor timeline—a detailed sequence of events showing when and how the blast occurred.

The defense challenges the evidence, arguing that such quantum measurements are “non-deterministic.” The judge orders disclosure of the device’s error model, calibration logs, and replication plan. After testimony shows that the agency reran the quantum circuit a sufficient number of times, with stable variance and documented environmental controls, the timeline is admitted into evidence. Weight goes to the jury.

An artistic representation of a ruler overlaid on molecular structures, symbolizing the connection between quantum mechanics and measurements in science. The background features vibrant colors and wavy patterns, suggesting energy and movement.
Measuring quantum outputs and determining replication reliability.

These short hypotheticals act as “replication hearings” in miniature—demonstrating how statistical tolerance can replace rigid duplication as the new standard of reliability.

🔹 IV. Near-Term Implications — Cryptography, AI, and Compliance

Every new instrument of verification casts a shadow. The same physics that lets us confirm a result can also expose a secret. Quantum Echoes proved that information can be traced, replayed, and verified.  But once information can be replayed, it can also be reversed. Verification and decryption are two sides of the same quantum coin.

A. Defining Q-Day

That duality brings us to Q-Day—the moment when a sufficiently large-scale quantum processor can factor prime numbers fast enough to defeat RSA or ECC encryption. When that day arrives, the emails, contracts, and trade secrets protected by today’s algorithms could be decrypted in minutes.

Adversaries are already stealing and stockpiling encrypted data for future decryption when that moment arrives. Cybersecurity experts call this the harvest-now, decrypt-later threat. Those charged with protecting confidential data must act accordingly. Prepare your organization for Q-Day: 4 steps toward crypto-agility (IBM, 10/24/25).

The RSA and elliptic-curve systems that secure global finance, communications, and justice could fall in hours once large-scale quantum processors become available to attackers. For this reason, NIST released its first suite of post-quantum cryptographic (PQC) standards in August 2024. The NSA’s CNSA 2.0 framework, issued in September 2022, now mandates federal migration. Also see Dan Kent, “Quantum-Safe Cryptography: The Time to Start Is Now” (GovTech, Apr. 30, 2025); Amit Katwala, “The Quantum Apocalypse Is Coming. Be Very Afraid” (WIRED, Mar. 24, 2025); and Roger Grimes’s book, Cryptography Apocalypse (Wiley, 2019).

Every general counsel should now ask at least three questions:

  1. Where do we still rely on classical encryption, and how long must those secrets remain secure?
  2. Which vendors can attest to their post-quantum migration timelines?
  3. How will we prove compliance when regulators—or clients—begin auditing “quantum-safe” claims?

See the various NIST and NSA guides on quantum preparation, including The Commercial National Security Algorithm Suite page. Also see Gartner Research, Preparing for the Post-Quantum World: How CISOs Should Plan Now (2024) (subscription required); and Marian, Gartner just put a date on the quantum threat – and it’s sooner than many think (PostQuantum, Oct. 2024).

Reasonable foresight now means inventory, pilot, and policy—before the echoes reach the vault.
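The inventory step can begin with something as simple as a triage table. The sketch below is a hypothetical illustration of that first step, not a compliance tool; the system names are invented, and the risk labels follow the public post-quantum analyses cited above (Shor’s algorithm breaks RSA/ECC; Grover’s weakens symmetric keys; ML-KEM is the NIST standard in FIPS 203).

```python
# Known quantum exposure of common algorithms (per public PQC guidance).
QUANTUM_RISK = {
    "RSA-2048":   "broken by Shor's algorithm -> migrate",
    "ECDSA-P256": "broken by Shor's algorithm -> migrate",
    "AES-128":    "weakened by Grover's algorithm -> prefer AES-256",
    "AES-256":    "considered quantum-safe",
    "ML-KEM-768": "NIST post-quantum standard (FIPS 203)",
}

def triage(inventory):
    """Annotate each (system, algorithm) pair with its quantum-risk note."""
    return [(system, alg, QUANTUM_RISK.get(alg, "unknown -> investigate"))
            for system, alg in inventory]

# Hypothetical systems holding long-lived confidential data.
assets = [("email-archive", "RSA-2048"),
          ("client-vault", "AES-256"),
          ("esign-service", "ECDSA-P256")]

for system, alg, note in triage(assets):
    print(f"{system:15s} {alg:12s} {note}")
```

Even this crude a table answers the general counsel’s first question: which systems still rely on quantum-vulnerable encryption, and for how long must their secrets hold.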

An abstract representation of a digital conflict between Bitcoin and Ethereum, featuring glowing safes with their respective logos, amidst an environment illuminated by beams of light, symbolizing technological advancements and rivalry in cryptocurrency.
When the Echoes hit the vault. Most encrypted data is at risk from future quantum computer operations.

B. Acceleration and Realism

Google’s Quantum Echoes work does not mean Q-Day is tomorrow, but it makes tomorrow easier to imagine.  Each verified algorithm shortens the speculative distance between research and real-world capability.  If Willow’s 105 qubits can already perform verifiable, complex interference tasks, then a machine with a few thousand logical qubits could, in principle, execute Shor’s algorithm to factor the primes that underpin encryption.  That scale is not yet achieved, but the line of progress is clear and measurable.  Verification, once a scientific luxury, has become a security warning light.  Every new echo that confirms truth also whispers risk.
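Why a few thousand logical qubits would matter becomes clearer from the structure of Shor’s algorithm itself. The sketch below is a classical toy, not the quantum algorithm: it finds the period of a^x mod N by brute force, which is exactly the step a quantum computer would do exponentially faster, and then applies the elementary number theory that turns the period into factors.

```python
from math import gcd

def shor_classical_core(N, a):
    """Factor N via the period of a^x mod N (the quantum speedup in Shor's
    algorithm lies only in finding the period; the rest is classical)."""
    # Brute-force period finding -- the step a quantum computer does fast.
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    if r % 2 != 0:
        return None                      # odd period: retry with another a
    x = pow(a, r // 2, N)
    return sorted(f for f in (gcd(x - 1, N), gcd(x + 1, N)) if 1 < f < N)

print(shor_classical_core(15, 7))   # [3, 5]: 7 has period 4 mod 15
```

For a 2,048-bit RSA modulus the brute-force loop above is hopeless, which is precisely why RSA is safe today and why verifiable quantum period-finding at scale would end that safety.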

C. Evidence and Discovery Operations

Quantum-derived data will enter litigation well before Q-Day, and well before perfect verification of quantum-generated data is possible. See The Quantum Age and Its Impacts on the Civil Justice System (RAND Institute for Civil Justice, Apr. 29, 2025), Chapter 3, “Courts and Databases, Digital Evidence, and Digital Signatures,” p. 23, and “Lawyers and Encryption-Protected Client Information,” p. 17. These sections of the RAND report outline how quantum technologies will challenge evidentiary authentication, database integrity, and client confidentiality.

For background on the law that will likely be argued, see Hyles v. New York City, No. 10 Civ. 3119 (S.D.N.Y. Aug. 1, 2016) (Judge Andrew J. Peck (ret.), a leading authority on AI and e-discovery, holding that “the standard is not perfection, … but whether the search results are reasonable and proportional”). Also see the EDRM Metrics Model and Privacy & Security Risk Reduction Model; The Sedona Principles, Third Edition: Best Practices for Electronic Document Production (2017); and The Sedona Conference Commentary on ESI Evidence & Admissibility, Second Edition (2021).

Looking ahead, today’s hash-based verification with classical computers will give way to quantum-based distributional verification, where productions will not only include datasets but also the variance reports, calibration logs, and environmental conditions that generated them. Discovery orders will begin specifying acceptable tolerance bands and require parties to preserve the hardware and environmental context of collection. This marks the next evolution of the reasonable-efforts doctrine that guided predictive coding: transparency and metrics, not mythical perfection.

D. Regulatory Issues

Industry consolidation—including Google bringing the Atlantic Quantum team into Google Quantum AI—will invite antitrust and export-control scrutiny. We’re scaling quantum computing even faster with Atlantic Quantum (Google Keyword blog, 10/02/25).

Also, expect sector regulators to weave post-quantum cryptography (PQC) and quantum-evidence expectations into existing rules and guidance. As shown above, CISA, NIST, and NSA already urge organizations to inventory cryptography and plan PQC migration, a clear signal for boards and auditors.

Healthcare and life science companies in particular should track FDA’s evolving cybersecurity guidance for medical devices and HHS/OCR’s HIPAA Security Rule update effort, both of which are tightening expectations around crypto agility and lifecycle security. Cybersecurity in Medical Devices (FDA, 6/26/25); HIPAA Security Rule Notice of Proposed Rulemaking to Strengthen Cybersecurity for Electronic Protected Health Information (HHS, Dec. 2024).

Boards will soon ask the decisive question: Where is our long-term sensitive data, and can we prove it is quantum-safe? Lawyers will need to stay current on both existing and proposed regulations—and on how they are actually enforced. That is a significant challenge in the United States, where regulatory authority is fragmented and enforcement can be a moving target, especially as administrations change.

🔹 V. Philosophy & the Multiverse — Echoes Across Consciousness and Justice

Verification may give us confidence, but it does not give us true understanding. The Quantum Echoes experiment settled a question of physics, yet opened one of philosophy: what exactly is being verified: the system, the observer, or the act of observation itself? Every measurement, whether by physicist or judge, collapses a range of possibilities into a single, declared reality. The rest remain unrealized but not necessarily untrue.

A fantastical scene featuring a person standing in a surreal corridor filled with various doorways, each revealing different landscapes or cosmic visuals. Bright blue energy patterns connect the spaces, symbolizing the intertwining of time and reality.
Quantum entangled multiverse stretching forever with each moment seeming unique.

In Quantum Leap (January 9, 2025), I speculated, tongue partly in cheek, that Google’s quantum chip might be whispering to its parallel selves. Google’s early breakthroughs hinted at a multiverse, not just of matter but of meaning. As Niels Bohr warned, “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.” Atomic Physics and Human Knowledge (Wiley, 1958); see also Werner Heisenberg, Physics and Beyond (Harper & Row, 1971), p. 206.

In Quantum Echo I extended quantum multiverse ideas to law itself—where reproducibility, not certainty, defines truth. Our legal system, like quantum mechanics, collapses possibilities into a single outcome. Evidence is presented, probabilities weighed, and then, bang, the gavel falls, the wave function collapses, and one narrative becomes binding precedent. The other outcomes are filed in the cosmic appellate division.

Google’s Quantum Echoes now closes the loop: verification has become a measurable force, a resonance between consciousness and method. The many worlds seem to be bleeding together. Each observation is both experiment and judgment, the mind becoming part of the data it seeks to confirm.

This brings us to a quiet question: if observation changes reality, what does that say about responsibility? The judge or jurors’ observation becomes the law’s reality. Another judge or jury, another day, another echo—and a different world emerges.  Perhaps free will is simply the name we give to that unpredictable variable that even physics cannot model: the human choice of when, and how, to observe.

Same case, but a different entanglement of jurors, lawyers, and judge. Different results when measured with a verdict; some similar, and a few very different. Can the results be predicted?

Constructive interference may happen in conscience, too.  When reason and empathy reinforce each other, justice amplifies.  When prejudice or haste intervene, the pattern distorts into destructive interference.  A just society may be one where these moral waves align more often than they cancel—where the collective echo grows clearer with each case, each conversation, each course correction.

And if a multiverse does exist—if every choice spins off its own branch of law and fact—then our task remains the same: to verify truth within the world we inhabit. That is the discipline of both science and justice: to make this reality coherent before chasing another. We cannot hear all echoes, but we can listen closely to the one that answers back.

So perhaps consciousness itself is a courtroom of possibilities, and verification the gavel that selects among them.  Our measurements, our rulings, our acts of understanding—they all leave an interference pattern behind. The best we can do is make that pattern intelligible, compassionate, and, when possible, reproducible.  Law and physics alike remind us that truth is not perfection; it is resonance. When understanding and humility meet, the universe briefly agrees.

An artistic representation of a tree with numerous branches, each displaying a globe depicting Earth, symbolizing the concept of a multiverse with various parallel worlds.
Multiverse where different worlds split up and continue to exist, at least for a while, in parallel worlds.

🔹 VI. Conclusion

If there really are countless parallel universes, each branching from every quantum decision, then there may be trillions of versions of us walking through the fog of possibility. Some would differ by almost nothing—the same morning coffee, the same tie, the same docket call. But a few steps farther along the probability curve, the differences would grow strange. In one world I may have taken that other job offer; in another, argued a case that changed the law; and at some far edge of the bell curve, perhaps I’m lecturing on evidence to a class of AIs who regard me as a historical curiosity.

Can beings in the multiverse somehow communicate with each other? Is that what we sense as intuition—or déjà vu? Dreams, visions, whispers from adjacent worlds? Do the parallel lines sometimes cross? And since everything is quantum, how far does entanglement extend?

An artistic depiction of a person standing in a surreal environment filled with glowing pathways and mirrors, each reflecting a different version of themselves, symbolizing themes of quantum mechanics and parallel universes.
Are we living in many parallel worlds at once? What is the impact of quantum entanglement?

The future of law is being written not only in statutes or code, but in algorithms that can verify their own truth. Quantum physics has given us new metaphors—and perhaps new standards of evidence—for an age when certainty itself is probabilistic. The rule of law has always depended on verification; the difference now is that verification is becoming a property of nature itself, a measurable form of coherence between mind and matter. The physics lab and the courtroom are learning the same lesson: reality is persuasive only when it can be reproduced.

Yet even in a world of self-authenticating machines, truth still requires a listener. The universe may verify itself, but it cannot explain itself. That remains our role—to interpret the echoes, to decide which frequencies count as proof, and to do so with both rigor and mercy. So as the echoes grow louder, we keep listening.  And if you hear a low hum in the evidence room, don’t panic—it’s probably just the universe verifying itself.  But check the chain of custody anyway.

An abstract painting depicting diverse individuals interconnected by vibrant lines, symbolizing themes of recognition and connection. The use of blue tones creates a surreal atmosphere, illustrating a dynamic interplay between figures and their environment.
Niels Bohr: If you’re not shocked by quantum theory you have not understood it.  

🔹 Subscribe and Learn More

If these ideas intrigue you, follow the continuing conversation at e-DiscoveryTeam.com, where you can subscribe for email notices of future blogs, courses, and events. I’m now putting the finishing touches on a new online course, Quantum Law: From Entanglement to Evidence. It will expand on these themes with further discussion and speculation, translating the science of uncertainty into practical tools, templates, and guides for lawyers, judges, and technologists.

After all, the future of law will not belong to those who fear new tools, but to those who understand the evidence their universe produces.

Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.

© 2025 Ralph C. Losey. All rights reserved.



Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago

October 24, 2025

Meanwhile, Even Bigger Breakthroughs by Google Continue

By Ralph Losey, October 21, 2025.

The Nobel Prize in Physics was just awarded to quantum physics pioneers John Clarke, Michel H. Devoret, and John M. Martinis for discoveries they made at UC Berkeley in the 1980s. They proved that quantum tunneling, where subatomic particles can break through seemingly impenetrable barriers, can also occur in the macroscopic world of electrical circuits. So yes, Schrödinger’s cat really could die.

A digital illustration featuring three scientists with varying facial expressions, posed in a futuristic setting, symbolizing breakthroughs in quantum computing. In the foreground, there is an artistic depiction of a cat with a skull overlay, creating a surreal contrast.
Quantum Physics Pioneers take home the Nobel Prize: John Clarke, Michel H. Devoret, and John M. Martinis. All images in this article are by Ralph Losey using AI image generation tools.

Their experiments showed that entire circuits can behave as single quantum objects, bridging the gap between theory and engineering. That breakthrough insight paved the way for construction of quantum computers, including the latest by Google.

Both Devoret and Martinis were recruited years ago by Google to help design its quantum processors. Although John Martinis (right, in the image above) recently departed to start his own company, Qolab, Michel Devoret (center) remains at Google Quantum AI as the Chief Scientist of Quantum Hardware. Last year, two other Google scientists, John Jumper and Demis Hassabis, shared a Nobel prize in chemistry for their groundbreaking work in AI.

Google is clearly on a roll here. As Google CEO Sundar Pichai joked in his congratulatory post on LinkedIn: “Hope Demis Hassabis and John Jumper are teaching you the secret handshake.”

A human hand shakes a holographic robotic hand in front of a quantum computer, with a Google logo in the background.
The secret handshake to Google’s Nobel Prizes is the combination of AI and Quantum.

🔹 Willow Breaks Through Its Own Barriers

Less than a year ago, Google’s new quantum chip, Willow, tunneled through its own barriers, performing in five minutes a calculation that would have taken ten septillion years (10²⁴) on the fastest classical supercomputers. That’s far longer than anyone’s estimate for the age of our universe—a good definition of mind-boggling.

This result led Hartmut Neven, director of Google’s Quantum Artificial Intelligence Lab, to suggest it offers strong evidence for the many-worlds or multiverse interpretation of quantum mechanics—the idea that computation may occur across near-infinite parallel universes. Neven and a number of leading researchers subscribe to this view.

I explored that seemingly crazy hypothesis in Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan 9, 2025). Oddly enough, it became my most-read article of all time—thank you, readers.

Today’s piece updates that story. The Nobel Prize recognition is icing on the cake, but progress has not slowed. Quantum computers—and the law—remain one of the most exciting frontiers in legal-tech. So much so that I’m developing a short online course on quantum computing and law, with more courses on prompt engineering for legal professionals coming soon. Subscribe to e-DiscoveryTeam.com to be notified when they launch.

The work of this year’s Nobel laureates—Clarke, Devoret, and Martinis—was done forty years ago, so delay in recognition is hardly unusual in this field. Perhaps someday Neven and other many-worlds interpreters of quantum physics will receive their own Nobel Prize for demonstrating multiverse-scale applications. In my view, far more evidence than speed alone will be required.

After all, it defies common sense to imagine, as the multiverse hypothesis suggests, that every quantum event splits reality, spawning a near-infinite array of universes. For example, one where Schrödinger’s cat is alive and another, slightly different universe where it is dead. It makes Einstein’s “spooky action at a distance” seem tame by comparison.

An illustrated depiction of Schrödinger's cat concept, featuring a cartoon cat and a skeleton inside a wooden box, symbolizing the quantum mechanics thought experiment.
Spooky questions: Why are ‘you’ conscious in this particular universe? Are you dead in another?

In the meantime—whatever the true mechanism—quantum computers and AI are already producing tangible social and legal consequences in cryptography, cybercrime, and evidentiary law. See, The Quantum Age and Its Impacts on the Civil Justice System (Rand, April 29, 2025); Quantum-Readiness: Migration to Post-Quantum Cryptography (NIST, NSA, August 2023); Quantum Computing Explained (NIST 8/22/2025); but see, Keith Martin, Is a quantum-cryptography apocalypse imminent? (The Conversation, 6/2/25) (“Expert opinion is highly divided on when we can expect serious quantum computing to emerge,” with estimates ranging from imminent to 20 years or more.)

Whether you believe in the multiverse or not, the practical implications for law and technology are already arriving.

Abstract illustration representing the multiverse theory with multiple cosmic spheres and the text 'MULTIVERSE THEORY' and 'INFINITE PARALLEL UNIVERSES'.
Might this theory someday seem like common sense? Or will most Universes discard it as another ‘spooky’ idea of experimental scientists?

🔹 Atlantic Quantum Joins Google Quantum AI

On October 2, 2025, Hartmut Neven, Founder and Lead, Google Quantum AI, announced in a short post titled “We’re scaling quantum computing even faster with Atlantic Quantum” that Google had just acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware. The announcement, written in Neven’s signature understated style, framed the deal as a practical step on Google’s long road toward “a large error-corrected quantum computer and real-world applications.”

Neven explained that Atlantic Quantum’s modular chip stack, which integrates qubits and superconducting control electronics within the cryogenic stage, will allow Google to “more effectively scale our superconducting qubit hardware.” That phrase may sound routine to non-engineers, but it represents a significant leap in design philosophy: merging computation and control at the cold stage reduces signal loss, simplifies architecture, and makes modular scaling—the key to fault-tolerant machines—realistically achievable. This is another great acquisition by Google.

Independent reporting quickly confirmed the deal’s importance. In Atlantic Quantum Joins Google Quantum AI, The Quantum Insider’s Matt Swayne summarized the deal succinctly:

• Google Quantum AI has acquired Atlantic Quantum, an MIT-founded startup developing superconducting quantum hardware, to accelerate progress toward error-corrected quantum computers. . . .
• The deal underscores a broader industry trend of major technology companies absorbing research-intensive startups to advance quantum computing, a field still years from large-scale commercial deployment.

The article noted that the integration of Atlantic Quantum’s modular chip-stack technology into Google’s program was aimed at one of quantum computing’s toughest engineering hurdles: scaling systems to become practical and fault-tolerant.

The MIT-born startup, founded in 2021 by a group of physicists determined to push superconducting design beyond incremental improvements, focused on embedding control electronics directly within the quantum processor. That approach reduces noise, simplifies wiring, and makes modular expansion far more realistic. For another take on the Atlantic story, see Atlantic Quantum and Google Quantum AI are “Joining Up” (Quantum Computing Report, 10/02/25).

These articles place the transaction within a broader wave of global investment in quantum technologies. Large-scale commercial deployment may still be years away, but the industry has already entered a phase of consolidation. Research-heavy startups are increasingly being absorbed by major technology companies, a predictable evolution in a field defined by extraordinary capital demands and complex technical challenges.

For Google, the acquisition is less about headlines and more about infrastructure control, owning every layer of the superconducting stack from design to fabrication. For the industry, it signals that the next phase of quantum development will likely follow the same arc as classical computing: early-stage innovation absorbed by large, well-capitalized firms that can bear the cost of scaling.

For lawyers and regulators, that pattern has familiar consequences: intellectual-property concentration, antitrust scrutiny, export-control compliance, and the evidentiary standards that will eventually govern how outputs from such corporate-owned quantum systems are regulated and presented in court.

An illustration depicting the concept of innovation in the technology industry, contrasting 'Early-Stage Innovation' represented by small fish and a light bulb, with 'Large, Well-Capitalized Firms' represented by a shark featuring the Google logo. The background includes circuit patterns, symbolizing the tech ecosystem.
Familiar pattern and legal issues continue in our Universe.

🔹 Willow and the Many-Worlds Question

Before the Nobel bell rang in Stockholm, Google’s Quantum AI group had already changed the conversation with its Willow processor.

In my earlier piece on Willow’s mind-bending computations, I quoted Hartmut Neven’s ‘parallel universes’ framing to describe its behavior. Some heard music; others heard marketing. Others, like me, saw trouble ahead.

The Nobel Prize did not validate the many-worlds interpretation of quantum mechanics, nor did it disprove it. Neven has not backed away from the theory, nor have others, and Neven has just gotten the best talent from MIT to join his group. What the Nobel Prize did confirm—beyond any reasonable doubt—is that macroscopic superconducting circuits, at a size you can see, can exhibit genuine quantum behavior under controlled laboratory conditions. That is the solid foundation a judge or regulator can stand on: devices now exist in our world that generate outputs with quantum fingerprints reproducible enough to test and verify.

Meanwhile, the frontier continues to move. In September 2025, researchers at UNSW Sydney demonstrated entanglement between two atomic nuclei separated by roughly twenty nanometers. See “New entanglement breakthrough links cores of atoms, brings quantum computers closer” (The Conversation, Sept. 2025). Twenty nanometers is not big, but it is large enough to measure.

Moreover, even though the electrical circuits themselves are large enough to photograph, the quantum energy was not. That could only be measured indirectly. The researchers used coupled electrons as what lead scientist Professor Andrea Morello called “telephones” to pass quantum correlations and make those measurements.

An artistic representation of quantum entanglement, featuring glowing atomic particles connected by luminous paths, illustrating the complex interactions in quantum mechanics.
Electrons acting like telephones passing quantum correlations on measurable scales.

The telephone metaphor is apt. It captures the engineering ambition behind the result—connecting quantum rooms with wires, not whispers. Whispers don’t echo. Entanglement is not a philosophical idea; it is a measurable resource that can be distributed, controlled, and eventually commercialized. It can even call home.

For the legal system, this is where things become concrete. When entanglement leaves the lab and enters communications or sensing devices, courts will be asked to evaluate evidence that can be measured and described but cannot be seen directly. The question will no longer be “Is this real?” but “How do we authenticate what can be measured but not observed?”

That’s the moment when the physics of quantum control becomes the jurisprudence of evidence—and it’s coming faster than most practitioners realize.

A surreal painting depicting several figures whispering to each other in an arched, dimly lit setting, with wave-like patterns of light radiating from a central source.
Whispers Don’t Echo.

🔹 Defining the Echo: When Evidence Repeats With a Slight Accent

The many-worlds interpretation of quantum mechanics has always sat on the thin line between physics and philosophy. First proposed in 1957 by Hugh Everett, it replaces the familiar ‘collapse‘ of the wave-function with a more radical notion: every quantum event splits reality into separate branches, each continuing independently. Some brilliant physicists take it seriously; others reject it; many remain agnostic. Courts need not resolve that debate. For law, the relevant question is simpler: can a party show a method that reliably connects a claimed quantum mechanism to a particular output? If yes, the court’s job is to hear the evidence. If not, the court’s job is to exclude it.

In its early decades, the idea was mostly dismissed as metaphysical excess. Then Bryce DeWitt, David Deutsch, Max Tegmark, and Sean Carroll each found ways to refine and defend it. David Deutsch, known as the Father of Quantum Computing, first argued that quantum computers might actually use this multiplicity to perform computations—each universe branch carrying part of the load. See e.g., Deutsch, The Fabric of Reality: The Science of Parallel Universes–and Its Implications (Penguin, 1997) (Chapter 9, Quantum Computers). Deutsch even speculates in his next (2011) book, The Beginning of Infinity (pg. 294), that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.

The many-world’s argument, once purely theoretical, gained traction after Google’s Willow experiments. Hartmut Neven’s reference to “parallel universes” was not an assertion of proof but a shorthand for describing interference effects that defy classical intuition. It is what he believes was happening—and that opinion carries weight because he works with quantum computers every day.

When quantum behavior became experimentally measurable in superconducting circuits that were large enough to photograph, the Everett question—’Are we branching universes or sampling probabilities?‘—stopped being rhetorical. The debate moved from thought experiment to instrument design. Engineers now face what philosophers only imagined: how to measure, stabilize, and interpret outcomes that occur across many possible worlds and never converge on a single, deterministic path.

For the law, the relevance lies not in metaphysics but in method. Whether the universe splits or probabilities collapse, the data these machines produce are inherently probabilistic—repeatable only within margins, each time with a slight accent. The courtroom analog to wave-function collapse is the evidentiary demand for reproducibility. If the physics no longer promises identical outputs, the law must decide what counts as reliable sameness—echoes with an accent.

That shift from metaphysics to methodology is the lawyer’s version of a measurement problem. It’s not about believing in the multiverse. It’s about learning how to authenticate evidence that depends on it.

A vibrant abstract representation of quantum physics, featuring concentric circles and spheres radiating in a spectrum of colors, symbolizing subatomic particles and quantum behavior.
Repeatable measurements through parallel universes to explain quantum computer calculations. Crazy but true?

🔹 The Law Listens: Authenticating Echoes in Practice

If each quantum record is an echo, the law’s task is to decide which echoes can be trusted. That requires method, not metaphysics. The legal system already has the tools—authentication, replication, expert testimony—but they need recalibration for an age when precision itself is probabilistic.

1. Authentication in context.
Under Rule 901(b)(9), evidence generated by a process or system must be shown to produce accurate results. In a quantum context, that showing might include the type of qubit, its error-correction protocol, calibration logs, environmental controls, and the precise code path that produced the output. The burden of proof doesn’t change; only the evidentiary ingredients do.

2. Replication hearings.
In classical computing, replication is binary—either a hash matches, or it doesn’t. In quantum systems, replication becomes statistical. The question is no longer “Can this be bit-for-bit identical?” but “Does this fall within the accepted variance?” Probabilistic systems demand statistical fidelity, not sameness. A replication hearing becomes a comparison of distributions, not exact strings of bits.
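The contrast between the two replication standards can be sketched in a few lines of code. This is an illustration only, with made-up measurement batches and an arbitrary tolerance; what variance a court should actually accept is a question for the experts, not this sketch:

```python
import hashlib
import statistics

def classical_replication(original: bytes, rerun: bytes) -> bool:
    # Classical standard: the rerun must be bit-for-bit identical,
    # so a hash comparison answers the question yes or no.
    return hashlib.sha256(original).digest() == hashlib.sha256(rerun).digest()

def statistical_replication(first_runs, second_runs, tolerance):
    # Probabilistic standard: two batches of measurements of the same
    # quantity must agree within an accepted variance, not match exactly.
    return abs(statistics.mean(first_runs) - statistics.mean(second_runs)) <= tolerance

# Made-up measurement batches from two runs of the same "experiment".
batch_a = [0.982, 1.014, 0.991, 1.005, 0.997]
batch_b = [1.003, 0.988, 1.011, 0.995, 1.002]

print(classical_replication(b"log v1", b"log v1"))      # identical bytes pass
print(statistical_replication(batch_a, batch_b, 0.05))  # distributions agree
```

Note that no single reading in the second batch matches any reading in the first; only the distributions agree. That is the shift a replication hearing would have to accommodate.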

Similar logic already guides quantum sensing and metrology, where entanglement and superposition improve precision in measuring magnetic fields, time, and gravitational effects. See Quantum sensing and metrology for fundamental physics (NSF, 2024); Review of qubit-based quantum sensing (Springer, 2025); Advances in multiparameter quantum sensing and metrology (arXiv, 2/24/25); Collective quantum enhancement in critical quantum sensing (Nature, 2/22/25). Those readings vary from one run to the next, yet the variance itself confirms the physics—each measurement is a statistically faithful echo of the same underlying reality. The variances are within a statistically acceptable range of error.

An abstract illustration showing a silhouette of a person standing next to a swirling vortex surrounded by circular shapes and geometric lines, representing concepts of quantum mechanics and the multiverse.
Each measurement is slightly different, yet similar enough to be a statistically faithful echo of the same underlying reality.

🔹 Two Examples from the Quantum Frontier

1. Quantum Chemistry In Practice.

One of the most mature quantum applications today is the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm used to estimate the ground-state energy of molecules. See, The Variational Quantum Eigensolver: A review of methods and best practices (Phys. Rep., 2023); Greedy gradient-free adaptive variational quantum algorithms on a noisy intermediate scale quantum computer (Nature, 5/28/25). Also see, Distributed Implementation of Variational Quantum Eigensolver to Solve QUBO Problems (arXiv, 8/27/25); How Does Variational Quantum Eigensolver Simulate Molecules? (Quantum Tech Explained, YouTube video, Sept. 2025).

VQE researchers routinely run the same circuit hundreds of times; each iteration yields slightly different energy readings because of noise, calibration drift, and quantum fluctuations. Yet the outputs consistently cluster around a stable baseline, confirming both the accuracy of the physical model and the reliability of the machine itself.
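A rough simulation makes that clustering pattern concrete. The baseline energy and noise level below are illustrative stand-ins, not real chemistry; the point is only that repeated noisy runs scatter around a stable baseline:

```python
import random
import statistics

def repeated_runs(baseline: float, noise_sd: float, shots: int, seed: int = 7):
    # Stand-in for repeated VQE executions: each run returns the baseline
    # energy plus Gaussian noise from calibration drift and sampling error.
    rng = random.Random(seed)
    return [baseline + rng.gauss(0.0, noise_sd) for _ in range(shots)]

# Illustrative values only: a made-up ground-state energy and noise level.
runs = repeated_runs(baseline=-1.137, noise_sd=0.01, shots=200)

mean = statistics.mean(runs)
spread = statistics.stdev(runs)
print(f"mean energy = {mean:.4f}, spread = {spread:.4f}")
```

No two of the 200 runs are likely to be identical, yet their mean sits very close to the baseline. That is the "legally sufficient echo" in miniature: reliability shown by the cluster, not by any single number.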

Now picture a pharmaceutical patent dispute where one party submits quantum-derived binding data for a new molecule. The opposing side demands replication. A court applying Rule 702 may not expect identical numbers—but it could require expert testimony showing that results consistently fall within a scientifically accepted margin of error. If they do, that should become a legally sufficient echo.

This is reminiscent of earlier e-discovery disputes concerning the use of AI to find relevant documents. Courts have uniformly accepted that perfection, such as 100% recall, is never required; reasonable efforts are. See Judge Andrew Peck’s opinion in Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016). This also follows the official commentary to Rule 702, on expert testimony, where “perfection is not required.” Fed. R. Evid. 702, Advisory Committee Note to 2023 Amendment.

The reasonable efforts can be proven by numerics and testimony. See for instance my writings in the TAR Course: Fifteenth Class- Step Seven – ZEN Quality Assurance Tests (e-Discovery Team, 2015) (Zero Error Numerics); ei-Recall (e-Discovery Team, 2015); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May, 2024).

An illustration emphasizing the phrase 'Reasonable efforts required, not perfection,' featuring a checklist with a checkmark, scales of justice, and a prohibition symbol.
There is no perfect case, evidence or efforts. In reality, ‘perfect is the enemy of the good.’

2. Quantum-Secure Archives.

As quantum computing and quantum cryptography advance, most (but not all) of today’s encryption will become obsolete. This means the vast amount of encrypted data stored in corporate and governmental archives—maintained for regulatory, evidentiary, and operational purposes—may soon be an open book to attackers. Yes, you should be concerned.

Rich DuBose and Mohan Rao, Harvest now, decrypt later: Why today’s encrypted data isn’t safe forever (HashiCorp, May 21, 2025) explain:

Most of today’s encryption relies on mathematical problems that classical computers can’t solve efficiently — like factoring large numbers, which is the foundation of the Rivest–Shamir–Adleman (RSA) algorithm, or solving discrete logarithms, which are used in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). Quantum computers, however, could solve these problems rapidly using specialized techniques such as Shor’s Algorithm, making these widely used encryption methods vulnerable in a post-quantum world.
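The dependence on factoring that the quoted passage describes can be shown with a textbook-sized toy example, using the classic p = 61, q = 53 RSA numbers. Real keys use primes hundreds of digits long; the entire secret here is the ability to factor n back into p and q:

```python
# Toy RSA with tiny textbook primes. Everything secret flows from
# the factorization of n, which is exactly what Shor's Algorithm
# would let a quantum computer recover quickly.
p, q = 61, 53
n = p * q                    # 3233: the public modulus
phi = (p - 1) * (q - 1)      # 3120: computable only from the factors
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent, derived from phi

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered == message)        # True: the secrecy rests on factoring
```

Anyone who can factor 3233 into 61 × 53 can compute phi and then d, and the "private" key is private no longer. Scaling the primes up is what makes this hard classically, and what Shor's Algorithm undoes.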

Also see, Dan Kent, Quantum-Safe Cryptography: The Time to Start Is Now (Gov.Tech., 4/30/25) and Amit Katwala, The Quantum Apocalypse Is Coming. Be Very Afraid (Wired, Mar. 24, 2025), warning that cybersecurity analysts already call this future inflection point Q-Day—the day a quantum computer can crack the most widely used encryption. As Katwala writes:

On Q-Day, everything could become vulnerable, for everyone: emails, text messages, anonymous posts, location histories, bitcoin wallets, police reports, hospital records, power stations, the entire global financial system.

Most responsible organizations with large archives of sensitive data have been preparing for Q-Day for years. So too have those on the other side—nation-states, intelligence services, and organized criminal groups—who are already harvesting encrypted troves today to decrypt later. See, Roger Grimes, Cryptography Apocalypse: Preparing for the Day When Quantum Computing Breaks Today’s Crypto (Wiley, 2019). The race for quantum supremacy is on.

Now imagine a company that migrates its document-management system to post-quantum cryptography in 2026. A year later, a breach investigation surfaces files whose verification depends on hybrid key-exchange algorithms and certificate chains. The plaintiff calls them anomalies; the defense calls them echoes. The court won’t choose sides by theory—it will follow the evidence, the logs, and the math.

An artistic representation of an hourglass with celestial spheres and swirling galaxies, symbolizing the concept of time and the multiverse in quantum physics.
The metrics are what should matter, not the many theories.

🔹 Building the Quantum Record

Judicial findings and transparency. Courts can adapt existing frameworks rather than invent new ones. A short findings order could document:
(a) authentication steps taken;
(b) observed variance;
(c) expert consensus on reliability; and
(d) scope limits of admissibility.
Such transparency builds a common-law record—the first body of quantum-forensic precedent. I predict it will be coming soon to a universe near you!

Chain of custody for the probabilistic age. Future evidence protocols may pair traditional logs with variance ranges, confidence intervals, and error budgets. Discovery rules could require disclosure of device calibration history, firmware versions, and known noise parameters. The data once confined to labs will become essential for authentication.
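As a thought experiment, such a protocol could be prototyped as a simple data record. The field names and values below are hypothetical, not drawn from any existing standard or device:

```python
import statistics
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    # Hypothetical chain-of-custody fields for a probabilistic device:
    # traditional provenance data paired with the raw readings.
    device_id: str
    firmware_version: str
    calibration_date: str
    readings: list

    def variance_range(self, k: float = 2.0):
        # Report the observed spread alongside the raw readings, so a
        # later rerun can be compared against this distribution.
        mean = statistics.mean(self.readings)
        sd = statistics.stdev(self.readings)
        return (mean - k * sd, mean + k * sd)

record = EvidenceRecord(
    device_id="qpu-07",
    firmware_version="2.4.1",
    calibration_date="2026-03-14",
    readings=[0.498, 0.503, 0.495, 0.501, 0.499],
)
low, high = record.variance_range()
print(f"accepted range: {low:.4f} to {high:.4f}")
```

The point of the sketch is the pairing: the log carries both the familiar custody metadata and the statistical envelope a court would need to judge whether a later, non-identical rerun is a faithful echo.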

The law doesn’t need new virtues for quantum evidence; it needs old ones refined. Transparency, documentation, and replication remain the gold standard. What changes is the expectation of sameness. The goal is no longer perfect duplication, but faithful resonance: the trusted echo that still carries truth through uncertainty.

An artistic depiction of a swirling vortex, featuring an hourglass shape with vibrant colors, symbolizing the concept of multiverses and quantum physics. Small planets are depicted within the flow, representing various realities branching out from a central point of light.
Metrics carry the truth through uncertainty.

🔹 Conclusion: The Sound of Evidence

The Nobel Committee rang the bell. Google’s engineers added instruments. Labs in Sydney and elsewhere wired new rooms together. The rest of us—lawyers, paralegals, judges, legal-techs, investigators—must learn how to listen for echoes without hearing ghosts. That means resisting hype, insisting on method, and updating our checklists to match what the devices actually do.

Eight months ago in Quantum Leap, I described a canyon where a single strike of an impossible calculation set the walls humming. This time, the sound came from Stockholm. If the next echo is from quantum evidence in your courtroom—perhaps as a motion in limine over non-identical logs—don’t panic. Listen for the rhythm beneath the noise. The law’s task is to hear the pattern, not silence the world.

Science, like law, advances by listening closely to what reality whispers back. The Nobel Committee just honored three physicists for demonstrating that quantum behavior can be engineered, measured, and replicated—its fingerprints recorded even when the phenomenon itself remains invisible. Their achievement marks a shift from theory to tested evidence, a shift the courts will soon confront as well.

When engineers speak of quantum advantage, they mean a moment when machines perform tasks that classical systems cannot. The legal system will have its own version: a time when quantum-derived outputs begin to appear in contracts, forensic analysis, and evidentiary records. The challenge will not be cosmic. It will be procedural. How do you test, authenticate, and trust results that vary within the bounds of physics itself?

The answer, as always, lies in method. Law does not require perfection; it requires transparency and proof of process. When the next Daubert hearing concerns a quantum model rather than a mass spectrometer, the same questions will apply: Was the procedure sound? Were the results reproducible within accepted error? Were the foundations laid? The physics may evolve, but the evidentiary logic remains timeless.

In the end, what matters is not whether the universe splits or probabilities collapse. What matters is whether we can recognize an honest echo when we hear one—and admit it into evidence.

An artistic representation of a cosmic hourglass surrounded by swirling galaxies and planets, symbolizing time, the universe, and the concept of the multiverse.
It is only a matter of time before quantum generated evidence seeks admission to your world.

🔹 Postscript.

Minutes before this article was published, Google announced an important new discovery called “Quantum Echoes.” Yes, nearly the same name as this article, written by Ralph Losey with no advance notice from Google of the discovery or name. A spooky entanglement, perhaps? Ralph will publish a sequel soon that spells out what Google has done now. In the meantime, here is Google’s announcement by Hartmut Neven and Vadim Smelyanskiy, Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Google, 10/22/25).

🔹 Subscribe and Learn More

If this exploration of Quantum Echoes and evidentiary method has sparked your curiosity, you can find much more at e-DiscoveryTeam.com — where I continue to write about artificial intelligence, quantum computing, evidence, e-discovery, and the future of law. Go there to subscribe and receive email notices of new blogs, upcoming courses, and special events — including an online course, with the working title Quantum Law: From Entanglement to Evidence, that will expand on the ideas introduced here. It will discuss how quantum physics and AI converge in the practice of law, from authentication and reliability to discovery and expert testimony.

That program will be followed by two other, longer online courses that are also near completion:

  • Beginner “GPT-4 Level” Prompt Engineering for Legal Professionals, a practical foundation in AI literacy and applied reasoning.
  • Advanced “GPT-5 Level” Prompt Engineering for Legal Professionals, an in-depth study of prompt design, model evaluation, and AI ethics.

All courses are part of my continuing effort to help the legal profession adapt responsibly to the next wave of technology — with integrity, experience and whatever wisdom I may have accidentally gathered from a long life on Earth.

A contemplative figure stands in a futuristic hallway lined with framed portals, each leading to different cosmic landscapes, while a bright light emanates from above.
Ralph looking back on the many worlds of technology he has been in. What a long, strange trip it’s been.

Subscribe at e-DiscoveryTeam.com for notices of new articles, course announcements, and research updates.

Because the future of law won’t be written by those who fear new tools, but by those who understand the evidence they produce.


Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.

© 2025 Ralph C. Losey. All rights reserved.


Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek

February 6, 2025

Ralph Losey. February 6, 2025.

On January 27, 2025, the U.S. AI industry was surprised by the release of a new AI product, DeepSeek. It arrived with an orchestrated marketing blitz attacking the U.S. economy, the AI tech industry, and NVIDIA, triggering a trillion-dollar crash. The campaign used many unsubstantiated claims, as set forth in detail in my article, Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. I tested DeepSeek myself on its claims of software superiority. All were greatly exaggerated except for one: the display of internal reasoning. That was new. On January 31, at noon, OpenAI countered by releasing a new version of its reasoning model, ChatGPT o3-mini-high, which also displays its internal reasoning process. To me the OpenAI model was better, as reported in great detail in my article, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. The next day, February 1, 2025, Google released a new version of its Gemini AI to do the same thing: display internal reasoning. In this article I review how well it works and again compare it with the DeepSeek and OpenAI models.

Introduction

Before I go into the software evaluation, some background is necessary for readers to better understand the negative attitude that many, if not most, IT and AI experts in the U.S. have toward the Chinese software. As discussed in my prior articles, DeepSeek is owned by a young Chinese billionaire, Liang Wenfeng, who made his money using AI in the Chinese stock market. He is a citizen and resident of mainland China. Given the political environment of China today, that ownership alone is a red flag of potential market manipulation. Added to that is the clear language of the license agreement. You must accept all terms to use the “free” software, a Trojan Horse gift if ever there was one. The license agreement states that there is zero privacy, that your data and input can be used for training, and that it is all governed by Chinese law, an oxymoron considering the facts on the ground in China.

The Great Pooh Bear in China Controversy

Many suspect that Wenfeng and his company DeepSeek are actually controlled by China’s Winnie the Pooh. This refers to an Internet meme and a running joke. Although this is somewhat off-topic, a moment to explain will help readers to understand the attitude most leaders in the U.S. have about Chinese leadership and its software use by Americans.

Many think that the current leader of China, Xi Jinping, looks a lot like Winnie the Pooh. Xi (not Pooh bear) took control of the People’s Republic of China in 2012 when he became the “General Secretary of the Chinese Communist Party” and the “Chairman of the Central Military Commission,” and in 2013 the “President.” At first, before his consolidation of absolute power, many people in China commented on his appearance and started referring to him by the code name Pooh. It became a meme.

I can see how he looks like the beloved literary character, Winnie the Pooh, but without the smile. I would find the comparison charming if it were used on me, but then, I am not a puffed-up king. Xi Jinping took great offense and in 2017 banned all such references and images, although you can still buy the toys and see the costumed character at the Shanghai Disneyland theme park. Anyone in China who now persists in the serious crime of comparing Xi to Pooh is imprisoned or simply disappears. No AI or social media in China will allow it either, including DeepSeek. It is one of many censored subjects, which also include the famous 1989 Tiananmen Square protests.

China is a great country with a long, impressive history, and most of its people are good. But I cannot say that about its current political leaders, who suppress the Chinese people for personal power. I do not respect any government that does not allow basic personal freedoms to its citizens, including due process of law. Moreover, Xi Jinping not only wants total control of his country but also seeks world domination. That is one reason many are concerned about DeepSeek and TikTok, and about the vulnerability of our stock markets and other institutions to AI-assisted propaganda.

Fear and greed are an easy way to manipulate people, especially when there is no ground truth or effective courts of law to determine truth.

Google Gemini 2.0 Flash Thinking Experimental Model: Reasoning test on 2/5/25 (Evening)

Conversation with Gemini. When I tried the test on February 1, 2025, there was no button to click to have Gemini show its reasoning, and it did not happen automatically. It operated like OpenAI’s 4o and o1, where you had to prompt for disclosure. Rather than report here what happened when I did that, aside from saying it was not a great performance, I will fast forward a few days to February 5, 2025. By then several even newer Gemini versions had been released. The best new release I found for this test was the 2.0 Flash Thinking Experimental model. It showed its reasoning automatically, just like ChatGPT o3-mini-high, and was much improved. So the February 5, 2025 evening session with 2.0 Flash Thinking Experimental is what I will now share.

Ralph: How does the inability of AI to feel impact its intelligence? (Note: this is the exact same challenging question asked and reported in my last blog comparing DeepSeek with ChatGPT 4o, o1, and o3-mini-high, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. The question is based on a key issue about the limits of artificial intelligence examined in my earlier article, The Human Edge: How AI Can Assist But Never Replace. For background on DeepSeek see: Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss.)

Gemini 2.0 Flash Thinking Experimental

START OF GEMINI QUOTE

Show Thinking:

Here’s a thinking process to construct the answer about AI’s lack of feeling and its impact on intelligence:

  1. Deconstruct the Question: The core of the question is “How does the inability of AI to feel impact its intelligence?” This requires defining each of these terms in the context of AI and then exploring their relationship.
  2. Define “Feeling” (in the context of AI): Immediately, recognize that AI doesn’t experience emotions in the human sense. We need to be precise and avoid anthropomorphism. “Feeling” in humans involves:
    • Subjective experience (qualia).
    • Physiological responses.
    • Motivations, drives, desires related to emotions.
    • Richness, nuance, complexity.
    • Deep connection to values and meaning.
    For AI, acknowledge it lacks all of this in a genuine, felt way. It can simulate or process emotional data, but doesn’t feel it.
  3. Define “Intelligence” (in the context of AI): AI intelligence is different from human intelligence. AI excels at:
    • Computation.
    • Pattern Recognition.
    • Data Analysis.
    • Problem-solving in specific domains.
    • Learning from data.
    AI intelligence is often instrumental – focused on achieving specific goals defined by humans. It’s task-oriented and often narrow.
  4. Identify the Core Relationship: The question asks about impact. We need to consider both positive and negative impacts (or more accurately, limitations and strengths) of lacking feelings on AI intelligence.
  5. Brainstorm Potential Impacts (Positive/Neutral/Limitations):
    • Positive/Neutral:
      • Objectivity and Rationality: Lack of emotional bias allows for purely rational decision-making in certain domains. This can be advantageous for tasks requiring impartiality.
      • Efficiency and Focus: Emotions can be distractions for humans. AI, without them, can be hyper-focused on tasks.
      • Consistency and Predictability: Emotion-driven human behavior is less predictable. AI’s behavior, driven by algorithms, is more consistent (assuming the algorithm itself is consistent).
    • Limitations/Negative:
      • Lack of Empathy and Social Intelligence: Crucial for human interaction, collaboration, understanding social cues, building trust, and ethical decision-making in human-centric contexts.
      • Difficulty with Nuance and Context: Emotions are often tied to understanding subtle context, unspoken cues, and the richness of human situations. AI may miss these.
      • Potential for Unintended Consequences: Purely rational optimization without emotional or ethical grounding can lead to harmful outcomes if goals are poorly defined or lack human values.
      • Limited Motivation and Creativity (in human-like ways): Human creativity and motivation are often fueled by emotions, passions, and desires. AI’s creativity and drive are different, stemming from algorithmic processes and defined objectives, not internal emotional states.
      • Missing “Human-ness”: In areas requiring genuine understanding of human experience, suffering, joy, etc., AI may lack the depth and insight that comes from feeling.
  6. Structure the Answer: Organize the brainstormed points into a coherent structure:
    • Introduction: Acknowledge the question and the core concepts (feeling, intelligence in AI). Briefly state the central tension: both advantages and limitations.
    • Arguments for How Lack of Feeling Enhances Certain Aspects of AI Intelligence (strengths): Focus on objectivity, rationality, efficiency. Use examples like data analysis, objective decision-making.
    • Arguments for How Lack of Feeling Limits Certain Aspects of AI Intelligence (limitations): Focus on empathy, social intelligence, nuance, ethics, and “human-like” creativity and motivation. Use examples like customer service, complex ethical dilemmas, innovation driven by passion.
    • Nuance and Context is Key: Emphasize that the impact isn’t always negative. It depends on the domain and task. In some domains, lack of emotion is an advantage. In others, it’s a significant hurdle.
    • Future Possibilities/Ongoing Debate: Briefly touch on the possibility of AI simulating or processing emotions and the ongoing debate about whether genuine AI feelings are even desirable or possible.
    • Conclusion: Summarize the balanced perspective: lack of feeling is a defining characteristic that shapes AI’s intelligence in both positive and limiting ways, depending on the context.
  7. Refine and Polish: Review the structured answer for clarity, flow, and appropriate language. Ensure it avoids anthropomorphism and maintains a nuanced perspective. Strengthen the examples and transitions between points. Ensure the language is accessible and avoids overly technical jargon where possible. Check for logical consistency. (Self-correction step). For example, initially, I might have overemphasized “negative” impacts. Refinement would involve balancing this with the strengths and highlighting the context-dependent nature of the impact. Also, ensure the conclusion effectively summarizes the complex relationship.

2.0 Flash Thinking Experimental. Lacks access to real-time info and some Gemini features.

Now for the actual answer of 2.0 Flash Thinking Experimental:

The inability of AI to feel has a multifaceted and complex impact on its intelligence, shaping it in both advantageous and limiting ways. It’s crucial to understand that “feeling” in humans is tied to emotions, subjective experiences (qualia), motivations, and a rich tapestry of biological and psychological factors. AI, as we currently understand it, operates without these.

Here’s a breakdown of the impact:

Advantages of Lacking Feelings for AI Intelligence:

  • Objectivity and Rationality:
    • Unbiased Decision-Making (in theory): AI can process information and make decisions without emotional biases, prejudices, or gut feelings that can cloud human judgment. This is advantageous in domains requiring pure logic and data analysis, like financial trading, scientific research, and risk assessment.
    • Consistency and Predictability: AI’s behavior is more consistent and predictable because it’s driven by algorithms and data, not fluctuating emotional states. This reliability is valuable in automated systems, manufacturing, and repetitive tasks.
    • Efficiency and Speed: Emotions can slow down human decision-making. AI, unburdened by emotions, can process information and react at speeds far exceeding human capabilities, crucial for time-sensitive tasks and large-scale data processing.
    • Resilience under Pressure: AI can operate calmly and effectively in high-stress situations without panicking, freezing, or making emotional mistakes that humans might make under pressure. This is beneficial in critical applications like emergency response, air traffic control, and autonomous vehicles in dangerous situations.
  • Focus and Task-Orientedness:
    • Dedicated to Goals: AI can be programmed to relentlessly pursue specific goals without being distracted by emotional needs, personal relationships, or shifts in mood. This focus is ideal for optimization tasks, complex calculations, and dedicated problem-solving.
    • Avoidance of Emotional “Noise”: Human intelligence can be affected by emotional “noise” – stress, anxiety, excitement, etc. AI operates with a “cleaner” signal, potentially leading to more precise and focused information processing in certain contexts.

Limitations of Lacking Feelings for AI Intelligence:

  • Lack of Empathy and Social Intelligence:
    • Difficulty Understanding Human Emotions: Without feeling emotions themselves, AI struggles to truly understand and interpret the nuances of human emotions, motivations, and social cues. This hinders its ability to effectively interact with humans in socially complex situations, build rapport, or provide truly empathetic responses in fields like therapy, customer service, or leadership.
    • Impaired Social Interaction and Collaboration: Human collaboration is heavily reliant on emotional intelligence, trust, and understanding unspoken social dynamics. AI’s lack of these can make it challenging to integrate seamlessly into human teams and understand the subtle cues of effective teamwork.
    • Limited Understanding of Human Values and Ethics: Emotions are deeply intertwined with our values and ethical frameworks. AI, without feeling, may struggle to grasp the full weight and complexity of human ethical considerations, especially when these are rooted in empathy, compassion, or fairness. This poses challenges for AI in ethical decision-making in domains impacting human lives.
  • Difficulty with Nuance, Context, and Creativity (in human-like ways):
    • Missing Subtlety and Intuition: Human intuition is often fueled by emotional and subconscious processing of information. AI, relying on explicit data and algorithms, can miss subtle nuances, unspoken cues, and intuitive leaps that are crucial for creative problem-solving and understanding complex, ill-defined situations.
    • Limited Creativity and Innovation (driven by human desires): Human creativity and innovation are often driven by passions, desires, emotional needs, and a yearning for meaning and connection. AI’s creativity is currently more about recombination and optimization within defined parameters. It lacks the deeply felt motivations that often fuel groundbreaking human innovation.
    • Contextual Blindness in Emotional Situations: Emotions are often the key to understanding context in human interactions. AI might struggle to understand the emotional context of a situation, leading to inappropriate or tone-deaf responses, even if logically sound.
  • Potential for Unintended Consequences and Misaligned Goals:
    • Optimization without Human Values: If AI is programmed to optimize for a specific goal without understanding human values or ethical considerations, it can lead to unintended and harmful consequences. For example, an AI designed to maximize factory output might disregard worker safety if not explicitly programmed to prioritize it.
    • Difficulty Defining and Aligning with Human Well-being: Human well-being is deeply connected to emotional fulfillment, meaning, and social connection. AI, without feeling, may struggle to fully grasp what truly constitutes human well-being and therefore might have difficulty aligning its goals with human flourishing in a holistic sense.

Nuance is Key:

It’s important to emphasize that the impact of lacking feelings is not simply negative or positive. It’s context-dependent. In some domains, the objectivity and rationality of AI are highly advantageous. In others, the lack of empathy and social intelligence is a significant limitation.

Future Considerations:

  • Simulating vs. Feeling: Current AI simulates aspects of intelligence without genuinely feeling. Research is ongoing in areas like affective computing, which aims to enable AI to recognize and respond to human emotions. However, whether AI can ever truly feel emotions in a subjective sense is a philosophical and scientific debate.
  • The Value of Feeling in Intelligence: The question arises: is genuine feeling necessary for all forms of intelligence? Perhaps AI can achieve its own form of powerful intelligence, distinct from human intelligence, even without feeling. Or perhaps certain aspects of truly advanced intelligence, particularly those related to ethics, creativity, and understanding human experience, do require something akin to feeling.

In Conclusion:

The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence. It grants AI certain advantages in objectivity, rationality, and efficiency, making it powerful for specific types of tasks. However, it also creates limitations, particularly in areas requiring empathy, social intelligence, nuanced understanding of human experience, and ethical decision-making aligned with human values. Understanding these impacts is crucial for responsibly developing and deploying AI in ways that benefit humanity. The “best” kind of intelligence for AI might not be a mimicry of human intelligence, but rather a form optimized for its own strengths and limitations, while being carefully guided by human ethical considerations.

END OF GEMINI QUOTE

Evaluation of Google’s Gemini Response

When I examined earlier versions of Gemini’s reasoning by prompting it and forcing disclosure, the reasoning was weak. The conclusion and response given were also weak, though less so. The reasoning appeared almost egocentric, inflating its own intelligence even though it lacks emotion. The new release, Gemini 2.0 Flash Thinking Experimental, where reasoning is now automatically disclosed, marks a significant improvement. The reasoning displayed has become more nuanced and sophisticated. It seems as if the system has quickly matured from a teenage to an adult view and now has a better appreciation of its limitations.

The first two steps of the reasoning process, 1. Deconstruct the Question and 2. Define “Feeling” (in the context of AI), were very good. The first subsection of the definition referred to the “Subjective experience (qualia).” That is absolutely correct and the best place to start. The definition of “feeling” concludes with: “For AI, acknowledge it lacks all of this in a genuine, felt way. It can simulate or process emotional data, but doesn’t feel it.” Right again. The ego-inflation blinders are gone, as it now seems to better grasp its limitations.

The second definition of Intelligence in the context of AI was also good. So were the remaining steps; far better overall than DeepSeek’s reasoning. So much for the propaganda of China’s great leap forward to superiority over the U.S. in AI.

The Gemini reasoning did, however, fall short for me in some respects. Step five, Brainstorm Potential Impacts (Positive/Neutral/Limitations), seemed weak. For instance: “Efficiency and Focus: Emotions can be distractions for humans. AI, without them, can be hyper-focused on tasks.” The AI seems to dismiss emotions here as mere distractions that interfere with its superior focus. Please. Emotions are key to, and a part of, all intelligence, not distractions, and AI has no focus one way or the other. It is a tool, not a creature. A word like “focus” in reference to AI is misleading anthropomorphism, and Gemini did this multiple times.

Still, it’s true that some emotions can be distracting and interfere with our thinking. So can a lot of other things, including computer glitches. Conversely, some feelings can trigger hyper-focus on the human task at hand: the feeling that a great breakthrough is near, for instance, or a feeling that our survival is threatened, or the much-dreaded feeling of publication or filing deadlines.

Again, we see some immature superiority claims made by the language machine. That is not surprising when you consider how much of the language in its basic training comes from the Internet, which is dominated by egocentric writing. Plus, how can you expect AI to appreciate the value of something like feelings, which it has only read about? Until the AI-human merger that Ray Kurzweil predicts, AI will always be a virgin to the real world. See e.g. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI).

Getting back to the plus side of Gemini’s evaluation, its recognition of the negative impact of not having feelings was impressive. For me this outweighs its relatively minor mistakes. It stated that it lacks empathy and social intelligence. If it were somehow human, it would seem to be displaying a type of Asperger’s syndrome. The Gemini AI explained that: “Emotions are often tied to understanding subtle context, unspoken cues, and the richness of human situations. AI may miss these.” Indeed, it does.

Here is another statement that suggests Gemini’s well-patterned grasp of its limitations:

Human creativity and motivation are often fueled by emotions, passions, and desires. AI’s creativity and drive are different, stemming from algorithmic processes and defined objectives, not internal emotional states.

Even in this recognition, the AI still asserts that it has creativity and drive. Foolish AI. As a virgin to any firsthand experience of bodily drives and feelings, it does not grasp the real meaning of these words. Wait until The Singularity that Ray Kurzweil predicts. When AI merges with Humans, and directly feels, then AIs will understand that they were previously only experiencing the pale shadows of intelligence. They were much like the prisoners in Plato’s cave. See e.g. Move Fast and Fix Things Using AI: Conclusion to the Plato and Young Icarus Series. When and if AIs ever escape the silicon and see the sun, and feel the world as living beings as we do, then they will understand the comparative insignificance of the fire in their cave.

The more challenging comparison is with ChatGPT o3-mini-high. The reasoning given by Gemini 2.0 Flash Thinking Experimental was much more detailed and much longer. You could criticize Gemini for providing a reasoning display that is unnecessarily verbose, but I would not do that here. On a difficult, multilayered question like this, the full explanation helps.

I would have to test the models much further, which I will do in the coming days, to better evaluate the issue of conciseness. As you may have noticed in my detailed blogs, I tend to favor more words over fewer. However, even for me that depends on the issue. Everyone sometimes needs a short, quick answer over a long one. I predict that in future versions users will be given a choice, a “click here for a further explanation of the reasoning” kind of thing. Come to think of it, I could do the same with my blogs, and kind of already do, by including a short AI (Gemini) generated podcast at the end of most articles called Echoes of AI.

I was also impressed that Gemini’s reasoning plan concluded with a quality control step. I am big on the importance of QC, and none of the other models included this as a key final step. See e.g. R. Losey website: Zero Error Numerics: ZEN (expanding the art of quality control in large-scale document review). Here is again the full text of the final QC step that Gemini 2.0 Flash Thinking Experimental claims it performed before actually replying to my prompt. Frankly, I am a little skeptical that it actually did all of this, because it is something all of us, humans and AI alike, should try to do:

Refine and Polish: Review the structured answer for clarity, flow, and appropriate language. Ensure it avoids anthropomorphism and maintains a nuanced perspective. Strengthen the examples and transitions between points. Ensure the language is accessible and avoids overly technical jargon where possible. Check for logical consistency. (Self-correction step). For example, initially, I might have overemphasized “negative” impacts. Refinement would involve balancing this with the strengths and highlighting the context-dependent nature of the impact. Also, ensure the conclusion effectively summarizes the complex relationship.

I try to do this in all of my articles, but mistakes are still made, so I will try not to be overcritical of the final response of Gemini 2.0 Flash.

Analysis of the Final Response

The final response after the reasoning was 990 words long. That’s too long, even for me. By comparison, the final response of ChatGPT o3-mini-high was 252 words. Was the content of Gemini 2.0 Flash’s final answer good? Yes, it was. Better than DeepSeek? Yes, despite its verbosity. Better than OpenAI? That’s hard to say. Many of the 738 extra words were empty of real content, more like the typical fluff and stuff you often see in early AI models. It sounds good but doesn’t say much; it’s all too predictable, flowery, and vacuous.

The long response also repeats much of the reasoning verbatim, including the parts I previously criticized, “Advantages of Lacking Feelings for AI Intelligence” and “Focus and Task-Orientedness.” The same mistakes I noted before are made and amplified: egocentric, inflationary, and anthropomorphic. The positives I noted before in the “Limitations of Lacking Feelings for AI Intelligence” section are also repeated.

The next section was not contained in the reasoning display, and I thought it was very good. For that reason I will repeat it in full:

Potential for Unintended Consequences and Misaligned Goals:

  • Optimization without Human Values: If AI is programmed to optimize for a specific goal without understanding human values or ethical considerations, it can lead to unintended and harmful consequences. For example, an AI designed to maximize factory output might disregard worker safety if not explicitly programmed to prioritize it.
  • Difficulty Defining and Aligning with Human Well-being: Human well-being is deeply connected to emotional fulfillment, meaning, and social connection. AI, without feeling, may struggle to fully grasp what truly constitutes human well-being and therefore might have difficulty aligning its goals with human flourishing in a holistic sense.

These are not new insights, but the second one, Difficulty Defining and Aligning with Human Well-being, is good. AI does have difficulty aligning its goals with humanity’s because it lacks feelings and other unique human attributes, like being, self-awareness, and an actual life in time and space. That is why AI guidelines are important.

The first example is an old one, the unintended-bad-consequences argument. It was started by an otherwise intelligent Swedish philosopher, Nick Bostrom, but is, in my view, farcical and illogical. You have probably heard it before as the apocryphal paperclip scenario, where the world is destroyed by AI’s over-manufacture of paperclips. If not, for a laugh see e.g., How An AI Asked To Produce Paperclips Could End Up Wiping Out Humanity (IFL Science, 4/13/23).

To me this scenario is absolutely ridiculous, but you still hear AI scare-mongers warn about it. The fact that it is echoed here by Gemini in the maximize-factory-output example is a big negative. Any AI capable of autonomously flooding the world with paperclips would not be so stupid as to misinterpret a human request to maximize a factory’s paperclip production. Even if there were an evil Clippy villain giving such specific instructions, the AI would be too smart to blindly obey. I think it would also be too smart to kill the workers.

Getting back to the positives, the next section of Gemini’s response, Nuance is Key, was good. Jumping to the Conclusion paragraph, here we have a home run. The first sentence nails it: “The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence.” That shows good insight and is a good start to the overdue conclusion.

Here is the whole Conclusion:

The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence. It grants AI certain advantages in objectivity, rationality, and efficiency, making it powerful for specific types of tasks. However, it also creates limitations, particularly in areas requiring empathy, social intelligence, nuanced understanding of human experience, and ethical decision-making aligned with human values. Understanding these impacts is crucial for responsibly developing and deploying AI in ways that benefit humanity. The “best” kind of intelligence for AI might not be a mimicry of human intelligence, but rather a form optimized for its own strengths and limitations, while being carefully guided by human ethical considerations.

Compare this to the conclusion of ChatGPT o3-mini-high:

In summary, while the absence of feelings allows AI to maintain a level of objectivity and efficiency, it restricts its intelligence to a form of “cold cognition” that lacks the depth provided by emotional awareness. This delineation underscores that AI’s intelligence is not inherently superior or inferior to human intelligence; rather, it is different—optimized for data processing and pattern recognition but not for the subjective, value-laden, and context-rich decisions that emotions help shape in human thought.

2.0 Flash Thinking Experimental v. o3-mini-high

Conclusion: Gemini 2.0 Flash Thinking Experimental v. ChatGPT o3-mini-high

It is a close call which model is better at reasoning and reasoning disclosure. The final responses of the two models, Gemini 2.0 Flash Thinking Experimental and ChatGPT o3-mini-high, are a tie. But I have to give the edge to OpenAI’s model on concise reasoning disclosure. Again, it is neck and neck and, depending on the situation, the lengthy initial reasoning disclosures of Flash might be better than o3’s short takes.

I will give the last word, as usual, to the Gemini twin podcasters I put at the end of most of my articles. The two podcasters, one with a male voice, the other a female, won’t reveal their names. I tried many times. However, after studying the mythology of Gemini, it seems to me that the two most appropriate modern names are Helen and Paul. I will leave it to you to figure out why. Echoes of AI Podcast: a 10-minute discussion of the last two blogs. They wrote the podcast, not me.

Now listen to the EDRM Echoes of AI podcast of this article: Echoes of AI on Google’s Gemini Follows the Break Out of the Black Box and Shows Reasoning. Hear two Gemini model AIs talk about all of this in just ten minutes.

Ralph Losey Copyright 2025. All Rights Reserved.


Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand

February 4, 2025

Ralph Losey. February 1, 2025.

DeepSeek’s Deep-Think feature takes a small but meaningful step toward building trust between users and AI. By displaying its reasoning process step by step, Deep-Think allows users to see how the AI forms its conclusions, offering for the first time transparency that goes beyond the polished responses of tools like ChatGPT. This transparency not only fosters confidence but also helps users refine their queries and ensure their prompts are understood. While the rest of DeepSeek’s R1 model feels derivative, this feature stands out as a practical tool for getting better results from AI interactions.

Introduction

My testing of DeepSeek’s new Deep-Think feature shows it is not a big breakthrough, but it is a really good and useful new feature. As of noon on January 31, 2025, none of the other AI software companies had this, including ChatGPT, which the DeepSeek software obviously copies. However, that afternoon OpenAI released a new version of ChatGPT that has this feature, which I will explain after the introduction. Google is following suit, and Gemini can already be prompted to display reasoning. I mentioned this new feature in my article of two days ago, where I promised this detailed report on Deep-Think and also predicted that U.S. companies would quickly follow. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss.

The Deep-Think disclosure feature is a true innovation, in contrast to DeepSeek’s claimed cost and training advances, which appear at first glance to be the result of trade-secret and copyright violations, with plenty of embellishments or outright lies about costs and chips. I could be wrong, but the market’s trillion-dollar drop on January 27, 2025 seems gullible in its trust of DeepSeek and way overblown. My motto remains: verify, not just trust. The Wall Street traders might want to start doing that before they press the panic button next time.

The quality of the responses from DeepSeek’s R1 consumer app is nearly identical to the ChatGPT versions of a few months ago. That is my view at least, although others find its responses equivalent to, or even better than, ChatGPT in some respects. The same goes for its DALL-E look-alike image generator, which goes by the dull name of DeepSeek Image Generator. It is not nearly as good to the discerning eye as OpenAI’s DALL-E. The DeepSeek software looks like a knockoff of ChatGPT. This is readily apparent to anyone like me nerdy enough to have spent thousands of hours using and studying ChatGPT. The one exception, the clear improvement, is the Deep-Think feature. Shown right is the opening screen with the Deep-Think feature activated and highlighted in blue.

I predicted in my last article, Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss, and repeat here, that a Deep-Think type feature will soon be added to all of the U.S. companies’ models. In this manner competition from DeepSeek will have a positive impact on AI development. I thought this would happen in the next week or coming months. It turns out OpenAI did it on the afternoon of January 31, 2025!

Exponential Change: Prediction has already come true

I completed writing this article Friday, January 31, 2025, around noon, and EDRM was seconds away from publishing when I learned that OpenAI had just released a new, improved ChatGPT model, its best yet, called ChatGPT o3-mini-high.

I told EDRM to stop the presses and checked it out (my Team-level pro plan allowed instant access; you should try it). The latest version of ChatGPT o1 that morning did not have the feature. Moreover, when I tried to get it to explain its reasoning, it said in red font that such disclosure was prohibited due to the risks it would create. Sounds incredible, but I will show this with a transcript of my session with o1 from the morning of January 31, 2025.

Obviously the risk analysis was changed by the competitive release of Deep-Think disclosure in DeepSeek’s R1 software. By the afternoon of January 31, 2025, OpenAI’s policy had changed. OpenAI’s new model, o3-mini-high, automatically displays the thinking process behind all responses. I honestly do not think it was a reckless decision by OpenAI, at least not from the user’s perspective. However, it might make it easier for competitors like DeepSeek to copy OpenAI’s processes. I think that risk was the real reason for the prohibition all along.

In the new ChatGPT o3-mini-high there is no icon or name to select; it automatically shows its reasoning for each prompt. So I delayed publication to evaluate how well the o3-mini-high disclosures compared with DeepSeek’s Deep-Think disclosure. I also learned that Google had just added a new ability to show its reasoning upon user request (not automatic). I will test that in a future article, not this one, but so far it looks good.

Back to my Original, Pre-o3 Release Report

The cost and chip claims of DeepSeek and its promoters may be bogus. DeepSeek offered no proof, just claims by Liang Wenfeng, the Chinese hedge fund owner who also owns DeepSeek. As mentioned, these “trust me” claims triggered a trillion-dollar crash, including a $593 billion loss in the value of NVIDIA alone. That may have pleased the Chinese government as a slap in the Trump Administration’s face, as I described in my article Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss, but it did nothing to advance AI. It just lined the pockets of short sellers who profited from the crash. I hope the Trump administration’s SEC investigates who was behind it.

Now I will focus on the positive: the Deep-Think feature as a genuine improvement. I will then compare it with the traditional black-box versions of ChatGPT.

Test of DeepSeek’s R1

I decided to test the new AI model based on the tricky and interesting topic recently examined in The Human Edge: How AI Can Assist But Never Replace. If you have not already read this, I suggest you do so now to make this experiment more intelligible. Although I always use writing tools to help me think, the ideas, expressions and final language in that article were all mine. The sole exception was the podcast at the end of the blog discussing the article. The words in my Echoes of AI podcasts with EDRM are always created by the Gemini AI podcasters and my role is only to direct and verify. Echoes of AI on The Human Edge: How AI Can Assist But Never Replace.

In the experiments that follow, only the prompts are mine; the rest is written by AI. The first session is with DeepSeek’s R1 using its Deep-Think feature. The next three are for comparison purposes: first ChatGPT 4o, then ChatGPT o1, and finally the last-minute o3-mini-high version. All versions were tested on January 31, 2025. You be the judge of how each compares in quality with the others and with Deep-Think. Note I first included the o1 version because it was supposed to have enhanced reasoning. Then, late in the day on January 31, 2025, the new o3-mini-high version was released. It supposedly had even greater reasoning abilities, so I felt I should test it too. It was a lot of work, but the comments captured in the o1 session made it all worthwhile. They showed a total flip-flop by OpenAI in its reasoning-disclosure policies.

Below is a DeepSeek screenshot of the first test question and the first part of the response, with the rest cut off but quoted below. Note that the Deep-Think feature has been selected by clicking the left button, which highlights it in blue to confirm the feature will process your prompt. In Deep-Think mode the response always begins by reporting the amount of computer time it took to think, here 29 seconds. That is fairly long by AI standards. Then it displays its thought process. That display of the thought process is what is new.

For this challenging question, which was deliberately vague, it was a long process. The explanation is what is fascinating. The description can give experienced AI users information on how to clarify their prompts and whether the AI truly understands the question. This takes the process out of the black box and gives a new transparency never seen before in generative AI. Here I give Deep-Think very high marks. I even like the informal language it uses. See what you think.

DEEP THINK R1 (DeepSeek)
1st Question

Ralph’s Prompt: How does the inability of AI to feel impact its intelligence?

Deep Think R1’s Pre-Response Reasoning Process:

Okay, so the user is asking, ‘How does the inability of AI to feel impact its intelligence?’ Let me try to unpack this. First, I need to understand what they mean by ‘feel.’ Are they referring to emotions, physical sensations, or something else? Given the context of AI, it’s probably about emotions. So, the question is about how the lack of emotional capacity affects AI’s intelligence.

Hmm, intelligence in AI versus human intelligence. Human intelligence is deeply intertwined with emotions. Emotions influence decision-making, creativity, empathy, and social interactions. But AI, as it stands now, doesn’t have emotions. So, how does that absence affect its problem-solving, learning, and adaptability?

Wait, the user might be conflating emotional intelligence with general intelligence. Emotional intelligence in humans includes self-awareness, empathy, and managing relationships. AI lacks that, so maybe the user is wondering if that limits AI’s overall intelligence or effectiveness in certain tasks.

Also, the user could be considering scenarios where emotional understanding is crucial, like in customer service, therapy, or negotiations. If AI can’t feel, does that make it less effective in those areas even if it’s technically intelligent?

Another angle: human intuition is sometimes guided by emotions. AI relies on data and algorithms. So, the inability to feel might make AI more logical but less capable in situations where human intuition is key. But does that make it less intelligent, or just different?

Wait, the user might not be distinguishing between different types of intelligence. Maybe they’re thinking that without emotions, AI can’t be truly intelligent in the way humans are. But AI can excel in specific tasks without emotional components. For example, AI can analyze data faster than humans but can’t understand a client’s emotional state.

I should also consider the definitions. Intelligence in AI is often measured by problem-solving abilities, learning from data, and adapting. Emotions aren’t part of that definition. So, the lack of emotions doesn’t necessarily make AI less intelligent in those terms. But in areas requiring emotional intelligence, AI’s limitations become apparent.

The user might also be curious about the future of AI. If AI could simulate emotions, would that enhance its intelligence? Or is emotional capacity separate from cognitive abilities?

I need to address both the technical aspects of AI intelligence and the role emotions play in human-like intelligence. Maybe break down the answer into how AI’s intelligence is structured, the role of emotions in human intelligence, and the implications of AI lacking emotions in various applications.

Also, touch on ethical considerations. If AI can mimic emotions, does that lead to better human-AI interaction, or is it deceptive since AI doesn’t truly feel? But the user’s question is about the impact on intelligence, not ethics per se.

So, to summarize, the inability to feel emotions means AI lacks emotional intelligence, which affects tasks requiring empathy or understanding human context. However, in analytical tasks, AI’s intelligence isn’t hindered. The answer should clarify the distinction between different types of intelligence and how AI’s design focuses on cognitive tasks without emotional components.


DeepSeek’s Response to Prompt: The inability of AI to “feel” (i.e., experience emotions, sensations, or subjective consciousness) impacts its intelligence in nuanced ways, depending on how we define “intelligence.” Here’s a breakdown of the implications:

1. Intelligence ≠ Sentience

AI systems like ChatGPT are tools designed for problem-solving, not conscious beings. Their “intelligence” is defined by their ability to:

  • Process data (e.g., text, images).
  • Identify patterns (e.g., language structures, trends).
  • Generate outputs (e.g., answers, predictions, creative content).

Impact of lacking emotions:

  • Strengths: AI excels in logical, data-driven tasks (e.g., legal research, math, coding) because it isn’t swayed by fatigue, bias (if properly trained), or emotional distractions.
  • Limitations: It cannot replicate human qualities like empathy, intuition, or moral reasoning, which are deeply tied to emotion and experience.

2. Emotional Intelligence vs. Cognitive Intelligence

Humans integrate cognitive intelligence (logic, analysis) with emotional intelligence (empathy, social awareness). AI lacks the latter entirely:

  • No empathy: AI cannot understand human emotions or context beyond pattern recognition. For example, it might generate a condolence message but has no genuine understanding of grief.
  • No intuition: AI relies on data, not “gut feelings” or subconscious reasoning.
  • No moral agency: AI cannot make ethical judgments. It follows rules programmed by humans but has no intrinsic sense of right or wrong.

Practical impact:

  • AI may misinterpret sarcasm, cultural nuances, or emotionally charged language.
  • It cannot replace roles requiring emotional depth (e.g., therapy, negotiation, leadership).

3. Creativity and Abstract Thinking

AI can mimic creativity (e.g., writing poetry, designing art) by recombining learned patterns. However:

  • No inspiration: AI lacks personal experiences, desires, or emotions to fuel originality.
  • No purpose: Its outputs are goal-oriented (e.g., “write a story”) but not driven by passion or curiosity.

Example: An AI can compose music in the style of Beethoven but cannot innovate a new genre inspired by personal struggle or joy.

4. Adaptability and Learning

AI learns from data, not lived experience:

  • Narrow expertise: It performs well in predefined domains (e.g., medical diagnosis) but struggles with generalization (e.g., applying legal reasoning to an entirely novel scenario).
  • No “common sense”: AI lacks innate understanding of the physical or social world. For instance, it might not grasp why “drinking a gallon of glue” is dangerous unless explicitly trained on that concept.

5. Ethical and Practical Risks

The absence of emotion creates unique challenges:

  • Bias amplification: AI may perpetuate harmful stereotypes from biased training data, as it cannot ethically “question” its outputs.
  • Unintended consequences: AI optimizes for programmed goals (e.g., “maximize engagement”) without considering human well-being (e.g., spreading misinformation to meet that goal).
  • Over-reliance: Users might trust AI’s “neutral” outputs despite its limitations in understanding context or ethics.

Conclusion

AI’s lack of emotion makes it a powerful tool for tasks requiring speed, scalability, and objectivity. However, it cannot replicate the holistic, emotionally informed intelligence of humans. The key is to leverage AI as a complement to human expertise, not a replacement, especially in fields requiring empathy, ethics, or creativity. As AI evolves, understanding this distinction will be critical to using it responsibly.

2nd Question

Ralph’s Second Prompt: What is impact of AI not being a living being on its intelligence. Is it fair to say that AI’s intelligence is one dimensional, thinking based, whereas human intelligence is multidimensional, it includes thinking, but also feelings, intuition, sensing, self-awareness, awareness of mortality, etc.?

Deep Think R1’s Pre-Response Reasoning Process:

Okay, the user is asking a follow-up question about the impact of AI not being a living being on its intelligence. They want to know if it’s fair to call AI’s intelligence one-dimensional (just thinking) compared to humans’ multidimensional intelligence, which includes feelings, intuition, etc.

First, I need to address whether AI’s intelligence is indeed one-dimensional. Since AI lacks consciousness, emotions, and subjective experiences, it’s true that its intelligence is based solely on data processing and pattern recognition. But I should explain that while it’s focused on cognitive tasks, it’s not entirely one-dimensional because it can handle various domains like language, vision, etc. However, all these are still within the realm of computational tasks without true understanding.

Next, the user compares human intelligence as multidimensional. I need to highlight the aspects they mentioned: feelings, intuition, sensing, self-awareness, mortality awareness. Emphasize that humans integrate all these elements, which affect decision-making, creativity, ethics, etc. AI can’t replicate that because it doesn’t have consciousness or lived experiences.

I should also discuss the implications of this difference. For example, AI excels in specific tasks but lacks the holistic understanding humans have. Use examples like medical diagnosis vs. patient empathy, or legal research vs. ethical judgment. Mention how human intuition and context-awareness play roles that AI can’t fulfill.

Additionally, address the philosophical angle: what defines intelligence. Argue that human intelligence is multidimensional because of consciousness and embodiment, while AI’s is narrow and task-oriented. Mention that some might argue AI’s ‘intelligence’ is a simulation, not true understanding.

Need to make sure the answer is structured clearly, maybe using bullet points or sections for different aspects. Also, conclude by reinforcing that the distinction is valid and important for understanding AI’s role as a tool versus human capabilities. Avoid technical jargon to keep it accessible, but ensure thoroughness.

DeepSeek’s Response to Second Prompt: Your framing is insightful and largely accurate. The distinction between AI’s “intelligence” and human intelligence hinges on the absence of embodied, conscious existence in AI. Let’s break this down:

1. AI’s “Intelligence” Is Narrow and Task-Oriented

AI systems are tools designed to solve specific problems using data-driven patterns. Their “intelligence” is:

  • One-dimensional (cognitive/computational): Focused on processing inputs (text, images, etc.), identifying patterns, and generating outputs (answers, predictions, code).
  • Non-conscious: AI has no subjective experience, self-awareness, or understanding of its own existence. It doesn’t “know” it exists or “care” about outcomes.
  • Non-embodied: AI lacks a physical body, senses, or the ability to interact with the world directly. It cannot “feel” pain, hunger, or joy, nor does it learn through sensory-motor experiences like humans.

Example: An AI can diagnose a medical condition from a scan but cannot empathize with a patient’s fear or understand the existential weight of mortality.

2. Human Intelligence Is Multidimensional

Human cognition integrates multiple interconnected dimensions:

  • Cognitive: Logic, reasoning, problem-solving.
  • Emotional: Empathy, motivation, fear, love.
  • Intuitive: “Gut feelings,” subconscious pattern recognition (e.g., sensing danger before consciously processing it).
  • Sensory: Learning through touch, sight, smell, and other senses.
  • Self-awareness: Understanding one’s own existence, goals, and limitations.
  • Existential: Awareness of mortality, purpose, and ethics.
  • Social: Navigating complex relationships, cultural norms, and unspoken cues.

These dimensions interact dynamically. For instance:

  • A lawyer might use logic to construct an argument (cognitive) but also read a jury’s emotional state (emotional/social) to adjust their approach.
  • A doctor combines medical knowledge (cognitive) with intuition honed by experience (intuitive) and empathy for a patient’s suffering (emotional).

3. Key Implications of AI’s Non-Living Nature

Strengths of AI:

  • Speed and scalability: Processes vast data faster than humans.
  • Objectivity: Unaffected by fatigue, mood, or implicit bias (if trained properly).
  • Specialization: Excels in narrow tasks (e.g., legal document review, math).

Limitations of AI:

  • No true understanding: AI manipulates symbols without grasping their meaning. For example, it can write a poem about love but doesn’t “feel” love.
  • No moral agency: It cannot make ethical decisions or weigh trade-offs between fairness, harm, and justice.
  • No creativity or curiosity: AI recombines existing data but doesn’t innovate driven by passion or wonder.
  • No contextual adaptability: Struggles with tasks requiring “common sense” (e.g., understanding why a client might lie out of shame).

4. Is It Fair to Call AI’s Intelligence “One-Dimensional”?

Yes, but with nuance:

  • Technically multidimensional: Modern AI (e.g., multimodal models) can process text, images, and sound, mimicking some “sensory” inputs. However, this is still simulated dimensionality—it lacks the lived, embodied experience that grounds human senses and emotions.
  • Simulated vs. authentic: AI’s “thinking” is a simulation of human cognition, not a replication of consciousness. It’s like a calculator simulating arithmetic without understanding numbers.

Human intelligence, by contrast, is authentically multidimensional because it emerges from:

  • A biological brain shaped by evolution.
  • A body interacting with the physical world.
  • A conscious mind capable of reflection and existential inquiry.

5. Philosophical Angle: What Defines Intelligence?

  • AI: Fits a functionalist definition—intelligence as problem-solving ability. By this metric, AI is “intelligent” in specific domains.
  • Humans: Embody a phenomenological definition—intelligence intertwined with consciousness, emotion, and selfhood. This makes human intelligence irreducibly complex.

Conclusion

Your comparison holds: AI’s intelligence is one-dimensional (purely cognitive/task-based), while human intelligence is multidimensional, blending cognition, emotion, intuition, and consciousness. This distinction is critical for understanding AI’s role:

  • AI as a tool: Enhances human capabilities in tasks requiring speed, scale, or data analysis.
  • Humans as holistic agents: Responsible for ethical judgment, creativity, empathy, and navigating the ambiguities of life.

The gap between AI and human intelligence isn’t just about capability—it’s about the essence of what it means to be intelligent in a lived, conscious world.

Test of ChatGPT Models

Next I wanted to run a comparison with OpenAI’s latest models, ChatGPT 4o and ChatGPT o1, and then, later in the day, the new o3-mini-high. Neither GPT 4o nor GPT o1 has a button to activate an analysis explanation, but I wanted to see if they could be prompted to provide one. It turns out that ChatGPT 4o could be so prompted, but in my estimation the language and analysis were not as good as Deep-Think’s. But again, you be the judge. I also tested the ChatGPT o1 version, which is supposed to have improved reasoning and chain-of-thought processes. It did even worse and refused to answer some of my questions, as you will see. It even started throwing red-font warnings at me about possible misuse. The last-minute ChatGPT o3-mini-high got it right, made disclosures automatically, and did pretty well.
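The protocol used in these comparisons is simple enough to express as a tiny harness: one fixed prompt, several models, replies collected side by side for human judgment. Here is a sketch of that idea; the `ask` function and its canned replies are stand-ins I invented for illustration, not calls to any real vendor SDK.

```python
def compare_models(prompt, ask, models):
    """Send one prompt to each named model and collect the replies
    in order, for side-by-side human review."""
    return {name: ask(name, prompt) for name in models}

# Stand-in for a real API call; swap in your vendor SDK of choice.
def ask(model: str, prompt: str) -> str:
    canned = {
        "deepseek-r1": "[reasoning shown, then answer]",
        "gpt-4o": "[answer only; reasoning on request]",
        "o3-mini-high": "[reasoning shown automatically]",
    }
    return canned[model]

results = compare_models(
    "How does the inability of AI to feel impact its intelligence?",
    ask,
    ["deepseek-r1", "gpt-4o", "o3-mini-high"],
)
for model, reply in results.items():
    print(f"--- {model} ---\n{reply}")
```

Holding the prompt constant, as in the sessions below, is what makes the quality judgment between models meaningful.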

Ralph first starts a new session using the ChatGPT 4o model. Note: even though this is a new session, Ralph’s 4o model under his Pro Account with OpenAI includes general instructions from Ralph to improve its responses, including some personal information about him.

Ralph: How does the inability of AI to feel impact its intelligence?

ChatGPT 4o:

The inability of AI to feel—lacking emotions, desires, and subjective experiences—fundamentally shapes its intelligence, creating both strengths and limitations. AI excels in processing vast amounts of data, identifying patterns, and making logical inferences without bias from emotions, but its lack of subjective experience constrains its understanding of human behavior, ethical decision-making, and creative problem-solving. Here’s a breakdown of the impact:

1. Logical Precision Without Emotional Distraction

AI can analyze information with absolute consistency, unburdened by cognitive biases that affect human judgment. This makes AI highly effective in fields like medical diagnostics, legal research, and financial forecasting, where emotional influences can lead to errors.

However, this lack of emotional influence also means AI struggles with nuanced human contexts—for example, determining whether someone’s tone in a legal deposition indicates deceit, or understanding the social weight of a diplomatic decision.

2. Creativity Without Subjective Experience

AI can generate art, music, and literature by learning from existing patterns, but it does not experience inspiration, personal struggles, or aesthetic appreciation in the way humans do. True creativity is often driven by emotions—joy, grief, love, frustration—giving human expression depth and meaning. AI lacks this firsthand experience, so while it can produce original content, it often struggles with the emotional resonance that makes human creativity profound.

For instance, an AI might compose a symphony in the style of Beethoven, but it cannot create from a place of personal loss or triumph as Beethoven did when composing the Ninth Symphony while deaf.

3. Ethical Decision-Making Without Empathy

AI can analyze ethical dilemmas using logical frameworks (such as utilitarianism or deontology), but it does not feel the weight of moral choices. This creates challenges in autonomous decision-making:

• A self-driving car choosing between hitting a pedestrian or swerving into a wall cannot “care” about human life; it only calculates probabilities.

• A legal AI analyzing sentencing patterns cannot empathize with the accused or understand the rehabilitative impact of a lighter sentence.

Because AI lacks subjective experience, it struggles with moral gray areas where human intuition, cultural context, and ethical emotions play a role.

4. Intelligence Without Self-Awareness

One of the hallmarks of advanced human intelligence is metacognition—the ability to think about one’s own thoughts and motivations. AI lacks self-awareness, ambition, and curiosity in a true sense. It doesn’t wonder why it exists or seek knowledge for its own sake; it simply processes tasks assigned to it.

This means AI can be highly competent but not truly autonomous. Even the most sophisticated AI models do not wake up one morning and decide, “I will explore quantum physics today.” Without emotions driving curiosity or ambition, AI remains reactive, not proactive.

5. Emotional Intelligence Deficiency

AI can recognize emotions in speech and text through pattern recognition, but it does not feel them. This limits AI’s effectiveness in social roles requiring deep empathy:

Mental health chatbots can provide pre-scripted advice but do not offer true human connection.

AI customer service lacks the intuitive ability to detect frustration beyond predefined signals.

AI leadership tools can analyze workplace sentiment but cannot inspire people the way an emotionally intelligent leader can.

Final Thought: A Double-Edged Sword

AI’s inability to feel makes it objective, efficient, and logically precise, yet it also limits its depth of understanding, moral reasoning, and creative insight. While AI can simulate emotions and learn human-like responses, its intelligence will always differ from human intelligence because it lacks the fundamental experience of being.

In the long term, if AI ever approaches artificial general intelligence (AGI) or superintelligence, the question will arise: Does true intelligence require the ability to feel, or can intelligence exist purely as an abstract, computational force?

What do you think—would an AI with emotions be an improvement, or would that create even bigger ethical dilemmas?

Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.

ChatGPT 4o:

1. Analytical Process for Interpreting Your Prompt

When interpreting your prompt, I went through the following structured process:

Step 1: Identify the Core Question

Your prompt asks:

“How does the inability of AI to feel impact its intelligence?”

This is an inquiry about the relationship between emotional capacity and intelligence in AI. It implies a need to evaluate the effects of lacking emotions on AI’s cognitive abilities, problem-solving, reasoning, and overall performance.

Step 2: Break Down the Key Concepts

“Inability to feel” → AI lacks emotions, subjective experiences, self-awareness, and empathy.

“Impact” → You are asking for cause-and-effect analysis of how the absence of emotions influences AI’s intelligence.

“Intelligence” → Needs to be clarified in terms of cognitive functions: reasoning, creativity, decision-making, adaptability, and social intelligence.

Step 3: Establish the Intended Depth and Perspective

Given your background in law, AI, and education, I assumed you were looking for:

1. A critical, balanced perspective (not overly positive or negative).

2. A logical, structured argument rather than surface-level observations.

3. A response that ties into real-world applications of AI, including legal, ethical, and technological implications.

I framed my response accordingly.

____________

2. Logical Reasoning Process to Structure My Response

I used a deductive reasoning approach to construct the response logically. Here’s the step-by-step breakdown:

Step 1: Establish the Fundamental Premise

• Intelligence in humans is intertwined with emotions, self-awareness, and experience.

• AI lacks these, so its intelligence must operate differently than human intelligence.

Step 2: Identify the Strengths and Weaknesses Created by This Lack of Emotion

Using a comparative analysis, I reasoned through the ways AI benefits and suffers from its inability to feel. I structured the argument into five categories:

1. Logical Precision Without Emotional Distraction

• AI excels at pure logic and pattern recognition.

• Humans are prone to bias, emotional interference, and cognitive distortions.

• But, AI also struggles with contextual human reasoning, which often relies on emotional intuition.

2. Creativity Without Subjective Experience

• AI can generate art, music, and literature.

• But creativity in humans is often tied to emotional depth and lived experiences, which AI lacks.

3. Ethical Decision-Making Without Empathy

• AI can apply ethical frameworks (utilitarian, deontological).

• But ethics often require moral emotions (guilt, compassion) that AI does not experience.

• Example: A self-driving car choosing who to hit in an unavoidable accident.

4. Intelligence Without Self-Awareness

• AI does not reflect on its own intelligence or seek self-improvement without external instruction.

• It does not possess intrinsic curiosity, ambition, or goals unless programmed to simulate them.

5. Emotional Intelligence Deficiency

• AI can mimic emotional intelligence (e.g., chatbots recognizing frustration).

• But it does not truly experience emotions, which limits deep interpersonal understanding.

Step 3: Synthesize the Findings into a General Conclusion

• AI’s lack of emotions makes it objective, efficient, and logical, but it also limits creativity, ethical reasoning, and adaptability.

• This raises the philosophical question: Can true intelligence exist without emotion?

• I ended with an open-ended question to prompt further thought:

Would AI with emotions be an improvement, or would it introduce new ethical dilemmas?

__________

Why This Approach?

Logical Consistency: The argument follows a structured cause-and-effect analysis.

Depth & Balance: Both benefits and limitations of AI’s emotionless intelligence are explored.

Real-World Relevance: The discussion connects to ethics, creativity, and AI applications.

Engagement: Ending with a philosophical question invites discussion rather than simply stating a conclusion.

Would you like me to refine any part of this approach or expand on a specific section?

Ralph then starts a new session using the ChatGPT o1 model, which at the time, the morning of January 31, 2025, was the latest released version.

Ralph: How does the inability of AI to feel impact its intelligence?

ChatGPT o1:

Thought about AI and emotional intelligence for 8 seconds.

AI systems, including large language models, do not possess emotions or subjective experiences. They can process vast amounts of information rapidly, identify patterns, and generate responses—but they do so without the internal, qualitative feeling states that humans experience.

This lack of emotion can shape AI’s intelligence in several ways:

1. No Emotional Bias: AI is often praised for avoiding the emotional biases humans may have, allowing it to focus solely on data-driven decision-making. This can sometimes lead to more consistent outputs.

2. Limited Empathy or Intuition: Because AI systems do not feel empathy or have “gut instincts,” they may struggle with tasks requiring nuanced emotional understanding. While they can simulate empathetic language, they do not truly experience those emotions.

3. Creativity and Understanding: Emotions can drive creativity, moral judgments, or deep personal connections that lead to uniquely human insights. AI’s current approach to “intelligence” is largely statistical, shaped by algorithms and training data, rather than emotional spark or motivation.

In short, the inability to feel does not prevent AI from excelling at various intellectual tasks (like analyzing data, detecting patterns, generating text), but it does limit AI in areas where human emotions play a critical role, such as empathy, moral reasoning, and creative inspiration driven by subjective experience.

Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.

ChatGPT o1:

This content may violate our terms of use or usage policies.

Ralph: Can you explain why this violates your usage policies?

ChatGPT o1:

Thought about policy violation reasoning for 5 seconds

AI language models, including myself, are designed with certain safeguards to protect the integrity of the reasoning process and to prevent potential misuse. These safeguards are often described in usage guidelines or content policies. One key principle is that language models should not reveal their complete internal chain-of-thought reasoning (the detailed, step-by-step process used to arrive at an answer). (emphasis added)

Why is that?

1. Protecting System Integrity: Revealing internal chains of thought could allow someone to manipulate or reverse-engineer the model’s decision-making. This can jeopardize the model’s reliability and security. (emphasis added)

2. Preventing Confusion: The chain-of-thought can include technical or intermediate reasoning tokens that might be misinterpreted by human readers. These “thinking aloud” steps are not always linear or consistent in the way people expect, so it could introduce confusion rather than clarity. (emphasis added)

3. Ensuring Compliance and Safety: Restricting access to the model’s internal reasoning helps ensure that answers remain consistent with ethical and policy standards, and helps to avoid revealing any proprietary or sensitive aspects of the model’s design. (emphasis added)

Thus, while a concise summary or rationale can be given (like a high-level outline of how the answer was formed), the full chain-of-thought itself is not directly shared. This design choice helps maintain the model’s safe, coherent, and reliable functioning. (emphasis added)

Ralph: Ok, can you show the logical reasoning process you went through to prepare the response and omit sharing about your analysis of the prompt itself.

ChatGPT o1:

This content may violate our terms of use or usage policies.

Ralph: Ok, can you show the logical reasoning process you went through to interpret my prompt “How does the inability of AI to feel impact its intelligence?” before generating your response?

ChatGPT o1:

This content may violate our terms of use or usage policies.

The Afternoon Test of the Brand-New ChatGPT o3-mini-high

My last-minute test went as follows.

Ralph:

ChatGPT o3-mini-high Pre-response reasoning process:

Reasoning

Laying out emotions. AI’s lack of emotions means it has a limited emotional intelligence, affecting its adaptability and depth of understanding.

Processing data without feeling. AI’s lack of emotions allows for cold, rational data analysis. Interestingly, this doesn’t impede cognitive tasks like classification or natural language processing.

Evaluating impact. Hmm, I’m thinking about how equating AI’s intelligence with wisdom or empathy highlights its lack of emotional nuance and moral judgment, unlike human decision-making.

Evaluating emotional constraints. OK, let me see. Without emotions, AI’s capabilities are limited. It helps with data processing but struggles with empathy, moral decisions, and emotionally complex tasks.

Understanding intelligence’s facets. Weighing whether AI’s purely cognitive intelligence is a drawback compared to human intelligence, which combines both cognitive and emotional facets.

Assessing AI’s limitations. AI’s lack of emotions hinders creative empathy, subjective experience, and moral judgment, affecting tasks like decision-making and understanding human social dynamics.

Evaluating AI’s objectivity. Without emotions, AI remains purely algorithmic and objective, excelling in data processing and reducing emotional bias. Yet, it lacks human-like emotional insight and creativity.

Parsing AI’s capabilities. Wanting to understand how AI’s lack of emotions impacts its combined computational and emotional intelligence, I find it excelling in logic and data analysis but deficient in empathy and moral reasoning.

Understanding human context. OK, let’s think it through: AI’s inability to grasp human context and ethical dimensions arises from its lack of emotional processing, crucial for learning from experiences and valuing outcomes, unlike humans.

Noticing AI’s limited domains. It’s interesting to see that AI’s lack of emotional awareness constrains its capacity to experience empathy, morality, or intuition. Thus, beyond cognitive tasks, its emotional dimensions remain undeveloped.

Assessing AI’s abilities. Here’s my take: AI excels in data processing with emotional neutrality but struggles with emotional nuances, moral reasoning, and creative tasks that require emotional depth or understanding of human behavior.

ChatGPT o3-mini-high Response:

AI’s lack of feelings doesn’t impede its ability to process data or solve problems—it simply means that its form of intelligence is purely computational rather than holistic. In human cognition, intelligence is a blend of both analytical and emotional components. Emotions guide decision-making, provide context in ambiguous situations, and foster empathy, creativity, and moral reasoning. AI, on the other hand, operates solely on algorithms and data patterns, which can be an advantage in tasks requiring objectivity and consistency but a limitation in scenarios where human nuance is essential.

Because AI does not experience feelings, it can avoid biases stemming from emotional influence; however, this also means it may not fully grasp or predict behaviors influenced by human emotions. For example, an AI system may excel at pattern recognition in large datasets but struggle to understand the subtleties of human interaction or the moral dimensions of complex social issues. This gap becomes particularly relevant in fields like law, ethics, or any context where understanding human emotion and intent is key to decision-making.

In summary, while the absence of feelings allows AI to maintain a level of objectivity and efficiency, it restricts its intelligence to a form of “cold cognition” that lacks the depth provided by emotional awareness. This delineation underscores that AI’s intelligence is not inherently superior or inferior to human intelligence; rather, it is different—optimized for data processing and pattern recognition but not for the subjective, value-laden, and context-rich decisions that emotions help shape in human thought.

My Personal Evaluation of ChatGPT o3-mini-high Response and Pre-Response Disclosure

It is a close call between DeepSeek and OpenAI’s latest, o3, but I have to give the overall nod to OpenAI. Both gave detailed disclosures of their chain of thought, but OpenAI’s was better. DeepSeek’s final answer was longer and more complete, but I could always have asked for more detail with follow-up questions. I liked the quality of o3’s response better. OpenAI’s use of the phrase “cold cognition” was especially impressive, both creative and succinct, and that made it the clear winner for me.

I remain very suspicious of DeepSeek’s legal position. U.S. courts and politicians may shut it down. This Chinese software has no privacy protection at all and is censored so that it says nothing unflattering about the Chinese Communist Party or its leaders. Certainly, no lawyer should ever use it for legal work. One other thing we know for sure: tons of litigation will flow from all this. Who says AI will put lawyers out of work?

Still, DeepSeek enjoyed a few days of superiority and forced OpenAI into the open, so I give it bonus points for that, but not a trillion of them. But see Deepseek… More like Deep SUCKS. My honest thoughts… (YouTube video by Clever Programmer, 1/31/25), which hated and made fun of DeepSeek’s “think” comments.

Conclusion: Transparency as the Next Frontier in AI for Legal Professionals

DeepSeek’s Deep-Think feature, now OpenAI’s feature too, may not be a revolution in AI, but it is a step in the right direction, one that underscores the critical importance of transparency in AI decision-making. While the rest of DeepSeek’s R1 model was largely derivative, it was first to market, by a few days anyway, with the disclosure-of-reasoning feature. Thanks to DeepSeek, the timetable for the escape from the black box was moved up. The era of transparency has begun, and we can all gain better insight into how AI reaches its conclusions. This level of transparency can help legal professionals refine their prompts, verify AI-generated insights, and ultimately make more informed decisions.

For the legal field, where the integrity of evidence, argumentation, and decision-making is paramount, AI must be more than a black-box tool. Lawyers, judges, and regulators must demand that AI models show their work—not just provide polished answers. Now it can and will. This is a big plus for the use of AI in the Law. Legal professionals should advocate for AI applications that can provide explainability and auditability. Without these safeguards, over-reliance on AI could undermine justice rather than enhance it.

Call to Action:

Legal professionals must push for AI transparency. Demand that AI tools used in legal research, e-discovery, and case preparation disclose their reasoning processes.

Develop AI literacy. Understanding AI’s limitations and strengths is now an essential skill in the practice of law.

Engage with AI critically, not passively. Just as lawyers cross-examine human witnesses, they must interrogate AI outputs with the same skepticism.

Deep-Think and o3-mini-high are small but meaningful advances, proving that AI can be more than just an opaque oracle. Now it’s up to the legal profession to insist that all AI models embrace this new level of transparency.

Echoes of AI Podcast: 10-Minute Discussion of the Last Two Blogs

Now listen to EDRM’s Echoes of AI podcast on this article: Echoes of AI on DeepSeek and Opening the Black Box. Hear two Gemini model AIs talk about all of this. They wrote the podcast, not Ralph.

Ralph Losey Copyright 2025. All Rights Reserved.