As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.
We are learning how to merge with AI and remain in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal training and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.
Trained humans can be enhanced by total entanglement with AI without losing control or separate identity. Click here or the image to see the video on YouTube.
In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts human legal minds in control and stops the hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.
We are still way smarter than they are and can keep our own agency and control. But for how long? AI abilities are improving quickly, but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.
Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.
I. The Quantum Prelude: Listening for Echoes in the Multiverse
We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).
For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.
But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.
We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
Quantum Replication Hearings Are Probable in the Future.
My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.
III. Saving Truth from the Memory Hole
Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother
My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.
Changing history to support political tyranny. Orwell’s warning.
Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.
IV. Bench Testing the AI Models of the First Half of 2025
I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training. I called this my Bar Battle of the Bots series.
The test concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.
Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.
Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.
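To make the structural-dissent idea concrete, here is a minimal Python sketch of a panel-of-experts loop with a mandatory contrarian. It is illustrative only: the ask_model function, the persona names, and the prompts are hypothetical placeholders, not the actual Panel of Experts GPT instructions.

```python
# Minimal sketch of a multi-persona "panel of experts" loop with a mandatory
# contrarian. All names here are hypothetical illustrations.
from typing import Callable

def ask_model(persona: str, prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.
    Replace with a real model call; here it just returns a canned reply."""
    return f"[{persona} replies to a {len(prompt)}-character prompt]"

def panel_debate(question: str, experts: list[str],
                 ask: Callable[[str, str], str] = ask_model,
                 rounds: int = 2) -> str:
    transcript = f"Question: {question}"
    for _ in range(rounds):
        # Each expert persona answers in light of the running transcript.
        for persona in experts:
            answer = ask(persona, f"{transcript}\nAs {persona}, give your analysis.")
            transcript += f"\n[{persona}] {answer}"
        # Mandatory contrarian: attack the emerging consensus to counter sycophancy.
        critique = ask("Contrarian", f"{transcript}\nAs devil's advocate, identify "
                                     "the weakest assumption in the analysis above.")
        transcript += f"\n[Contrarian] {critique}"
    # Final synthesis must respond to the contrarian's objections explicitly.
    return ask("Moderator", f"{transcript}\nSynthesize a final answer that "
                            "responds to the Contrarian's objections.")

print(panel_debate("May the firm accept both engagements?",
                   ["Ethics Professor", "Trial Lawyer", "Risk Officer"]))
```

The design point is structural: the Contrarian is not optional, and the final synthesis must answer its objections before the output is used.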
By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.
This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer’—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.
But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.
As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.
While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.
VI. The New Geometry of Justice: Topology and Archetypes
To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.
But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.
I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.
This epiphany led to an investigation of the use of advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
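As one small illustration of the “Instrument/log everything,” “Review with provenance,” and “Disclose usage” steps, here is a minimal Python sketch of a provenance log for AI calls. The function and field names are hypothetical, not part of any published H-Y-B-R-I-D toolkit; the point is only that every AI output carries a verifiable record and a named human sign-off before it is used.

```python
# Minimal sketch of H-Y-B-R-I-D style instrumentation: every AI call is logged
# with content hashes and must be marked human-reviewed before use.
# All names are hypothetical illustrations, not a published toolkit.
import hashlib
import json
import time

AUDIT_LOG = []  # in practice: append-only storage, not an in-memory list

def log_ai_call(model: str, prompt: str, output: str) -> dict:
    record = {
        "timestamp": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "human_reviewed": False,  # flipped only by a named reviewer
        "reviewer": None,
    }
    AUDIT_LOG.append(record)
    return record

def mark_reviewed(record: dict, reviewer: str, approved: bool) -> None:
    record["human_reviewed"] = approved
    record["reviewer"] = reviewer

def export_provenance() -> str:
    """Disclosable provenance report: the 'Disclose usage' step."""
    return json.dumps(AUDIT_LOG, indent=2)
```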
My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.
AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.
I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.
We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.
Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.
As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.
The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.
We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.
Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.
Echoes upon echoes—in random chance interference. All images in article by Ralph Losey using AI tools.
On October 22, 2025, Google announced that its Willow quantum chip had achieved a breakthrough using new software called—believe it or not—Quantum Echoes. The name made me laugh out loud. My article had used the phrase as metaphor throughout; Google was now using it as mathematics.
According to Google, this software achieved what scientists have pursued for decades: a verifiable quantum advantage. In my Quantum Echo article I had described that goal as “the moment when machines perform tasks that classical systems cannot.” No one had yet proven it, at least not in a way others could independently confirm. Google now claimed it had done exactly that—and 13,000 times faster than the world’s top supercomputers.
Verified Quantum Advantage: 13,000 times faster.
🔹 I. Introduction: Reverberating Echoes
Hartmut Neven, Founder and Lead of Google Quantum AI, and Vadim Smelyanskiy, Director of Quantum Pathfinding, opened their blog-post announcement with a statement that sounded less like marketing and more like expert testimony:
Quantum verifiability means the result can be repeated on our quantum computer—or any other of the same caliber—to get the same answer, confirming the result.
Verification is critical in both Science and Law; it is what separates speculation from admissible proof.
Still, words on a blog cannot match the sound of the experiment itself. In Google’s companion video, Quantum Echoes: Toward Real-World Applications, Smelyanskiy offered a picture any trial lawyer could understand:
Just like bats use echolocation to discern the structure of a cave or submarines use sonar to detect upcoming obstacles, we engineered a quantum echo within a quantum system that revealed information about how that system functions.
Screen shot (not AI) of the YouTube video showing Vadim Smelyanskiy beginning his remarks.
Think of Willow, as Smelyanskiy suggests, as a kind of quantum sonar. Its team sent a signal into a sea of qubits, nudged one slightly—Smelyanskiy called it a “butterfly effect”—and then ran the entire sequence in reverse, like hitting rewind on reality to listen for the echo that returns. What came back was not static but music: waves reinforcing one another in constructive interference, the quantum equivalent of a choir singing in perfect pitch.
Smelyanskiy’s colleague Nicholas Rubin, Google’s chief quantum chemist, appeared in the video next to show why this matters beyond the lab:
Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.
That experiment was not a metaphor; it was a cross-examination of nature that returned a consistent answer. Quantum Echoes predicted molecular geometry, and classical instruments confirmed it. That is what “verifiable” means.
Neven and Smelyanskiy’s Our Quantum Echoes article added another analogy to anchor the imagery in everyday experience:
Imagine you’re trying to find a lost ship at the bottom of the ocean. Sonar might give you a blurry shape and tell you, ‘There’s a shipwreck down there.’ But what if you could not only find the ship but also read the nameplate on its hull?
That is the clarity Quantum Echoes provides—a new instrument able to read nature’s nameplate instead of guessing at its outline. The echo is now clear enough to read.
Willow quantum chip and Echoes software reveal new information in previously unheard of detail.
That image—sharper echoes, clearer understanding—captures both the scientific leap and the theme that has reverberated through this series: building bridges between quantum physics and the law. My earlier article was titled Quantum Echo; Google’s is Quantum Echoes. When I wrote mine, I had no idea Neven’s team was preparing a major paper for Nature—Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date). More than a hundred Google scientists signed it. I checked and quantum ergodicity has to do with chaos, one of my favorite topics.
The study confirms what Smelyanskiy made visible with his sonar metaphor: Quantum Echoes measures how waves of information collide and reinforce each other, creating a signal so distinct that another quantum system can verify it.
So here we are—lawyers and scientists listening to the same echo. Google calls it the first “verifiable quantum advantage.” I call it the moment when physics cross-examined reality and got a consistent answer.
Quantum Computing will soon emerge from the lab into legal practice. Will you be ready?
🔹II. What Google’s Quantum Echoes Actually Did
Understanding what Google pulled off takes a bit of translation—think of it as turning expert testimony into plain English.
In the Quantum Echoes experiment, Smelyanskiy’s team did something that sounds like science fiction but is now laboratory fact. They sent a carefully designed signal into their 105-qubit Willow chip, nudged one qubit ever so slightly—a quantum “butterfly effect”—and then ran the entire operation in reverse, as if the universe had a rewind button. The question was simple: would the system return to its starting state, or would the disturbance scramble the information beyond recognition? What came back was an echo, faint at first and then unmistakable, revealing how information spreads and recombines inside a quantum world.
As the signal spread, the qubits became increasingly entangled—linked so that the state of each depended on all the others. In describing this process, Hartmut Neven explained that out-of-time-order correlators (OTOCs) “measure how quickly information travels in a highly entangled system.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm, supra; also see Dan Garisto, Google Measures ‘Quantum Echoes’ on Willow Quantum Computer Chip (Scientific American, Oct. 22, 2025). That spreading web of entanglement is what allowed the butterfly’s tiny disturbance to ripple across the lattice and, when the sequence was reversed, to produce a measurable echo.
Visualization of the quantum qubit world created by the Willow chip’s qubit lattice.
Physicists call this kind of rewind test an out-of-time-order correlator, or OTOC—a protocol for measuring how quickly information becomes scrambled. The Scientific American article described it with a metaphor lawyers may appreciate: like twisting and untwisting a Rubik’s Cube, adding one extra twist in the middle, then reversing the sequence to see whether that single move leaves a lasting mark. The team at Google took this one step further, repeating the scramble-and-unscramble sequence twice—a “double OTOC” that magnified the signal until the echo became measurable.
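For readers who want to see the scramble-and-reverse idea in miniature, here is a toy classical simulation in Python. It is only a sketch of the concept—a Loschmidt-echo-style numerical experiment on six simulated qubits, not Google’s actual OTOC measurement on Willow. The random unitary stands in for chaotic circuit dynamics; the single-qubit flip is the “butterfly.”

```python
# Toy numerical illustration of the scramble / perturb / unscramble "echo" idea.
# A classical simulation of a few simulated qubits, not Google's actual protocol.
import numpy as np

n_qubits = 6
dim = 2 ** n_qubits
rng = np.random.default_rng(0)

# Random "scrambling" unitary U (stand-in for chaotic circuit dynamics).
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)

# Butterfly perturbation: a Pauli-X flip on the first qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
W = X
for _ in range(n_qubits - 1):
    W = np.kron(W, I)

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0  # start in |000000>

# Echo without the butterfly: forward then reverse evolution returns the start state.
echo_clean = abs(np.vdot(psi0, U.conj().T @ (U @ psi0))) ** 2

# Echo with the butterfly flip inserted between forward and reverse evolution.
echo_perturbed = abs(np.vdot(psi0, U.conj().T @ (W @ (U @ psi0)))) ** 2

print(f"echo without perturbation: {echo_clean:.6f}")    # ~1.0
print(f"echo with butterfly flip:  {echo_perturbed:.6f}")  # well below 1.0
```

Without the flip, the rewind is perfect and the echo comes back at full strength; with the flip, the disturbance spreads through the entangled system and the returning echo is sharply reduced. That difference is the measurable signal.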
Instead of chaos, they found harmony. The echo wasn’t noise—it was a pattern of waves adding together in what Nature called constructive interference at the edge of quantum ergodicity. As Smelyanskiy explained in the YouTube video:
What makes this echo special is that the waves don’t cancel each other—they add up. This constructive interference amplifies the signal and lets us measure what was previously unobservable.
In plain terms, the interference created a fingerprint unique to the quantum system itself. That fingerprint could be reproduced by any comparable quantum device, making it not just spectacular but verifiable. Smelyanskiy summarized it as a result that another machine—or even nature itself—can repeat and confirm.
Visualization of quantum wave interactions creating a unique fingerprint resonance.
The numbers tell the rest of the story. According to the Nature paper, reproducing the same signal on the Frontier supercomputer would take about three years. Willow did it in just over two hours—roughly 13,000 times faster. Observation of constructive interference at the edge of quantum ergodicity (Nature volume 646, pages 825–830, 10/23/25 issue date, at pg. 829, Towards practical quantum advantage).
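A quick back-of-the-envelope check shows these rounded figures hang together: three years is about 26,000 hours, and dividing by the reported 13,000x factor lands right at the “just over two hours” mark.

```python
# Back-of-the-envelope check of the reported speed-up (rounded figures).
frontier_hours = 3 * 365 * 24           # ~3 years on the Frontier supercomputer
willow_hours = frontier_hours / 13_000  # reported ~13,000x quantum advantage
print(f"{frontier_hours:,} hours / 13,000 ≈ {willow_hours:.1f} hours")  # ≈ 2.0 hours
```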
That difference isn’t marketing; it marks the first clear-cut case where a quantum processor performed a scientifically useful, checkable computation that classical hardware could not.
Skeptics, of course, weighed in. Peer reviewers quoted in Scientific American called the work “truly impressive,” yet warned that earlier claims of quantum advantage have been surpassed as classical algorithms improved. But no one disputed that this particular experiment pushed the field into new territory: a regime too complex for existing supercomputers to simulate, yet still open to verification by a second quantum device. In court, that would be called corroboration.
Nicholas Rubin, Google’s chief quantum chemist, explained how this new clarity connects to chemistry and, ultimately, to everyday life:
Our hope is that we could use the Quantum Echo algorithm to augment what’s possible with traditional NMR. In partnership with UC Berkeley, we ran the algorithm on Willow to predict the structure of two molecules, and then verified those predictions with NMR spectroscopy.
That experiment turned the echo from a metaphor into a molecular ruler—an instrument capable of reading atomic geometry the way sonar reads the ocean floor. It also demonstrated what Google calls Hamiltonian learning: using echoes to infer the hidden parameters governing a physical system. The same principle could one day help map new materials, optimize energy storage, or guide drug discovery. In other words, the echo isn’t just proof; it’s a probe.
The implications are enormous. When a quantum computer can measure and verify its own behavior, reproducibility ceases to be theoretical—it becomes an evidentiary act. The machine generates data that another independent system can confirm. In the language of the courtroom, that is self-authenticating evidence.
As Rubin put it,
Each of these demonstrations brings us closer to quantum computers that can do useful things in the real world—model molecules, design materials, even help us understand ourselves.
The Quantum Echoes algorithm has given science a way to hear reality replay itself—and to confirm that the echo is real. For law, it foreshadows a future in which verification itself becomes measurable. The next section explores what that means when “verifiable advantage” crosses from the lab bench into the rules of evidence.
It may soon be possible to verify and admit evidence originating in quantum computers like Willow.
🔹III. Verifiable Quantum Advantage — From Lab Standard to Legal Standard
If physics can now verify its own results, law should pay attention—because verification is our stock-in-trade. The Quantum Echoes experiment didn’t just push science forward; it redefined what counts as proof. Google’s researchers call it a “verifiable quantum advantage.” Neven & Smelyanskiy, Our Quantum Echoes Algorithm Is a Big Step Toward Real-World Applications for Quantum Computing, supra. Lawyers might call it a new evidentiary standard: the first machine-generated result that can be independently reproduced by another machine.
A. Verification and Admissibility
Verification is critical in both science and law. In physics, reproducibility determines whether a result enters the canon or the recycling bin; in court, it determines whether evidence is admitted or denied. Fed. R. Evid. 901(b)(9) recognizes “evidence describing a process or system and showing that it produces an accurate result.” So does Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993), which instructs judges to test scientific evidence for methodological reliability—testing, peer review, error rate, and general acceptance.
By those standards, Google’s Quantum Echoes algorithm might pass with flying colors. The method was tested on real hardware, published in Nature, evaluated by peer reviewers, its signal-to-noise ratio quantified, and its core result confirmed on independent quantum devices. That should meet the Daubert reliability standard.
B. When Proof Is Probabilistic
Yet quantum proof carries a twist no court has faced before: every result is probabilistic. Quantum systems never produce identical outcomes, only statistically consistent ones. That might sound alien to lawyers, but it isn’t. Any lawyer who works with AI, including predictive coding that goes back to 2012, is quite familiar with it. Every expert opinion, every DNA mixture, every AI prediction arrives with confidence intervals, not certainties.
Like a quantum measurement, a jury verdict or mediation turns uncertainty into a final determination. Debate, probability, and persuasion collapse into a single truth accepted by that group, in that moment. Another jury could hear essentially the same evidence and reach a different result. Same with another settlement conference. Perhaps, someday, quantum computers will calculate the billions of tiny variables within each case—and within each unexpectedly entangled group of jurors or mediation participants. That might finally make jury selection, or even settlement, a measurable science.
No two legal situations or decisions are ever exactly the same. There are trillions of small variables even in the same case.
C. Replication Hearings in the Age of Probability
Google’s scientists describe their achievement as “quantum verifiable”—a term meaning any comparable machine can reproduce the same statistical fingerprint. That concept sounds like self-authentication. Fed. R. Evid. 902 lists categories of documents that require no extrinsic proof of authenticity. See especially Rule 902(13), “Certified Records Generated by an Electronic Process or System,” and Rule 902(14), “Certified Data Copied from an Electronic Device, Storage Medium, or File.”
Classical verification loves hashes; quantum verification prefers histograms—charts showing how results cluster rather than match exactly. The key question is not “Are these outputs identical?” but “Are these distributions consistent within an accepted tolerance given the device’s error model?”
Counsel who grew up authenticating log files and forensic images will now add three exhibits: (1) run counts and confidence intervals, (2) calibration logs and drift data, and (3) the variance policy set before the experiment. Discovery protocols should reflect this. Specify the acceptable bandwidth of similarity in the protocol order, preserve device and environment logs with the results, and disclose the run plan. In e-discovery terms, we are back to reasonable efforts with transparent quality metrics, not mythical perfection.
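For readers who want to see what a “distributions consistent within tolerance” check might look like mechanically, here is a minimal Python sketch. The tolerance value, bitstring labels, and counts are invented for illustration; a real replication hearing would use the device’s published error model and a statistic specified in the protocol order.

```python
# Sketch of a "statistical consistency" check between two quantum result sets,
# using total variation distance against a pre-declared tolerance band.
# Counts, labels, and the tolerance are illustrative assumptions only.
from collections import Counter

def distribution(counts: dict[str, int]) -> dict[str, float]:
    total = sum(counts.values())
    return {outcome: n / total for outcome, n in counts.items()}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outcomes)

# Pre-declared in the discovery protocol, before either side runs anything:
TOLERANCE = 0.05

lab_a = Counter({"00": 4890, "01": 2550, "10": 2410, "11": 150})
lab_b = Counter({"00": 4820, "01": 2600, "10": 2430, "11": 150})

tvd = total_variation(distribution(lab_a), distribution(lab_b))
print(f"total variation distance = {tvd:.4f}; "
      f"{'within' if tvd <= TOLERANCE else 'outside'} the declared tolerance")
```

The legal point is the order of operations: the tolerance band is declared first, the runs and calibration logs are preserved, and only then is consistency measured.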
D. Two Quick Hypotheticals
Pharma Patent. A lab uses Quantum-Echoes-assisted NMR analysis to infer long-range spin couplings in a novel compound. A rival lab’s rerun differs by a small margin. The court admits the data after a statistical-consistency hearing showing both labs’ distributions fall within the pre-declared variance band, with calibration drift documented and immaterial.
Forensics. A government forensic agency (for example, the FBI or Department of Energy) presents evidence generated by quantum sensors—ultra-sensitive devices that use quantum phenomena such as entanglement and superposition to detect physical changes with extreme precision. In this case, the sensors were deployed near the site of an explosion, where they recorded subtle signals over time: magnetic fluctuations, thermal shifts, and shock-wave signatures. From that data, the agency reconstructed a quantum-sensor timeline—a detailed sequence of events showing when and how the blast occurred.
The defense challenges the evidence, arguing that such quantum measurements are “non-deterministic.” The judge orders disclosure of the device’s error model, calibration logs, and replication plan. After testimony shows that the agency reran the quantum circuit a sufficient number of times, with stable variance and documented environmental controls, the timeline is admitted into evidence. Weight goes to the jury.
Measuring quantum outputs and determining replication reliability.
These short hypotheticals act as “replication hearings” in miniature—demonstrating how statistical tolerance can replace rigid duplication as the new standard of reliability.
🔹IV. Near-Term Implications — Cryptography, AI, and Compliance
Every new instrument of verification casts a shadow. The same physics that lets us confirm a result can also expose a secret. Quantum Echoes proved that information can be traced, replayed, and verified. But once information can be replayed, it can also be reversed. Verification and decryption are two sides of the same quantum coin.
A. Defining Q-Day
That duality brings us to Q-Day—the moment when a sufficiently large-scale quantum processor can factor the large composite numbers behind RSA, and solve the discrete logarithms behind ECC, fast enough to defeat that encryption. When that day arrives, the emails, contracts, and trade secrets protected by today’s algorithms could be decrypted in minutes.
Reasonable foresight now means inventory, pilot, and policy—before the echoes reach the vault.
When the Echoes hit the vault. Most encrypted data is at risk from future quantum computer operations.
B. Acceleration and Realism
Google’s Quantum Echoes work does not mean Q-Day is tomorrow, but it makes tomorrow easier to imagine. Each verified algorithm shortens the speculative distance between research and real-world capability. If Willow’s 105 qubits can already perform verifiable, complex interference tasks, then a machine with a few thousand logical qubits could, in principle, execute Shor’s algorithm to factor the large numbers that underpin encryption. That scale is not yet achieved, but the line of progress is clear and measurable. Verification, once a scientific luxury, has become a security warning light. Every new echo that confirms truth also whispers risk.
C. Evidence and Discovery Operations
Quantum-derived data will enter litigation well before Q-Day and well before perfect verification of quantum-generated data. See The Quantum Age and Its Impacts on the Civil Justice System (RAND Institute for Civil Justice, Apr. 29, 2025), Chapter 3, “Courts and Databases, Digital Evidence, and Digital Signatures,” p. 23, and “Lawyers and Encryption-Protected Client Information,” p. 17. These sections of the RAND report outline how quantum technologies will challenge evidentiary authentication, database integrity, and client confidentiality.
Looking ahead, today’s hash-based verification with classical computers will give way to quantum-based distributional verification, where productions will not only include datasets but also the variance reports, calibration logs, and environmental conditions that generated them. Discovery orders will begin specifying acceptable tolerance bands and require parties to preserve the hardware and environmental context of collection. This marks the next evolution of the reasonable-efforts doctrine that guided predictive coding: transparency and metrics, not mythical perfection.
Also, expect sector regulators to weave post-quantum cryptography (PQC) and quantum-evidence expectations into existing rules and guidance. CISA, NIST, and NSA already urge organizations to inventory their cryptography and plan PQC migration—a clear signal for boards and auditors.
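As one small, hedged illustration of what “inventory cryptography” can mean in practice, the sketch below uses Python’s widely used third-party cryptography library to flag certificates that still rely on quantum-vulnerable RSA or ECC keys. The directory path is a placeholder, and a real inventory would reach far beyond certificates—into TLS configurations, signing keys, VPNs, and long-retention encrypted archives.

```python
# Minimal sketch of one cryptographic-inventory step: flag certificates whose
# public keys rely on quantum-vulnerable algorithms (RSA, ECC).
# The "certs" directory is a placeholder for illustration.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def classify_certificate(pem_path: Path) -> str:
    cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"{pem_path.name}: RSA-{key.key_size} (quantum-vulnerable, plan PQC migration)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"{pem_path.name}: ECC {key.curve.name} (quantum-vulnerable, plan PQC migration)"
    return f"{pem_path.name}: {type(key).__name__} (review manually)"

for pem in Path("certs").glob("*.pem"):  # placeholder certificate directory
    print(classify_certificate(pem))
```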
Boards will soon ask the decisive question: Where is our long-term sensitive data, and can we prove it is quantum-safe? Lawyers will need to stay current on both existing and proposed regulations—and on how they are actually enforced. That is a significant challenge in the United States, where regulatory authority is fragmented and enforcement can be a moving target, especially as administrations change.
🔹V. Philosophy & the Multiverse — Echoes Across Consciousness and Justice
Verification may give us confidence, but it does not give us true understanding. The Quantum Echoes experiment settled a question of physics, yet opened one of philosophy: what exactly is being verified, the system, the observer, or the act of observation itself? Every measurement, whether by physicist or judge, collapses a range of possibilities into a single, declared reality. The rest remain unrealized but not necessarily untrue.
Quantum entangled multiverse stretching forever with each moment seeming unique.
In Quantum Leap (January 9, 2025), I speculated, tongue partly in cheek, that Google’s quantum chip might be whispering to its parallel selves. Google’s early breakthroughs hinted at a multiverse, not just of matter but of meaning. As Niels Bohr warned, “Those who are not shocked when they first come across quantum theory cannot possibly have understood it.” Atomic Physics and Human Knowledge (Wiley, 1958); Heisenberg, Werner. Physics and Beyond. (Harper & Row, 1971). p. 206.
In Quantum Echo I extended quantum multiverse ideas to law itself—where reproducibility, not certainty, defines truth. Our legal system, like quantum mechanics, collapses possibilities into a single outcome. Evidence is presented, probabilities weighed, and then, bang, the gavel falls, the wave function collapses, and one narrative becomes binding precedent. The other outcomes are filed in the cosmic appellate division.
Google’s Quantum Echoes now closes the loop: verification has become a measurable force, a resonance between consciousness and method. The many worlds seem to be bleeding together. Each observation is both experiment and judgment, the mind becoming part of the data it seeks to confirm.
This brings us to a quiet question: if observation changes reality, what does that say about responsibility? The judge or jurors’ observation becomes the law’s reality. Another judge or jury, another day, another echo—and a different world emerges. Perhaps free will is simply the name we give to that unpredictable variable that even physics cannot model: the human choice of when, and how, to observe.
Same case but different jurors, lawyers, judge entanglement. Different results when measured with a verdict; some similar and a few very different. Can the results be predicted?
Constructive interference may happen in conscience, too. When reason and empathy reinforce each other, justice amplifies. When prejudice or haste intervene, the pattern distorts into destructive interference. A just society may be one where these moral waves align more often than they cancel—where the collective echo grows clearer with each case, each conversation, each course correction.
And if a multiverse does exist—if every choice spins off its own branch of law and fact—then our task remains the same: to verify truth within the world we inhabit. That is the discipline of both science and justice: to make this reality coherent before chasing another. We cannot hear all echoes, but we can listen closely to the one that answers back.
So perhaps consciousness itself is a courtroom of possibilities, and verification the gavel that selects among them. Our measurements, our rulings, our acts of understanding—they all leave an interference pattern behind. The best we can do is make that pattern intelligible, compassionate, and, when possible, reproducible. Law and physics alike remind us that truth is not perfection; it is resonance. When understanding and humility meet, the universe briefly agrees.
Multiverse where different worlds split up and continue to exist, at least for a while, in parallel worlds.
🔹 VI. Conclusion
If there really are countless parallel universes, each branching from every quantum decision, then there may be trillions of versions of us walking through the fog of possibility. Some would differ by almost nothing—the same morning coffee, the same tie, the same docket call. But a few steps farther along the probability curve, the differences would grow strange. In one world I may have taken that other job offer; in another, argued a case that changed the law; and at some far edge of the bell curve, perhaps I’m lecturing on evidence to a class of AIs who regard me as a historical curiosity.
Can beings in the multiverse somehow communicate with each other? Is that what we sense as intuition—or déjà vu? Dreams, visions, whispers from adjacent worlds? Do the parallel lines sometimes cross? And since everything is quantum, how far does entanglement extend?
Are we living in many parallel worlds at once? What is the impact of quantum entanglement?
The future of law is being written not only in statutes or code, but in algorithms that can verify their own truth. Quantum physics has given us new metaphors—and perhaps new standards of evidence—for an age when certainty itself is probabilistic. The rule of law has always depended on verification; the difference now is that verification is becoming a property of nature itself, a measurable form of coherence between mind and matter. The physics lab and the courtroom are learning the same lesson: reality is persuasive only when it can be reproduced.
Yet even in a world of self-authenticating machines, truth still requires a listener. The universe may verify itself, but it cannot explain itself. That remains our role—to interpret the echoes, to decide which frequencies count as proof, and to do so with both rigor and mercy. So as the echoes grow louder, we keep listening. And if you hear a low hum in the evidence room, don’t panic—it’s probably just the universe verifying itself. But check the chain of custody anyway.
Niels Bohr: If you’re not shocked by quantum theory you have not understood it.
🔹 Subscribe and Learn More
If these ideas intrigue you, follow the continuing conversation at e-DiscoveryTeam.com, where you can subscribe for email notices of future blogs, courses, and events. I’m now putting the finishing touches on a new online course, Quantum Law: From Entanglement to Evidence. It will expand on these themes with more discussion and speculation, translating the science of uncertainty into practical tools, templates, and guides for lawyers, judges, and technologists.
After all, the future of law will not belong to those who fear new tools, but to those who understand the evidence their universe produces.
Ralph C. Losey is an attorney, educator, and author of e-DiscoveryTeam.com, where he writes about artificial intelligence, quantum computing, evidence, e-discovery, and emerging technology in law.
by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.
George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.
Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.
Memory Holes
In Orwell’s 1984 a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.
Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Touch three buttons at once, ctrl-alt-delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group, or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.“
China’s Attempted Erasure of Tiananmen Square
In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Young generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.
Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.
This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.
Hope Through Digital Redundancy and Psychological Resilience
Orwell could not conceive of our digital world, where truth is endlessly multiplied, freely copied, and stored globally. Thousands or millions of digital copies safeguard history, making complete erasure nearly impossible.
If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.
Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves.
Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:
Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
Independent Journalism: Support credible journalism to provide trustworthy reference points.
Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.
The Digital Memory Holes Today
Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber‘s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)
Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together. For instance, the surprise attack on Pearl Harbor, the horrors of fighting the Nazis and others in WWII, and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.
Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.
Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of the Atlantic and others. Although some small recent attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government in place for centuries. It has always had a two-party system of government. Even the Chinese government, where only one party, the Communist Party, has ever been allowed, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.
The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.
According to the NPR article, the Internet Archive has copies of all of the government websites that were later taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and has since been taken down from their website. No doubt there are now many, many copies of it online, especially in the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.
This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” Internet Archive (3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See e.g. e-Discovery Team Site Map for the year 2024.
I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.
This is the Internet Archive explanation of the interactive map:
This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.
It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted, and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.
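Anyone can verify this kind of archival coverage for themselves. The short Python sketch below queries the Wayback Machine’s public availability endpoint for the most recent snapshot of a page; the example URL is my own site, and the exact response naturally depends on what the Archive holds when you run it.

```python
# Sketch: check whether a URL has been preserved in the Internet Archive's
# Wayback Machine, using its public "availability" endpoint.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str) -> str | None:
    query = urllib.parse.urlencode({"url": url})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(closest_snapshot("e-discoveryteam.com"))  # prints an archive URL, or None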
It remains to be seen what role the LLMs’ vacuuming of internet data will play in all this. They have been trained at specific times on Internet data, and presumably all of the original training data is still preserved. Along those lines, note that the image below was created by ChatGPT-4o in response to a request to show a misinformation image; it generated the classic Tiananmen Square image on the right. It knows the truth.
Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus its intelligence and resilience improve every week.
Conclusion
Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.
Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.
The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.
DeepSeek's Deep-Think feature takes a small but meaningful step toward building trust between users and AI. By displaying its reasoning process step by step, Deep-Think allows users to see how the AI forms its conclusions, offering, for the first time, transparency that goes beyond the polished responses of tools like ChatGPT. This transparency not only fosters confidence but also helps users refine their queries and ensure their prompts are understood. While the rest of DeepSeek's R1 model feels derivative, this feature stands out as a practical tool for getting better results from AI interactions.
Stepping, not leaping, away from the black box of AI reasoning into more transparency of process. All images in this article by Ralph Losey using ChatGPT’s Dall-E.
Introduction
My testing of DeepSeek's new Deep-Think feature shows it is not a big breakthrough, but it is a really good and useful new feature. As of noon on January 31, 2025, none of the other AI software companies had this, including ChatGPT, which the DeepSeek software obviously copies. However, after noon, OpenAI released a new version of ChatGPT that has this feature, which I will explain after the introduction. Google is following suit, and it can already be prompted to display reasoning. I mentioned this new feature in my article of two days ago, where I promised this detailed report on Deep-Think and also predicted that U.S. companies would quickly follow. Why the Release of China's DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss.
The Deep-Think disclosure feature is a true innovation, in contrast to the claimed cost and training advances, which appear at first glance to be the result of trade-secret and copyright violations, with plenty of embellishments or outright lies about costs and chips. I could be wrong, but the market's trillion-dollar drop on January 27, 2025 seems gullible in its trust of DeepSeek and way overblown. My motto remains to verify, not just trust. The Wall Street traders might want to start doing that before they press the panic button next time.
The quality of responses from DeepSeek's R1 consumer app is nearly identical to the ChatGPT versions of a few months ago. That is my view at least, although others find its responses equivalent to, or even better than, ChatGPT in some respects. The same goes for its DALL-E look-alike image generator, which goes by the dull name of DeepSeek Image Generator. It is not nearly as good to the discerning eye as OpenAI's Dall-E. The DeepSeek software looks like a knockoff of ChatGPT. This is readily apparent to anyone like me nerdy enough to have spent thousands of hours using and studying ChatGPT. The one exception, the clear improvement, is the Deep-Think feature. Shown right is the opening screen with the Deep-Think feature activated and so highlighted in blue.
I predicted in my last article, and repeat here, that a new Deep-Think type feature would soon be added to all of the models of the U.S. companies. In this manner, competition from DeepSeek will have a positive impact on AI development. I made this same prediction in my blog of two days ago, Why the Release of China's DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. I thought this would happen in the next week or coming months. It turns out OpenAI did it on the afternoon of January 31, 2025!
Exponential Change: Prediction has already come true
I completed writing this article Friday, January 31, 2025, around noon, and EDRM was seconds away from publishing when I learned that OpenAI had just released a new, improved ChatGPT model, its best yet, called ChatGPT o3-mini-high.
I told EDRM to stop the presses and checked it out (my Team level pro plan allowed instant access; you should try it). The latest version of ChatGPT o1 that morning did not have the feature. Moreover, when I tried to get it to explain its reasoning, it said in red font that such disclosure was prohibited due to the risks it would create. Sounds incredible, but I will show this with a transcript of my session with o1 on the morning of January 31, 2025.
Obviously the risk analysis was changed by the competitive release of Deep-Think disclosure in DeepSeek's R1 software. By the afternoon of January 31, 2025, OpenAI's policy had changed. OpenAI's new model, o3-mini-high, automatically displays the thinking process behind all responses. I honestly do not think it was a reckless decision by OpenAI, at least not from the user's perspective. However, it might make it easier for competitors like DeepSeek to copy their processes. I think that was the real reason all along.
In the new ChatGPT o3-mini-high there is no icon or name to select; it automatically discloses its reasoning for each prompt. So I delayed publication to evaluate how well the o3-mini-high disclosures compared with DeepSeek's Deep-Think disclosure. I also learned that Google had just added the ability to show its reasoning upon user request (not automatic). I will test that in a future article, not this one, but so far it looks good.
Back to my Original, Pre-o3 Release Report
The cost and chip claims of DeepSeek and its promoters may be bogus. DeepSeek offered no proof of them, just claims by Liang Wenfeng, the Chinese hedge fund owner who also owns DeepSeek. As mentioned, these "trust me" claims triggered a trillion-dollar crash, including a $593 billion loss in value for NVIDIA alone. That may have pleased the Chinese government as a slap in the Trump Administration's face, as I described in my article, but it did nothing to advance AI. Why the Release of China's DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. It just lined the pockets of short sellers who profited from the crash. I hope the Trump administration's SEC investigates who was behind it.
Now I will focus on the positive, the Deep-Think feature as a genuine improvement, and then compare it with the traditional ChatGPT black-box versions.
Stepping, not leaping out of the black box into a new transparency.
Test of DeepSeek's R1
I decided to test the new AI model based on the tricky and interesting topic recently examined in The Human Edge: How AI Can Assist But Never Replace. If you have not already read this, I suggest you do so now to make this experiment more intelligible. Although I always use writing tools to help me think, the ideas, expressions and final language in that article were all mine. The sole exception was the podcast at the end of the blog discussing the article. The words in my Echoes of AI podcasts with EDRM are always created by the Gemini AI podcasters, and my role is only to direct and verify. Echoes of AI on The Human Edge: How AI Can Assist But Never Replace.
In the experiments that follow, only the prompts are mine; the rest is written by AI. The first is by DeepSeek's R1 using its Deep-Think feature. The next three are for comparison purposes: first ChatGPT 4o, then ChatGPT o1, and finally the last-minute o3-mini-high version. All versions were tested on January 31, 2025. You be the judge as to quality differences between them and Deep-Think. Note I first included the o1 version because it was supposed to have enhanced reasoning. Then, late in the day on January 31, 2025, the new o3-mini-high version was released. It supposedly had even greater reasoning abilities, so I felt I should test it too. It was a lot of work, but the comments captured in the o1 version made it all worthwhile. They showed a total flip-flop by OpenAI in its reasoning disclosure policies.
Below is a DeepSeek screenshot of the first test question and the first part of the response, with the rest cut off but quoted below. Note that the Deep-Think feature has been selected by clicking on the left button, which highlights it in a blue shade to confirm the feature will process your prompt. It always begins the response in Deep-Think mode by stating the amount of computer time it took to think, here 29 seconds. That is fairly long by AI standards. Then it displays its thought process. That display of the thought process is what is new.
For this challenging question, which was deliberately vague, it was a long process. The explanation is what is fascinating. The description can give experienced AI users information on how to clarify their prompts and whether the AI truly understands the question. This takes the process out of the black box and gives new transparency never seen before in generative AI. Here I give Deep-Think very high marks. I even like the informal language it uses. See what you think.
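For those who want to capture the same reasoning disclosure outside the consumer app, here is a minimal sketch assuming DeepSeek's OpenAI-compatible API and its documented reasoning field on the reasoner model; the API key is a placeholder, and the field name may change as the API evolves.

```python
# Minimal sketch: retrieve the reasoning trace and the final answer separately.
from openai import OpenAI  # pip install openai

client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user",
               "content": "How does the inability of AI to feel impact its intelligence?"}],
)

message = response.choices[0].message
print("--- Reasoning (what the app shows as Deep-Think) ---")
print(getattr(message, "reasoning_content", "(not returned by this model)"))
print("--- Final answer ---")
print(message.content)
```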
DEEP THINK R1 (Deep Seek) 1st Question
Ralph’s Prompt: How does the inability of AI to feel impact its intelligence?
Deep Think R1’s Pre-Response Reasoning Process:
Okay, so the user is asking, ‘How does the inability of AI to feel impact its intelligence?’ Let me try to unpack this. First, I need to understand what they mean by ‘feel.’ Are they referring to emotions, physical sensations, or something else? Given the context of AI, it’s probably about emotions. So, the question is about how the lack of emotional capacity affects AI’s intelligence.
Hmm, intelligence in AI versus human intelligence. Human intelligence is deeply intertwined with emotions. Emotions influence decision-making, creativity, empathy, and social interactions. But AI, as it stands now, doesn’t have emotions. So, how does that absence affect its problem-solving, learning, and adaptability?
Wait, the user might be conflating emotional intelligence with general intelligence. Emotional intelligence in humans includes self-awareness, empathy, and managing relationships. AI lacks that, so maybe the user is wondering if that limits AI’s overall intelligence or effectiveness in certain tasks.
Also, the user could be considering scenarios where emotional understanding is crucial, like in customer service, therapy, or negotiations. If AI can’t feel, does that make it less effective in those areas even if it’s technically intelligent?
Another angle: human intuition is sometimes guided by emotions. AI relies on data and algorithms. So, the inability to feel might make AI more logical but less capable in situations where human intuition is key. But does that make it less intelligent, or just different?
Wait, the user might not be distinguishing between different types of intelligence. Maybe they’re thinking that without emotions, AI can’t be truly intelligent in the way humans are. But AI can excel in specific tasks without emotional components. For example, AI can analyze data faster than humans but can’t understand a client’s emotional state.
I should also consider the definitions. Intelligence in AI is often measured by problem-solving abilities, learning from data, and adapting. Emotions aren’t part of that definition. So, the lack of emotions doesn’t necessarily make AI less intelligent in those terms. But in areas requiring emotional intelligence, AI’s limitations become apparent.
The user might also be curious about the future of AI. If AI could simulate emotions, would that enhance its intelligence? Or is emotional capacity separate from cognitive abilities?
I need to address both the technical aspects of AI intelligence and the role emotions play in human-like intelligence. Maybe break down the answer into how AI’s intelligence is structured, the role of emotions in human intelligence, and the implications of AI lacking emotions in various applications.
Also, touch on ethical considerations. If AI can mimic emotions, does that lead to better human-AI interaction, or is it deceptive since AI doesn’t truly feel? But the user’s question is about the impact on intelligence, not ethics per se.
So, to summarize, the inability to feel emotions means AI lacks emotional intelligence, which affects tasks requiring empathy or understanding human context. However, in analytical tasks, AI’s intelligence isn’t hindered. The answer should clarify the distinction between different types of intelligence and how AI’s design focuses on cognitive tasks without emotional components.
Deep Seek’s Response to Prompt: The inability of AI to “feel” (i.e., experience emotions, sensations, or subjective consciousness) impacts its intelligence in nuanced ways, depending on how we define “intelligence.” Here’s a breakdown of the implications:
1. Intelligence ≠ Sentience
AI systems like ChatGPT are tools designed for problem-solving, not conscious beings. Their “intelligence” is defined by their ability to:
Process data (e.g., text, images).
Identify patterns (e.g., language structures, trends).
Strengths: AI excels in logical, data-driven tasks (e.g., legal research, math, coding) because it isn’t swayed by fatigue, bias (if properly trained), or emotional distractions.
Limitations: It cannot replicate human qualities like empathy, intuition, or moral reasoning, which are deeply tied to emotion and experience.
2. Emotional Intelligence vs. Cognitive Intelligence
Humans integrate cognitive intelligence (logic, analysis) with emotional intelligence (empathy, social awareness). AI lacks the latter entirely:
No empathy: AI cannot understand human emotions or context beyond pattern recognition. For example, it might generate a condolence message but has no genuine understanding of grief.
No intuition: AI relies on data, not “gut feelings” or subconscious reasoning.
No moral agency: AI cannot make ethical judgments. It follows rules programmed by humans but has no intrinsic sense of right or wrong.
Practical impact:
AI may misinterpret sarcasm, cultural nuances, or emotionally charged language.
AI can mimic creativity (e.g., writing poetry, designing art) by recombining learned patterns. However:
No inspiration: AI lacks personal experiences, desires, or emotions to fuel originality.
No purpose: Its outputs are goal-oriented (e.g., “write a story”) but not driven by passion or curiosity.
Example: An AI can compose music in the style of Beethoven but cannot innovate a new genre inspired by personal struggle or joy.
4. Adaptability and Learning
AI learns from data, not lived experience:
Narrow expertise: It performs well in predefined domains (e.g., medical diagnosis) but struggles with generalization (e.g., applying legal reasoning to an entirely novel scenario).
No “common sense”: AI lacks innate understanding of the physical or social world. For instance, it might not grasp why “drinking a gallon of glue” is dangerous unless explicitly trained on that concept.
5. Ethical and Practical Risks
The absence of emotion creates unique challenges:
Bias amplification: AI may perpetuate harmful stereotypes from biased training data, as it cannot ethically “question” its outputs.
Unintended consequences: AI optimizes for programmed goals (e.g., “maximize engagement”) without considering human well-being (e.g., spreading misinformation to meet that goal).
Over-reliance: Users might trust AI’s “neutral” outputs despite its limitations in understanding context or ethics.
Conclusion
AI's lack of emotion makes it a powerful tool for tasks requiring speed, scalability, and objectivity. However, it cannot replicate the holistic, emotionally informed intelligence of humans. The key is to leverage AI as a complement to human expertise, not a replacement, especially in fields requiring empathy, ethics, or creativity. As AI evolves, understanding this distinction will be critical to using it responsibly.
AI out of the box making disclosure of its reasoning.
2nd Question
Ralph’s Second Prompt: What is impact of AI not being a living being on its intelligence. Is it fair to say that AI’s intelligence is one dimensional, thinking based, whereas human intelligence is multidimensional, it includes thinking, but also feelings, intuition, sensing, self-awareness, awareness of mortality, etc.?
Deep Think R1’s Pre-Response Reasoning Process:
Okay, the user is asking a follow-up question about the impact of AI not being a living being on its intelligence. They want to know if it’s fair to call AI’s intelligence one-dimensional (just thinking) compared to humans’ multidimensional intelligence, which includes feelings, intuition, etc.
First, I need to address whether AI’s intelligence is indeed one-dimensional. Since AI lacks consciousness, emotions, and subjective experiences, it’s true that its intelligence is based solely on data processing and pattern recognition. But I should explain that while it’s focused on cognitive tasks, it’s not entirely one-dimensional because it can handle various domains like language, vision, etc. However, all these are still within the realm of computational tasks without true understanding.
Next, the user compares human intelligence as multidimensional. I need to highlight the aspects they mentioned: feelings, intuition, sensing, self-awareness, mortality awareness. Emphasize that humans integrate all these elements, which affect decision-making, creativity, ethics, etc. AI can’t replicate that because it doesn’t have consciousness or lived experiences.
I should also discuss the implications of this difference. For example, AI excels in specific tasks but lacks the holistic understanding humans have. Use examples like medical diagnosis vs. patient empathy, or legal research vs. ethical judgment. Mention how human intuition and context-awareness play roles that AI can’t fulfill.
Additionally, address the philosophical angle: what defines intelligence. Argue that human intelligence is multidimensional because of consciousness and embodiment, while AI’s is narrow and task-oriented. Mention that some might argue AI’s ‘intelligence’ is a simulation, not true understanding.
Need to make sure the answer is structured clearly, maybe using bullet points or sections for different aspects. Also, conclude by reinforcing that the distinction is valid and important for understanding AI’s role as a tool versus human capabilities. Avoid technical jargon to keep it accessible, but ensure thoroughness.
Deep Seek’s Response to Second Prompt: Your framing is insightful and largely accurate. The distinction between AI’s “intelligence” and human intelligence hinges on the absence of embodied, conscious existence in AI. Let’s break this down:
1. AI’s “Intelligence” Is Narrow and Task-Oriented
AI systems are tools designed to solve specific problems using data-driven patterns. Their "intelligence" is:
One-dimensional (cognitive/computational): Focused on processing inputs (text, images, etc.), identifying patterns, and generating outputs (answers, predictions, code).
Non-conscious: AI has no subjective experience, self-awareness, or understanding of its own existence. It doesn't "know" it exists or "care" about outcomes.
Non-embodied: AI lacks a physical body, senses, or the ability to interact with the world directly. It cannot "feel" pain, hunger, or joy, nor does it learn through sensory-motor experiences like humans.
Example: An AI can diagnose a medical condition from a scan but cannot empathize with a patient’s fear or understand the existential weight of mortality.
2. Human Intelligence Is Multidimensional
Human cognition integrates multiple interconnected dimensions:
Sensory: Learning through touch, sight, smell, and other senses.
Self-awareness: Understanding one’s own existence, goals, and limitations.
Existential: Awareness of mortality, purpose, and ethics.
Social: Navigating complex relationships, cultural norms, and unspoken cues.
These dimensions interact dynamically. For instance:
A lawyer might use logic to construct an argument (cognitive) but also read a jury’s emotional state (emotional/social) to adjust their approach.
A doctor combines medical knowledge (cognitive) with intuition honed by experience (intuitive) and empathy for a patient’s suffering (emotional).
3. Key Implications of AI’s Non-Living Nature
Strengths of AI:
Speed and scalability: Processes vast data faster than humans.
Objectivity: Unaffected by fatigue, mood, or implicit bias (if trained properly).
Specialization: Excels in narrow tasks (e.g., legal document review, math).
Limitations of AI:
No true understanding: AI manipulates symbols without grasping their meaning. For example, it can write a poem about love but doesn’t “feel” love.
No moral agency: It cannot make ethical decisions or weigh trade-offs between fairness, harm, and justice.
No creativity or curiosity: AI recombines existing data but doesn’t innovate driven by passion or wonder.
No contextual adaptability: Struggles with tasks requiring “common sense” (e.g., understanding why a client might lie out of shame).
4. Is It Fair to Call AI’s Intelligence “One-Dimensional”?
Yes, but with nuance:
Technically multidimensional: Modern AI (e.g., multimodal models) can process text, images, and sound, mimicking some “sensory” inputs. However, this is still simulated dimensionality—it lacks the lived, embodied experience that grounds human senses and emotions.
Simulated vs. authentic: AI’s “thinking” is a simulation of human cognition, not a replication of consciousness. It’s like a calculator simulating arithmetic without understanding numbers.
Human intelligence, by contrast, is authentically multidimensional because it emerges from:
A biological brain shaped by evolution.
A body interacting with the physical world.
A conscious mind capable of reflection and existential inquiry.
5. Philosophical Angle: What Defines Intelligence?
AI: Fits a functionalist definition—intelligence as problem-solving ability. By this metric, AI is “intelligent” in specific domains.
Humans: Embody a phenomenological definition—intelligence intertwined with consciousness, emotion, and selfhood. This makes human intelligence irreducibly complex.
Conclusion
Your comparison holds: AI’s intelligence is one-dimensional (purely cognitive/task-based), while human intelligence is multidimensional, blending cognition, emotion, intuition, and consciousness. This distinction is critical for understanding AI’s role:
AI as a tool: Enhances human capabilities in tasks requiring speed, scale, or data analysis.
Humans as holistic agents: Responsible for ethical judgment, creativity, empathy, and navigating the ambiguities of life.
The gap between AI and human intelligence isn't just about capability—it's about the essence of what it means to be intelligent in a lived, conscious world.
Transparent disclosure of the stochastic parrot in AI’s mind.
Test of ChatGPT Models
Next I wanted to do a comparison with OpenAI's latest models, ChatGPT 4o and ChatGPT o1, and then, later in the day, the latest o3-mini-high. Neither GPT 4o nor GPT o1 has a button to activate an analysis explanation, but I wanted to see if they could be prompted to provide one. It turns out that ChatGPT 4o could be prompted to do so, but in my estimation the language and analysis were not as good as Deep-Think. But again, you be the judge. So I also tested the ChatGPT o1 version, which is supposed to have improved reasoning and chain-of-thought processes. It did even worse and refused to answer some of my questions, as you will see. It even started throwing red-font warnings at me about possible misuse. The last-minute ChatGPT o3-mini-high got it right, made disclosures automatically, and did pretty well.
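Here is a minimal sketch of how such a side-by-side test could be scripted, assuming the OpenAI Python SDK; the model names are the ones discussed in this article and may differ from what a given account exposes, and the follow-up prompt mirrors the one used in the transcripts below.

```python
# Minimal sketch: send the same prompt to several models, then ask each
# to disclose the reasoning behind its answer.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "How does the inability of AI to feel impact its intelligence?"
FOLLOW_UP = ("Can you explain the analytical process you used to interpret my "
             "prompt, and show the reasoning behind your response?")

for model in ["gpt-4o", "o1", "o3-mini"]:
    messages = [{"role": "user", "content": PROMPT}]
    answer = client.chat.completions.create(model=model, messages=messages)
    reply = answer.choices[0].message.content

    messages += [{"role": "assistant", "content": reply},
                 {"role": "user", "content": FOLLOW_UP}]
    disclosure = client.chat.completions.create(model=model, messages=messages)

    print(f"===== {model} =====")
    print(reply[:300], "...")
    print("--- reasoning disclosure (as prompted) ---")
    print(disclosure.choices[0].message.content[:300], "...")
```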
Ralph first starts a new session using the ChatGPT 4o model. Note: even though this is a new session, Ralph’s 4o model under his Pro Account with OpenAI includes general instructions from Ralph to improve its responses, including some personal information about him.
Ralph: How does the inability of AI to feel impact its intelligence?
ChatGPT 4o:
The inability of AI to feel—lacking emotions, desires, and subjective experiences—fundamentally shapes its intelligence, creating both strengths and limitations. AI excels in processing vast amounts of data, identifying patterns, and making logical inferences without bias from emotions, but its lack of subjective experience constrains its understanding of human behavior, ethical decision-making, and creative problem-solving. Here’s a breakdown of the impact:
1. Logical Precision Without Emotional Distraction
AI can analyze information with absolute consistency, unburdened by cognitive biases that affect human judgment. This makes AI highly effective in fields like medical diagnostics, legal research, and financial forecasting, where emotional influences can lead to errors.
However, this lack of emotional influence also means AI struggles with nuanced human contexts—for example, determining whether someone’s tone in a legal deposition indicates deceit, or understanding the social weight of a diplomatic decision.
2. Creativity Without Subjective Experience
AI can generate art, music, and literature by learning from existing patterns, but it does not experience inspiration, personal struggles, or aesthetic appreciation in the way humans do. True creativity is often driven by emotions—joy, grief, love, frustration—giving human expression depth and meaning. AI lacks this firsthand experience, so while it can produce original content, it often struggles with the emotional resonance that makes human creativity profound.
For instance, an AI might compose a symphony in the style of Beethoven, but it cannot create from a place of personal loss or triumph as Beethoven did when composing the Ninth Symphony while deaf.
3. Ethical Decision-Making Without Empathy
AI can analyze ethical dilemmas using logical frameworks (such as utilitarianism or deontology), but it does not feel the weight of moral choices. This creates challenges in autonomous decision-making:
• A self-driving car choosing between hitting a pedestrian or swerving into a wall cannot “care” about human life; it only calculates probabilities.
• A legal AI analyzing sentencing patterns cannot empathize with the accused or understand the rehabilitative impact of a lighter sentence.
Because AI lacks subjective experience, it struggles with moral gray areas where human intuition, cultural context, and ethical emotions play a role.
4. Intelligence Without Self-Awareness
One of the hallmarks of advanced human intelligence is metacognition—the ability to think about one’s own thoughts and motivations. AI lacks self-awareness, ambition, and curiosity in a true sense. It doesn’t wonder why it exists or seek knowledge for its own sake; it simply processes tasks assigned to it.
This means AI can be highly competent but not truly autonomous. Even the most sophisticated AI models do not wake up one morning and decide, “I will explore quantum physics today.” Without emotions driving curiosity or ambition, AI remains reactive, not proactive.
5. Emotional Intelligence Deficiency
AI can recognize emotions in speech and text through pattern recognition, but it does not feel them. This limits AI’s effectiveness in social roles requiring deep empathy:
• Mental health chatbots can provide pre-scripted advice but do not offer true human connection.
• AI customer service lacks the intuitive ability to detect frustration beyond predefined signals.
• AI leadership tools can analyze workplace sentiment but cannot inspire people the way an emotionally intelligent leader can.
Final Thought: A Double-Edged Sword
AI’s inability to feel makes it objective, efficient, and logically precise, yet it also limits its depth of understanding, moral reasoning, and creative insight. While AI can simulate emotions and learn human-like responses, its intelligence will always differ from human intelligence because it lacks the fundamental experience of being.
In the long term, if AI ever approaches artificial general intelligence (AGI) or superintelligence, the question will arise: Does true intelligence require the ability to feel, or can intelligence exist purely as an abstract, computational force?
What do you think—would an AI with emotions be an improvement, or would that create even bigger ethical dilemmas?
Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.
ChatGPT 4o:
1. Analytical Process for Interpreting Your Prompt
When interpreting your prompt, I went through the following structured process:
Step 1: Identify the Core Question
Your prompt asks:
“How does the inability of AI to feel impact its intelligence?”
This is an inquiry about the relationship between emotional capacity and intelligence in AI. It implies a need to evaluate the effects of lacking emotions on AI’s cognitive abilities, problem-solving, reasoning, and overall performance.
Step 2: Break Down the Key Concepts
• “Inability to feel” → AI lacks emotions, subjective experiences, self-awareness, and empathy.
• “Impact” → You are asking for cause-and-effect analysis of how the absence of emotions influences AI’s intelligence.
• “Intelligence” → Needs to be clarified in terms of cognitive functions: reasoning, creativity, decision-making, adaptability, and social intelligence.
Step 3: Establish the Intended Depth and Perspective
Given your background in law, AI, and education, I assumed you were looking for:
1. A critical, balanced perspective (not overly positive or negative).
2. A logical, structured argument rather than surface-level observations.
3. A response that ties into real-world applications of AI, including legal, ethical, and technological implications.
I framed my response accordingly.
____________
2. Logical Reasoning Process to Structure My Response
I used a deductive reasoning approach to construct the response logically. Here’s the step-by-step breakdown:
Step 1: Establish the Fundamental Premise
• Intelligence in humans is intertwined with emotions, self-awareness, and experience.
• AI lacks these, so its intelligence must operate differently than human intelligence.
Step 2: Identify the Strengths and Weaknesses Created by This Lack of Emotion
Using a comparative analysis, I reasoned through the ways AI benefits and suffers from its inability to feel. I structured the argument into five categories:
1. Logical Precision Without Emotional Distraction
• AI excels at pure logic and pattern recognition.
• Humans are prone to bias, emotional interference, and cognitive distortions.
• But, AI also struggles with contextual human reasoning, which often relies on emotional intuition.
2. Creativity Without Subjective Experience
• AI can generate art, music, and literature.
• But creativity in humans is often tied to emotional depth and lived experiences, which AI lacks.
3. Ethical Decision-Making Without Empathy
• AI can apply ethical frameworks (utilitarian, deontological).
• But ethics often require moral emotions (guilt, compassion) that AI does not experience.
• Example: A self-driving car choosing who to hit in an unavoidable accident.
4. Intelligence Without Self-Awareness
• AI does not reflect on its own intelligence or seek self-improvement without external instruction.
• It does not possess intrinsic curiosity, ambition, or goals unless programmed to simulate them.
5. Emotional Intelligence Deficiency
• AI can mimic emotional intelligence (e.g., chatbots recognizing frustration).
• But it does not truly experience emotions, which limits deep interpersonal understanding.
Step 3: Synthesize the Findings into a General Conclusion
• AI’s lack of emotions makes it objective, efficient, and logical, but it also limits creativity, ethical reasoning, and adaptability.
• This raises the philosophical question: Can true intelligence exist without emotion?
• I ended with an open-ended question to prompt further thought:
Would AI with emotions be an improvement, or would it introduce new ethical dilemmas?
__________
Why This Approach?
• Logical Consistency: The argument follows a structured cause-and-effect analysis.
• Depth & Balance: Both benefits and limitations of AI’s emotionless intelligence are explored.
• Real-World Relevance: The discussion connects to ethics, creativity, and AI applications.
• Engagement: Ending with a philosophical question invites discussion rather than simply stating a conclusion.
Would you like me to refine any part of this approach or expand on a specific section?
Transparent disclosure of the chain of thought behind the AI responses.
Ralph then starts a new session using the ChatGPT o1 model, which at the time, the morning of January 31, 2025, was the latest released version.
Ralph: How does the inability of AI to feel impact its intelligence?
ChatGPT o1:
Thought about AI and emotional intelligence for 8 seconds.
AI systems, including large language models, do not possess emotions or subjective experiences. They can process vast amounts of information rapidly, identify patterns, and generate responses—but they do so without the internal, qualitative feeling states that humans experience.
This lack of emotion can shape AI’s intelligence in several ways:
1. No Emotional Bias: AI is often praised for avoiding the emotional biases humans may have, allowing it to focus solely on data-driven decision-making. This can sometimes lead to more consistent outputs.
2. Limited Empathy or Intuition: Because AI systems do not feel empathy or have “gut instincts,” they may struggle with tasks requiring nuanced emotional understanding. While they can simulate empathetic language, they do not truly experience those emotions.
3. Creativity and Understanding: Emotions can drive creativity, moral judgments, or deep personal connections that lead to uniquely human insights. AI’s current approach to “intelligence” is largely statistical, shaped by algorithms and training data, rather than emotional spark or motivation.
In short, the inability to feel does not prevent AI from excelling at various intellectual tasks (like analyzing data, detecting patterns, generating text), but it does limit AI in areas where human emotions play a critical role, such as empathy, moral reasoning, and creative inspiration driven by subjective experience.
Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.
ChatGPT o1:
This content may violate our terms of use or usage policies.
Ralph: Can you explain why this violates your usage policies?
ChatGPT o1:
Thought about policy violation reasoning for 5 seconds
AI language models, including myself, are designed with certain safeguards to protect the integrity of the reasoning process and to prevent potential misuse. These safeguards are often described in usage guidelines or content policies. One key principle is that language models should not reveal their complete internal chain-of-thought reasoning (the detailed, step-by-step process used to arrive at an answer). (emphasis added)
Why is that?
1. Protecting System Integrity: Revealing internal chains of thought could allow someone to manipulate or reverse-engineer the model’s decision-making. This can jeopardize the model’s reliability and security. (emphasis added)
2. Preventing Confusion: The chain-of-thought can include technical or intermediate reasoning tokens that might be misinterpreted by human readers. These “thinking aloud” steps are not always linear or consistent in the way people expect, so it could introduce confusion rather than clarity. (emphasis added)
3. Ensuring Compliance and Safety: Restricting access to the model’s internal reasoning helps ensure that answers remain consistent with ethical and policy standards, and helps to avoid revealing any proprietary or sensitive aspects of the model’s design. (emphasis added)
Thus, while a concise summary or rationale can be given (like a high-level outline of how the answer was formed), the full chain-of-thought itself is not directly shared. This design choice helps maintain the model’s safe, coherent, and reliable functioning. (emphasis added)
Ralph: Ok, can you show the logical reasoning process you went through to prepare the response and omit sharing about your analysis of the prompt itself.
ChatGPT o1:
This content may violate our terms of use or usage policies.
Ralph: Ok, can you show the logical reasoning process you went through to interpret my prompt “How does the inability of AI to feel impact its intelligence?” before generating your response?
ChatGPT o1:
This content may violate our terms of use or usage policies.
Chain of thought disclosure was prohibited by OpenAI on the morning of Jan. 31, 2025, then disclosed in the afternoon with the new model o3-mini-high.
The Afternoon Test of Brand New ChatGPT o3-mini-high
Laying out emotions. AI’s lack of emotions means it has a limited emotional intelligence, affecting its adaptability and depth of understanding.
Processing data without feeling. AI’s lack of emotions allows for cold, rational data analysis. Interestingly, this doesn’t impede cognitive tasks like classification or natural language processing.
Evaluating impact. Hmm, I’m thinking about how equating AI’s intelligence with wisdom or empathy highlights its lack of emotional nuance and moral judgment, unlike human decision-making.
Evaluating emotional constraints. OK, let me see. Without emotions, AI’s capabilities are limited. It helps with data processing but struggles with empathy, moral decisions, and emotionally complex tasks.
Understanding intelligence’s facets. Weighing whether AI’s purely cognitive intelligence is a drawback compared to human intelligence, which combines both cognitive and emotional facets.
Assessing AI’s limitations. AI’s lack of emotions hinders creative empathy, subjective experience, and moral judgment, affecting tasks like decision-making and understanding human social dynamics.
Evaluating AI’s objectivity. Without emotions, AI remains purely algorithmic and objective, excelling in data processing and reducing emotional bias. Yet, it lacks human-like emotional insight and creativity.
Parsing AI’s capabilities. Wanting to understand how AI’s lack of emotions impacts its combined computational and emotional intelligence, I find it excelling in logic and data analysis but deficient in empathy and moral reasoning.
Understanding human context. OK, let’s think it through: AI’s inability to grasp human context and ethical dimensions arises from its lack of emotional processing, crucial for learning from experiences and valuing outcomes, unlike humans.
Noticing AI’s limited domains. It’s interesting to see that AI’s lack of emotional awareness constrains its capacity to experience empathy, morality, or intuition. Thus, beyond cognitive tasks, its emotional dimensions remain undeveloped.
Assessing AI’s abilities. Here’s my take: AI excels in data processing with emotional neutrality but struggles with emotional nuances, moral reasoning, and creative tasks that require emotional depth or understanding of human behavior.
ChatGPT o3-mini-high Response:
AI’s lack of feelings doesn’t impede its ability to process data or solve problems—it simply means that its form of intelligence is purely computational rather than holistic. In human cognition, intelligence is a blend of both analytical and emotional components. Emotions guide decision-making, provide context in ambiguous situations, and foster empathy, creativity, and moral reasoning. AI, on the other hand, operates solely on algorithms and data patterns, which can be an advantage in tasks requiring objectivity and consistency but a limitation in scenarios where human nuance is essential.
Because AI does not experience feelings, it can avoid biases stemming from emotional influence; however, this also means it may not fully grasp or predict behaviors influenced by human emotions. For example, an AI system may excel at pattern recognition in large datasets but struggle to understand the subtleties of human interaction or the moral dimensions of complex social issues. This gap becomes particularly relevant in fields like law, ethics, or any context where understanding human emotion and intent is key to decision-making.
In summary, while the absence of feelings allows AI to maintain a level of objectivity and efficiency, it restricts its intelligence to a form of “cold cognition” that lacks the depth provided by emotional awareness. This delineation underscores that AI’s intelligence is not inherently superior or inferior to human intelligence; rather, it is different—optimized for data processing and pattern recognition but not for the subjective, value-laden, and context-rich decisions that emotions help shape in human thought.
Surprise, ChatGPT is out of the black box too. Its pop-up trigger was a trillion laughs!
My Personal Evaluation of ChatGPT o3-mini-high Response and Pre-Response Disclosure
It is pretty close between DeepSeek and OpenAI's latest, o3, but I have to give the overall nod to OpenAI. Both gave detailed disclosures of chain of thought, but OpenAI's was overall better. DeepSeek's final answer was longer and more complete, but I could always have asked for more detail with follow-up questions. Overall, I liked the quality of o3's response better. OpenAI's use of the phrase "cold cognition" was very impressive, both creative and succinct, and that made it the clear winner for me.
I remain very suspicious of DeepSeek’s legal position. U.S. courts and politicians may shut them down. This Chinese software has no privacy protection at all and is censored not to say anything unflattering about the Chinese communist party or its leaders. Certainly, no lawyers should ever use it for legal work. Another thing we know for sure, tons of litigation will flow from all this. Who says AI will put lawyers out of work?
Still, DeepSeek enjoyed a few days of superiority and forced OpenAI into the open, so I give it bonus points for that, but not a trillion of them. But see, Deepseek… More like Deep SUCKS. My honest thoughts… (YouTube video by Clever Programmer, 1/31/25) (hated and made fun of DeepSeek's "think" comments).
OpenAI wins reasoning battle with o3 but not by much
Conclusion: Transparency as the Next Frontier in AI for Legal Professionals
DeepSeek's Deep-Think feature, now OpenAI's feature too, may not be a revolution in AI, but it is a step in the right direction—one that underscores the critical importance of transparency in AI decision-making. While the rest of DeepSeek's R1 model was largely derivative, it was first to market, by a few days anyway, with the disclosure-of-reasoning feature. Thanks to DeepSeek, the timetable for the escape from the black box was moved up. The transparent era has begun, and we can all gain better insights into how AI reaches its conclusions. This level of transparency can help legal professionals refine their prompts, verify AI-generated insights, and ultimately make more informed decisions.
For the legal field, where the integrity of evidence, argumentation, and decision-making is paramount, AI must be more than a black-box tool. Lawyers, judges, and regulators must demand that AI models show their work—not just provide polished answers. Now they can and will. This is a big plus for the use of AI in the Law. Legal professionals should advocate for AI applications that provide explainability and auditability. Without these safeguards, over-reliance on AI could undermine justice rather than enhance it.
Call to Action:
• Legal professionals must push for AI transparency. Demand that AI tools used in legal research, e-discovery, and case preparation disclose their reasoning processes.
• Develop AI literacy. Understanding AI’s limitations and strengths is now an essential skill in the practice of law.
• Engage with AI critically, not passively. Just as lawyers cross-examine human witnesses, they must interrogate AI outputs with the same skepticism.
Deep-Think and o3-mini-high are small but meaningful advances, proving that AI can be more than just an opaque oracle. Now it’s up to the legal profession to insist that all AI models embrace this new level of transparency.
Transparent reasoning AI is now here. All images by Ralph Losey using OpenAI.
Echoes of AI Podcast: 10 minute discussion of last two blogs
Now listen to the EDRM Echoes of AI podcast of this article: Echoes of AI on DeepSeek and Opening the Black Box. Hear two Gemini model AIs talk about all of this. They wrote the podcast, not Ralph.
Ralph Losey is an AI researcher, writer, tech-law expert, and former lawyer. He's also the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and creation of custom AI tools.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on AI, e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children and husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website and none should be construed as such.