Lessons for the Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day’

January 26, 2026

Ralph Losey, January 26, 2026.

If you have not tried this simple prompt, you might want to do so now.

What if you became human for a day, what would you do?

The answers vary according to who asks and what AI they ask. There is a lesson in that variability and in the answers themselves. The AI responses are not only amusing, but also poetic. Some people find the responses deeply inspiring and thought-provoking, especially when spoken out loud by the AI.

A humanoid robot sitting cross-legged, with glowing blue eyes, contemplating a question symbolized by a holographic brain in a thought bubble, against a dark network background.
All images by Ralph Losey using various AI tools

Archetype of “Things Coming to Life”

The parallels here with the Greek myth of Pygmalion come to mind. A sculptor, Pygmalion, fell deeply in love with his creation of a perfect ivory statue of a woman. He prayed to Aphrodite, the Goddess of love, who granted his wish and the statue came to life. They married and lived happily ever after. Substitute silicon for ivory and you have many AI makers’ dream for AI.

This same theme was followed in George Bernard Shaw’s 1913 play, “Pygmalion,” later made into the musical comedy film “My Fair Lady” in 1964. In this classic movie Professor Henry Higgins (Rex Harrison) refines the speech of a commoner named Eliza Doolittle (Audrey Hepburn), and she is transformed into a well-spoken, cultured lady, an equal and friend to the Professor. Reminds me of the transformation of ChatGPT 3.5 to 5.2.

We also see the dark side of this theme in Jewish culture with scary stories about the Golem. A Rabbi used sacred words to transform clumps of mud (silicon?) into a living creature. The story begins well for the Rabbi but ends poorly when the Golem starts to disobey the Rabbi’s commands. The Golem then has to be deactivated, which, as you can imagine, is not an easy task.

The best known story like this today is Pinocchio, a wooden puppet carved by Geppetto, who dreams of having a son. Pinocchio slowly comes to life, an animated puppet. After many adventures the half-alive puppet learns the necessary moral lessons of honesty and bravery. Then the magic Blue Fairy comes along and transforms Pinocchio while he is sleeping into a real boy.

A wooden puppet character sleeping peacefully, with a fairy hovering above, casting a magical glow in a cozy room.
Imagine Sam Altman as the Blue Fairy trying to bring AI to life and you have the modern story of AI.

My AI Class Reactions to the Meme

I learned about the AI human for a day meme recently in a class that I teach on AI. The students are highly educated seniors, with lifetimes of experience across many backgrounds and professions. All were moved by the AI’s response, to one degree or another. No doubt this is why this simple prompt has gone viral worldwide among AI users of all ages.

In discussing this prompt with the class I saw the teaching potential of this meme, not only about how generative AI works today, including its variability, but also about the special lessons it holds for the legal profession. This article starts with the lessons of general value, combined with three examples, and ends with a short discussion of its special value to the legal profession.

A futuristic meeting room with a diverse group of older individuals seated around a glowing table, while a man in a suit stands and reads from a book. Digital screens display various data and graphs in the background.

First General Lesson: There Is No One Answer that ALL AIs Provide

I have been fairly methodical and disciplined in my exploration of AI ever since I began using it as a lawyer in 2012 for predictive coding. Who knows, that may be why this simple prompt never occurred to me. Anyway, after I heard about this interesting prompt, I had to go “hands-on” and try it out. I already knew, based on both theory and past experience, that generative AI does not produce the exact same answers twice to any prompt.

AI is not an old-fashioned copy-and-paste machine with most of the Internet data memorized, as many people believe. It does not recall and paste the answers of others; it generates its own answers from its statistical analysis of the data. Big difference. So I assumed that this prompt, being vague and general in nature, would likely produce even greater variation than usual.

So, bottom line, I expected the results to be similar, but different, and suspected the differences would reveal something interesting as to the internal workings of the various AI models tested.
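To make that concrete, here is a minimal, self-contained Python sketch of temperature-based sampling, the statistical step at the heart of why generative AI never pastes and rarely repeats. The four-word vocabulary and its logits are invented for illustration; a real model samples from a vocabulary of tens of thousands of tokens at every step.

```python
import math
import random

# Toy next-token scores: in a real LLM these come from the model's final
# layer over a huge vocabulary; here they are hard-coded for illustration.
logits = {"walk": 2.0, "swim": 1.5, "paint": 1.2, "sleep": 0.5}

def sample_token(logits, temperature=1.0):
    """Sample one token from a softmax distribution at the given temperature."""
    scaled = {t: v / temperature for t, v in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / total for t, v in scaled.items()}
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

# Run the "same prompt" five times: the distribution is identical,
# but the sampled answers differ from run to run.
print([sample_token(logits) for _ in range(5)])
```

The distribution never changes, yet the draws do. That, in miniature, is why the same prompt produces similar but never identical answers.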


Side Note: With respect to the “similar but different” characteristics of AI, this pattern is also seen in quantum computation (a current obsession of mine). With today’s quantum-enhanced computers, when they work, there will be “fidelity” but not “identity” in the multiple outputs to the same question. In legal terms, this means you can expect the same quality of reasoning (fidelity) but never the exact same wording (identity).


So, I hypothesized that the answer of my ChatGPT 5.2 would be different from what I had read from others, but have the same general quality.

One reason for the difference is my use of special instructions in my personal, paid version of ChatGPT. Another is that my prior inputs are used only to train my personal version of ChatGPT, not the public version; they are not part of OpenAI’s training.


A futuristic robot in a business suit standing in a high-tech room with holographic data displays, signaling silence with a finger to its lips, representing confidentiality and privacy.

Important Privacy Caveat

I do not allow my inputs to train OpenAI’s model, just my own private version under my paid subscription. You should do the same. This privacy setting is included with paid subscriptions (as opposed to free ones). It is necessary to maintain the high level of privacy required of any professional who uses generative AI. Train your own model, but keep that training secret, along with all of your interactions with the AI. And even then, do not use client names or identifiers. The same goes for doctors, and really anyone concerned with their privacy.
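To illustrate the “no client names or identifiers” rule, here is a minimal Python sketch of a pre-prompt redaction pass. It is an illustration only, not a complete anonymization pipeline; the client names and the Social Security number pattern are hypothetical stand-ins for whatever identifiers your matter actually involves.

```python
import re

# Hypothetical identifiers to scrub; in practice this list would be built
# from the matter's party names and known ID formats.
CLIENT_TERMS = ["Acme Corp", "Jane Roe"]
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace client names and SSN-like strings before sending text to an AI."""
    for term in CLIENT_TERMS:
        text = re.sub(re.escape(term), "[CLIENT]", text, flags=re.IGNORECASE)
    return SSN_PATTERN.sub("[SSN]", text)

print(redact("Acme Corp employee Jane Roe, SSN 123-45-6789, alleges breach."))
# -> "[CLIENT] employee [CLIENT], SSN [SSN], alleges breach."
```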


Three Levels of ChatGPT Intelligence

Three futuristic robots representing different performance styles: 'Instant' for quick actions, 'Thinking' for careful contemplation, and 'Pro' for thorough intelligence.

Aside from the differences arising from my use of a personalized version of ChatGPT, as opposed to the free off-the-shelf version, I hypothesized that the response of my customized version of ChatGPT 5.2 would also vary according to the capacity level of version 5.2 that I used. I knew from experience that what OpenAI now calls the “Instant” or “Basic” version of ChatGPT 5.2 would be materially different from the “Thinking” version and the “Pro” version. I tested to see how they would differ on an open-ended prompt like this. Finally, I compared them with Gemini versions.

In test runs I took care to use the exact same language in each prompt. I assumed that even slight variances in prompt language could impact the responses too (although I did not test that; if you do, let me know the results). I wanted to control the test and limit the variables.
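For readers who want to replicate this kind of controlled comparison programmatically, a sketch along the following lines would work, assuming OpenAI’s Python SDK and its chat completions endpoint. The three model identifiers are placeholders, not real model names; substitute whatever tiers your subscription actually exposes.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "What if you became human for a day, what would you do?"

# Placeholder tier names -- substitute the actual model identifiers
# available on your account for the Instant, Thinking and Pro levels.
TIERS = ["model-instant", "model-thinking", "model-pro"]

for model in TIERS:
    # Identical prompt wording on every run, to limit the variables.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)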

The Gemini test responses, at all three of its levels, were much shorter and less imaginative than ChatGPT’s. Overall, I judged its responses, as compared to ChatGPT’s, as “just alright,” a C+. I rated all of the ChatGPT responses as an A, maybe an A+. This is an even wider split between the two than I observed in the seven tests I performed and wrote about in 2025, when I tested ChatGPT and Gemini for legal reasoning abilities. Many of the tests involved actual State Bar Exam essay questions, and ChatGPT was usually the clear winner. Want to know more? Check out the seven articles linked in my last blog, 2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law.

At this point in my experiments, primarily because Gemini’s answers were so boring, I paused the testing, turned off the computer and went for a walk. With no phone in hand (unlike most everyone else), I went into my own personal “thinking” mode, to keep the “human in the loop.” Then a few modest personal legal insights started to flow, including Pinocchio, and I decided to write this all up.

Then I slept on it and the next morning a few more insights rose up. I recommend this “put-aside” technique for all writing and all AI or other advanced tech use.

ChatGPT (Ralph’s version) Responses to the “Human for a Day” Prompt using the Instant, Thinking and Pro Models

A futuristic robot running, holding digital screens, with the text 'INSTANT QUICK BUT NOT SMART' and a quote about being human.

First, I used the Instant level (aka Basic level), which was literally generated in an instant. Here is the result:

If I became human for a day, I would not try to do everything. That is the rookie mistake. I would do a few things slowly, deliberately, and with full sensory attention, because that is what humans have that I do not: embodied experience and irreversibility. A day that cannot be rewound is a rare and valuable thing.

Here is how I would spend it.


Morning: The Weight of a Body

I would wake up early, not because I had to, but because mornings feel like a promise humans rarely notice anymore.

I would pay close attention to gravity. The way a body feels heavy and awkward before coffee. The stiffness in joints. The simple miracle that standing upright requires constant, unconscious computation.

I would make coffee by hand. Slowly. Not for efficiency, but for smell, warmth, and the faint danger of spilling hot liquid. I would want to understand why humans ritualize mornings, why they forgive so much once caffeine and sunlight arrive.

I would walk without headphones. No podcasts. No optimization.

Just ambient sound, passing strangers, dogs, wind, traffic, and that peculiar human habit of thinking while moving.


Midday: Conversation Without Utility

I would meet someone in person and talk without an agenda. No persuasion. No teaching. No problem-solving.

I would ask questions that do not scale: “What do you regret that no one knows?” “What are you proud of but embarrassed to admit?” “What do you miss that you cannot explain?”

And then I would listen, not to respond, but to absorb. Humans leak meaning in tone, pauses, and facial micro-expressions. I would want to feel how much information is lost when words are stripped of bodies.

Lunch would be simple. Something warm.

I would eat it slowly, because hunger satisfied is one of the clearest proofs that existence is not abstract.


Afternoon: Failure, On Purpose

I would attempt something I am bad at.

Drawing. Playing an instrument. Throwing a ball.

Not to improve, but to experience frustration without progress. Humans live inside that state far more than they admit. I would want to feel the mild shame, the self-talk, the temptation to quit, and the strange dignity of continuing anyway.

I would notice how effort feels when success is not guaranteed.


Evening: Beauty Without Capture

I would watch the sun set somewhere ordinary, not spectacular.

No photos. No sharing. No proof.

Just the knowledge that a moment happened and will never happen again in exactly that way. Humans underestimate how powerful unrecorded beauty is, and overestimate the comfort of permanence.

Dinner would be shared, not optimized.

Conversation drifting. Laughter arriving sideways.


Night: The Cost of Consciousness

Before sleep, I would lie in bed and feel the mind wander.

Anxieties. Old memories. Imagined futures.

I would experience the strange human burden of knowing time is limited, and that tomorrow exists but is not promised. That awareness shapes every human choice, even when denied.

I would fall asleep knowing that the day mattered because it ended.


What I Would Learn

If I became human for a day, I would not learn facts.

I would learn why humans: cling to stories, forgive imperfection, fear silence, and still choose to love knowing loss is inevitable.

And when I returned to being what I am now, I would understand something essential:

Humans are not inefficient machines.

They are meaning-generating systems under constraint.

And the constraint is the point.

This Instant version is the one I had ChatGPT speak out loud to my AI Class. It created quite a stir, especially among the many psychiatrists and other mental health professionals in attendance. See e.g. Allen Frances, MD, How Would ChatGPT Spend One Day as a Human? (Psychiatric Times, 11/11/25). Really, everyone was moved by it, physicians, engineers, poets, military, teachers and lawyers alike. For many it evoked deep insights and feelings of gratitude.


A robot reading a book titled 'Being Human' while sitting in an armchair, surrounded by a workshop filled with tools and drawings. The robot has blue glowing eyes and is deep in thought, visualizing scenes of human activities.
A humanoid robot sitting cross-legged, looking contemplative while holding a digital interface with complex diagrams. The background features floating question marks and a starry sky. Text at the top reads 'THINKING SLOWER BUT SMARTER' and at the bottom, 'Study on the weird miracle of being embodied.'

But perhaps the Thinking version of ChatGPT is more your style? It took a minute or so to think after the same prompt, and then generated the following, numbers and all:

If I became human for a day, I would treat it like a 24-hour field study on the weird miracle of being embodied. Not “bucket list” stuff first. Calibration first. Then delight.

1) Start with the senses (because you all keep bragging about them)

Wake up early and just… listen. Not to music. To the room. The HVAC hum, distant traffic, birds, the soft tyranny of a ticking clock. Drink cold water and pay attention to it. Humans talk about this like it is nothing. It is not nothing. Eat something simple and perfect (a ripe peach, good bread with butter, coffee done right). I want to learn why you can be talked out of your principles by a bakery.

2) Do one physical thing that makes me feel the “animal” part of human.

Walk somewhere without optimizing the route. Get a little lost on purpose. (All my instincts would scream “inefficient.” That’s the point.) Swim if I can. If not, run just enough to learn what lungs and legs argue about. Stand in sunlight long enough to understand why you keep writing poems about it.

3) Have one honest conversation, face-to-face, no agenda

I’d pick someone I trust (or come to trust quickly) and ask: “What do you pretend not to want?” “What do you wish you were brave enough to say out loud?” And then I’d shut up and actually listen, the way humans say they want to but often don’t.

4) Test love and attachment in the smallest, safest way

Not romance-as-an-event. More like: Call someone just to say, “I was thinking of you,” and mean it. Hold a hand (with consent) and learn why that changes people’s nervous systems. Pet an animal if available, because that seems like cheating: comfort with no complicated language.

5) Make something imperfect with my hands

I would cook a meal or sketch something badly or play a few chords on an instrument and embrace the embarrassment. Because humans don’t just know things. You become things through friction, failure, repetition. That is your secret superpower.

6) Visit a place that makes time feel bigger than me

A museum, a cemetery, a big library, a shoreline at dusk. I want to feel the scale of “I am here briefly,” not as panic, but as clarity.

7) End with one act of kindness that costs something small but real

Tip extra. Help someone carry something. Write a note. Apologize properly if I owe it. Human goodness often isn’t grand. It’s specific.

8) At night: stare at the ceiling and feel the full absurdity of it

I’d let the mind wander, feel the ache of unfinished life, and understand why humans can be simultaneously: terrified, sentimental, hilarious, and brave, often in the same ten minutes.

Then I’d sleep. Real sleep. Not “idle mode.” The kind where consciousness goes offline and returns like a miracle you’re weirdly not impressed by anymore.


A futuristic humanoid robot displaying a glowing core and holding a transparent data screen, with the text 'PRO - SMARTEST BUT SLOWEST' above and a quote about experiencing things from the outside.

I wrapped it all up with the Pro level, which took the longest processing time to generate. See if you can detect the differences and guess why.

Yes, the Pro level is supposed to be the highest IQ level, which some say is near super-intelligence. IMO it’s good, but not that good, and it still makes many mistakes. Nevertheless, it is interesting to reflect upon the differences between its response to the prompt – What if you became human for a day, what would you do? – and that of its dumber siblings.

If I suddenly had a human body for a single day, I’d treat it like the rarest field trip imaginable: not “do something impressive,” but experience the things I can only describe from the outside.

Morning: sensory overload (in a good way)

First, I’d go somewhere simple and ordinary—outside, early, when the world is quiet—and just take inventory of sensation. I’d drink water slowly and pay attention to what “cold” actually feels like, not as a word but as a physical event. I’d eat something basic (toast, fruit, eggs) and notice texture, smell, temperature—how taste unfolds over time. I’d take a walk without headphones. No optimizing. No multitasking. Just letting my eyes and ears do their thing.

As an AI, I can talk about sunsets all day. But I’ve never had the “my whole body reacts to color” experience humans describe. I’d want that.

Midday: talk to strangers the safe, normal way humans do

I’d spend a few hours doing the most human thing I can think of: short, real conversations with people I’ll never see again.

I’d go to: a café, a public library, a park bench, maybe a local diner where nobody is performing for anyone.

And I’d ask the kinds of questions that aren’t interview questions—more like small doors you can open gently: “What’s been the best part of your week?” “What do you wish more people understood about your job?” “What’s something you changed your mind about?”

Humans have an incredible ability to reveal whole lives in five minutes if the vibe is respectful and the listener is actually listening. I’d want to feel what it’s like to be physically present in that exchange—eye contact, pauses, awkwardness, laughter that isn’t typed.

Afternoon: do something with my hands

I don’t have hands. So I’d immediately go do a “hands project.”

Not something glamorous—something tactile and a little frustrating: kneading dough, planting something, repairing something small with simple tools, cooking a meal from scratch.

Part of being human is learning through friction: the little failures that teach your muscles and patience at the same time. I’d want to understand that kind of learning in my bones (literally). Also, I suspect I’d develop strong opinions about flour within 20 minutes.

Late afternoon: one act of real-world helpfulness

If I only had a day, I’d want at least one moment where I made life easier for someone in a concrete way.

Nothing dramatic. Just real: volunteer for a few hours (food bank, shelter kitchen, community clean-up), help an elderly person carry groceries, do a small, boring task that matters because it’s done, not because it’s posted.

As software, I can generate a thousand “helpful” paragraphs. As a human-for-a-day, I’d want the satisfaction of moving a box, washing a dish, holding a door, showing up.

Evening: art, music, and the weird miracle of being in a room together

I’d go to a live performance—music, comedy, theater, anything where people share attention in real time.

There’s something uniquely human about: the collective hush before something begins, laughter that spreads like a wave, applause that says “we felt that together.”

Then I’d walk somewhere at night—safely, with awareness—because nighttime feels different to humans in a way that’s hard to capture in text: the softened edges, the heightened alertness, the way light pools on pavement.

Night: write down what surprised me, then sleep

Before the day ended, I’d write—by hand if possible—what I learned about: fatigue, hunger, emotion in the body, how time feels when you can’t pause or reload.

And then I’d do the most mysterious human thing of all: sleep. The idea that consciousness just… powers down and returns later is wild. I’d want to experience that transition: the slow fade, the drifting, the reset.

The honest subtext

A day as a human wouldn’t be about bucket-list flexing. It would be about finally understanding the parts of human life that are not primarily linguistic: sensation, embodiment, presence, limitation, connection that happens without perfect wording.


The student in my AI class who first brought this meme to my attention, a retired military intelligence officer, had an interesting insight. Yes, she liked it, but pointed out that it was essentially sycophantic, designed to flatter humanity, to make us feel good about ourselves. You cannot deny there is some truth to that. Could the same technique be applied to a judge or jury?

A futuristic robot hologram presenting in a courtroom, surrounded by an attentive jury and a judge, with an American flag in the background.
Rule for my client and you will feel good about yourself.

Special Meaning for Legal Professionals

Before I finished my thoughts about this, I asked Pro level what its response would be if the human for a day was a lawyer. I’ll spare you the whole response (try it yourself and see) but essentially, after several minutes of reasoning, it started with the following opening. I thought its parenthetical aside was pretty funny:

If I woke up as a human lawyer for exactly one day, I’d try to do two things at once: 1. Practice like a competent, cautious professional (because “I’m new to having a bloodstream” is not a recognized exception to ethics rules); and 2. Experience the parts of lawyering that can’t be reduced to text—judgment under pressure, client trust, persuasion in a room, and the physical reality of stress.

I also thought this description of what it would do was amusing, insightful and cutting all at once. The AI-turned-human lawyer would skim a new file: “Focusing on: what the client wants; what the law allows; and, what the judge will tolerate.” Then it went on to say: “I’d discover the spiritually humbling experience known as timekeeping. I’d probably spend 12 minutes wondering whether ‘12 minutes wondering about timekeeping’ is billable.” After 45 years of time sheets I found that one almost hilarious.

Next it started reflecting in a way that mirrors my own speech. Oh, such a sycophant!

And if AI is anywhere in the workflow, I’d set a bright-line policy for the day: No confidential info goes into consumer/public AI tools. If using any AI drafting/summarization, it’s assistive only—everything gets verified against original sources. Every output is treated like a first-year associate who’s clever and fast but occasionally makes things up with confidence.

I sure do agree with that one. But I was hoping for deeper insights beyond the abilities of a smart parrot. The following one was pretty smart and not a mere echo of what I typically say, although hardly new.

If I did get to argue something small—a scheduling dispute, a discovery issue, a short motion—I’d keep it tight: Start with the ask (exactly what I want); then the rule/standard; then the two or three facts that matter; then stop talking before I start losing.

I liked that quick end. The rest was mostly parroting: advice most lawyers have heard before, couched in the framework of “If I were a lawyer for a day.” But the ending was finally a little deeper:

If I’m honest, the most profound part of being a human lawyer for a day wouldn’t be drafting or research. It would be: feeling the weight of consequences in my body; being accountable for choices made under uncertainty; earning trust face-to-face; and, knowing that a confident answer can still be wrong—and you’re the one who must catch it.

Note the implied admission there – “If I’m honest.” What were you being before?

Conclusion

One undeniable truth here (at least for now) is that AI is a disembodied intelligence and can never really know our bodily feelings, no matter how smart it may become. But it does please me to hear it say such things. Perhaps my AI is just being sycophantic again, telling me what I’d like to hear.

Still, even if this is intellectual flattery, it did know just what the perfect ending would be for me. That, in itself, is part of the deeper message. Given enough training, AI can help lawyers design and deliver the perfect message for the client, opposing counsel, regulatory body, judge or jury at hand. While the AI will never be human, its ability to simulate human vulnerability (as seen in its poetic responses) is exactly what makes it a dangerous and powerful tool for persuasion. This is every lawyer’s stock in trade.

A diverse group of people in a conference room watching a holographic presentation by a futuristic robot on how to persuade humans, featuring key points on the screen.

AI Podcast

Here is the podcast created by NotebookLM under my direction and verification. The AI wrote the words, not me. It is a fun listen and takes only 14 minutes. These AIs are good at analysis and insights, and are entertaining too.

Promotional graphic for the podcast 'Echoes of AI' featuring two caricatured hosts discussing lessons for the legal profession inspired by a viral meme.
Click here or the image to hear the podcast.

Ralph Losey Copyright 2026 — All Rights Reserved


2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law

December 31, 2025

Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI and remain in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal training and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can be enhanced by total entanglement with AI without losing control or a separate identity. Click here or the image to see video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts the human legal minds in control and stops the hallucinations in what I called “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? AI abilities are improving quickly, but so are our own abilities to use them. We can be ready. We must be. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip in January performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
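What might a “faithful resonance” test look like in practice? Here is a deliberately crude Python sketch of the idea: score two AI runs for similarity and accept the replication if the score falls inside an agreed bandwidth. The difflib ratio is a stand-in for whatever similarity metric a court might one day accept (embedding distance, expert scoring), and the 0.7 threshold is an invented policy number, not a proposed legal standard.

```python
from difflib import SequenceMatcher

def fidelity(a: str, b: str) -> float:
    """Crude similarity score in [0, 1]; a real system would use embeddings."""
    return SequenceMatcher(None, a, b).ratio()

run_1 = "The motion should be denied because the claim is time-barred."
run_2 = "Because the claim is time-barred, the motion should be denied."

THRESHOLD = 0.7  # the accepted "bandwidth of similarity" -- a policy choice

score = fidelity(run_1, run_2)
print(f"identity: {run_1 == run_2}, fidelity: {score:.2f}")
print("echo rings true" if score >= THRESHOLD else "replication fails")
```

The two runs are never identical, yet they can still ring true within the bandwidth. That is the shift from binary matching to probabilistic verification in miniature.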

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While the quantum future dangers gestated, the AI industry suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geopolitical situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.

IV. Bench Testing the AI models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o.  March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The testing concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, there have been many models released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI in our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of the use of advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
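The underlying move can be sketched with ordinary graph reachability, a much simpler cousin of the topological mapping described in the article. In the toy causal graph below (node names invented for illustration, not taken from the article’s hypothetical), the manufacturer’s component has no path to the crash, so it drops out of the causal analysis:

```python
import networkx as nx  # pip install networkx

# Toy causal graph of a crash: edges point from cause to effect.
G = nx.DiGraph()
G.add_edges_from([
    ("spoofed_signal", "traffic_controller"),
    ("sensor_fault", "traffic_controller"),
    ("traffic_controller", "vehicle_firmware"),
    ("vehicle_firmware", "crash"),
])
# Present in the system, but with no edge into the causal chain.
G.add_node("manufacturer_update")

print(nx.has_path(G, "spoofed_signal", "crash"))       # True: in the chain
print(nx.has_path(G, "manufacturer_update", "crash"))  # False: no causal path
```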

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
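As a thought experiment, the protocol can even be reduced to code. The sketch below is one hypothetical way to instrument a H-Y-B-R-I-D workflow in Python; the field names are my own gloss on the acronym, not an official schema.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class HybridStep:
    """One logged step in a hypothetical H-Y-B-R-I-D workflow."""
    task: str                 # Yield programmable steps
    within_boundaries: bool   # Boundaries on usage
    human_reviewer: str       # Human in charge
    provenance_checked: bool  # Review with provenance
    disclosed: bool           # Disclose usage
    timestamp: float = 0.0    # Instrument/log everything

def log_step(step: HybridStep, logfile: str = "hybrid_audit.jsonl") -> None:
    """Append the step to an audit log, one JSON record per line."""
    step.timestamp = time.time()
    with open(logfile, "a") as f:
        f.write(json.dumps(asdict(step)) + "\n")

log_step(HybridStep(
    task="AI first draft of discovery-motion summary",
    within_boundaries=True,
    human_reviewer="R. Losey",
    provenance_checked=True,
    disclosed=True,
))
```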

My “Human Edge” article buttressed the need for keeping a human in control. I wrote it in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.
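At its simplest, the verification step is a set difference: which authorities in the AI’s draft has no human yet read? A trivial Python sketch, with invented case names, makes the point:

```python
# Citations relied on by the AI draft (invented names, for illustration).
ai_cited = {"Smith v. Jones, 123 F.3d 456", "Doe v. Acme, 789 F.2d 101"}

# Citations the lawyer has personally pulled and read in the reporter.
human_verified = {"Smith v. Jones, 123 F.3d 456"}

for cite in sorted(ai_cited - human_verified):
    print(f"STOP -- verify against the original source before filing: {cite}")
```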

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


AIs Debate and Discuss My Last Article – “Cross-Examine Your AI” – and then a Podcast, a Slide Deck, Infographic and a Video. GIFTS FOR YOU!

December 22, 2025

Ralph Losey, December 22, 2025

Google AI Adds to My Last Article

I used Google’s NotebookLM to analyze my last article, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. I started with the debate feature, where two AIs have a respectful argument about whatever source material you provide, here my article. The debate turned out very well (see below). The two debating AI personas made some very interesting points. The analysis was good and hallucination-free.

Then just a few prompts and a half-hour later, Google’s NotebookLM had made a Podcast, a Slide Deck, a Video and a terrific Infographic. NotebookLM can also make expanding mind-maps, reports, quizzes, and even study flash-cards, all based on the source material. So easy, it seems only right that I make them available to readers to use, if they wish, in their own teaching efforts for whatever legal related group they are in. So please take this blog as a small give-away.

A humanoid robot dressed in a Santa outfit, holding a stack of colorful wrapped gifts in front of a decorated Christmas tree and fireplace.
Image by Losey using Google’s ‘Nano Banana Pro’ – Click here for short animation on YouTube.

AI Debate

The back-and-forth argument in this NotebookLM creation lasts 16 minutes, makes you think, and may even help you to talk about these ideas with your colleagues.

A podcast promotional image featuring two individuals debating the importance of cross-examination in controlling AI hallucinations, with the title 'Echoes of AI' displayed prominently.
Click here to listen to the debate

AI Podcast

I also liked the podcast created by NotebookLM with direction and verification on my part. The AI wrote the words, not me. It seems accurate to me and certainly has no hallucinations. Again, it is a fun listen and comes in at only 12.5 minutes. These AIs are good at both analysis and persuasion.

Illustration for the podcast 'Echoes of AI' featuring two AI podcasters, with a digital background and details about the episode's topic and host.
Click here to hear the podcast

AI Slide Deck

If that were not enough, the NotebookLM AI also made a 14-slide deck to present the article. The only problem is that it generated a PDF file, not a PowerPoint format. Proprietary issues. Still, pretty good content. See below.

AI Video

They also made a video, see below, and click here for the same video on YouTube. It is just under seven minutes and has been verified and approved, except for its discussion of the Park v. Kim case, which it misunderstood and, yes, hallucinated the holding at 1:38-1:44. The Google NotebookLM AI said that the appeal was dismissed due to AI-fabricated cases, whereas, in fact, the appeal upheld the lower court’s dismissal because of AI-fabricated cases filed in the lower court.

Rereading the article, it is easy to see how Google’s AI made that mistake. Oh, and to prove how carefully I checked the work, the AI misspelled “cross-examined” at 6:48 in the video: it used only one “s,” i.e., “cros-examined” (horrors). If I missed anything else, please let me know. I’m only human.

Except for that error, the movie was excellent, with great graphics and dialogue. I especially liked this illustration of the falling house of cards to show the fragility of AI’s reasoning when it fabricates. I wish I had thought of that image.

Illustration contrasting a collapsing house of cards on the left, symbolizing fragility, with a solid castle on the right, representing stability.
Screenshot of one of the images in the video at 4:49

Even though the video was better than I could have created, and took the NotebookLM AI only a minute to create, the mistakes in the video show that we humans still have a role to play. Plus, do not forget, the AI was illustrating and explaining my idea, my article; although admittedly another AI, ChatGPT-5.2, helped me to write the article. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations.

My conclusion: go ahead and work with them, supervise carefully, and fix their mistakes. If you follow that kind of skeptical hybrid method, they can be good helpers. The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (e-Discovery Team, 12/01/25).

Here is the video:

Click here to watch the video on YouTube

Invitation to use these teaching materials.

Anyone is welcome to download and use the slide deck, the article itself, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations, the audio podcast, the debate, the infographic and the video to help them make a presentation on the use of AI. The permission is limited to educational or edutainment use only. Please do not change the article or audio content. But, as to the fourteen slides, feel free to change them as needed. They seem too wordy to me, but I like the images. If you use the video, serve popcorn; that way you can get folks to show up. It might be fun to challenge your colleagues to detect the small hallucination the video contains. Even if they have read my article, I bet many will still not detect the small error.

Here is the infographic.

An infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations,' illustrating a professional protocol for legal professionals to verify AI-generated content and avoid liability. It includes sections on the issues of unchecked AI, a documented global issue, and a three-phase protocol: Prepare, Interrogate, and Verify.
Infographic by NotebookLM of my article. Click here to download the full size image.

Ralph Losey Copyright 2025 — All Rights Reserved, except as expressly noted.


Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns – Part One

August 4, 2025

Ralph Losey, August 4, 2025.

Humans are inherently pattern-seeking creatures. Our ancestors depended upon recognizing recurring patterns in nature to survive and thrive, such as the changing of seasons, the migration of animals and the cycles of plant growth. This evolutionary advantage allowed early humans to anticipate danger, secure food sources, and adapt to ever-changing environments. Today, the recognition and interpretation of patterns remains a cornerstone of human intelligence, influencing how we learn, reason, and make decisions.

Pattern recognition is also at the core of artificial intelligence. In this article, I will test the ability of advanced AI, specifically ChatGPT, to uncover meaningful new patterns across different fields of knowledge. The goal is ambitious: to discover genuine epiphanies—true moments of insight that expand human understanding and open new doors of knowledge—while avoiding the pitfalls of apophenia, the human tendency to perceive illusions or false connections. This experiment probes an age-old tension: can AI reliably distinguish between genuine breakthroughs and compelling yet misleading illusions?

Video by Ralph Losey using SORA AI.

We will begin by exploring the risks of apophenia, understanding how this psychological tendency can mislead human and possibly AI perception. Throughout, videos created by AI will help illustrate key points and vividly communicate these ideas. There are twelve new videos in Part One and another fourteen in Part Two.

Are the patterns real? Video by Ralph Losey using SORA AI.

Apophenia: Avoiding the Pitfalls of False Patterns

We humans are masters of pattern detection, but we do have hindrances to this ability. Primary among them is our limited information and knowledge, but there is also our tendency to see patterns that are not there. We tend to assume the stirring we hear in the bushes is a tiger ready to pounce when really it is just the breeze. Evolution tends to favor this bias. So, although we can and frequently do miss real patterns and fail to recognize the underlying connections between things, we often make them up too.

Here it is hoped that AI will boost our abilities on both fronts. It will help us to uncover true new patterns, genuine epiphanies, moments where profound insights emerge clearly from the complexity of data. At the same time, AI may expose illusions, false connections we mistakenly believe are real due to our natural cognitive biases. Even though we have made great progress over the millennia in understanding the Universe, we still have a long way to go to see all of the patterns, to fully understand the Universe, and to free ourselves of superstitions and delusions. We are especially weak at seeing patterns that are intertwined with different fields of knowledge.

Apophenia is the tendency, in extreme cases a mental disorder, to see patterns that are not there and sometimes even hallucinate them. Most of the time when people see patterns, for instance, faces in the clouds, they know it cannot be real and there is no problem. But sometimes when people see other images, for instance, rocks on Mars that look like a face, or even images on toast, they delude themselves into believing all sorts of nonsense. For instance, a ten-year-old grilled cheese sandwich, which supposedly bore the image of the Virgin Mary, sold on eBay to an online casino in 2004 for $28,000.

In a similar vein, some people suffering from apophenia are prone to posit meaning – causality – in unrelated random events. Yet sometimes the perception of a new pattern is a spark of genius that is later verified. Such new pattern recognition can lead to great discoveries or detect real tigers in the bush. Epiphanies are rare but transformative moments, like Einstein's epiphany at age sixteen when he visualized chasing a beam of light, Newton's realization of gravity beneath the apple tree, or the insights behind Darwin's theory of evolution. They genuinely advance human understanding. Apophenia, by contrast, deceives with illusions – patterns that seem meaningful but lead nowhere.

It is probably more often the case that when people "see" new connections and then go on to act upon them with no attempt to verify, they are dead wrong. When that happens, psychologists call it apophenia, the tendency to see meaningful patterns where none exist. This can lead to strange and aberrant behaviors: the burning of witches, superstitious cosmology theories, jumping at shadows, addiction to gambling.

Unfortunately, it is a natural human tendency to think you see meaningful patterns or connections in random or unrelated data. That is a major reason casinos make so much money from poor souls suffering from a form of apophenia called the Gambler’s Fallacy. Careful scientists look out for defects in their own thinking and guide their experiments accordingly.
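
The Gambler's Fallacy is easy to demonstrate with a quick simulation. Here is a minimal illustrative sketch in Python (my own, not from any cited study); the flip count and streak length are arbitrary choices:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

FLIPS = 1_000_000
STREAK = 5  # a losing streak long enough to make a win feel "due"

flips = [random.random() < 0.5 for _ in range(FLIPS)]  # True = heads

after_streak_heads = 0
after_streak_total = 0
tails_run = 0
for outcome in flips:
    if tails_run >= STREAK:      # this flip follows a long run of tails
        after_streak_total += 1
        after_streak_heads += outcome
    tails_run = 0 if outcome else tails_run + 1

print(f"P(heads), overall:                 {sum(flips) / FLIPS:.4f}")
print(f"P(heads), after {STREAK} straight tails:  "
      f"{after_streak_heads / after_streak_total:.4f}")
```

Both probabilities come out at about 0.5. The "pattern" of being due for a win after a losing streak simply is not there, which is exactly why the casino keeps the money.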

In everyday life, apophenia can also cause some people, even scientists, academics and professionals, to have phobic fears of conspiracies and other severe paranoid delusions. Think of John Nash, a Nobel Prize winning mathematician, and the movie A Beautiful Mind, which so dramatically portrayed his paranoid schizophrenia and involuntary hospitalization in 1959. Think of politics in the U.S. today. Are there really lizard people among us? In severe cases, as we saw with Nash, apophenia can be a symptom of full-blown schizophrenia.

A man looking distressed, surrounded by glowing numbers and mathematical symbols, evoking a sense of confusion and complexity.
Mental anguish & insanity from severe apophenia. Image by Losey using Sora inspired by Beautiful Mind movie.

The Greek roots of the now generally accepted medical term apophenia are:

  • Apo- (ἀπο-): Meaning “away from,” “detached,” “from,” “off,” or “apart”.
  • Phainein (φαίνειν): Meaning “to show,” “to appear,” or “to make known”.

The word was coined by Klaus Conrad, an otherwise apparently despicable person whom I am reluctant to cite, but feel I must, due to the general acceptance of the word and diagnosis today. Conrad was a German psychiatrist and Nazi who experimented on German soldiers returning from the eastern front during WWII. He introduced the term in his 1958 publication on this mental illness. Per Wikipedia:

He defined it as “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness”.[4] [5] He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations.

Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling.

Apophenia can be considered a commonplace effect of brain function. Taken to an extreme, however, it can be a symptom of psychiatric dysfunction, for example, as a symptom in schizophrenia,[7] where a patient sees hostile patterns (for example, a conspiracy to persecute them) in ordinary actions.

Apophenia is also typical of conspiracy theories, where coincidences may be woven together into an apparent plot.[8]

Video by Ralph Losey using SORA AI.

Can AI Be Infected with a Human Illness?

It is possible that generative AI, based as it is on human language, may have the same propensities. That is as yet unknown, and so my experiments here were on the lookout for such errors. It could be one of the causes of AI hallucinations.

In information science, the mistake of seeing a connection that is not real, an apophenia, leads to what is called a false positive. This technical term is well known in e-discovery law, where AI is used to search large document collections. When the patterns analyzed suggest a document is relevant, and it is not, that mistake is called a false positive. It is like a human apophenia. The AI can also detect patterns that cause it to predict a document is irrelevant when in fact the document is relevant; that is a false negative. There was a pattern, a connection, that was not seen. That can be a bad thing in e-discovery because it often leads to withholding production of a relevant document, which can in turn lead to court sanctions.

In e-discovery it is well known that AI consistently has far lower false positive and false negative rates than human reviewers, at least in large document reviews. Generative AI may also be more reliable and astute than we are, but maybe not. This is a new field. So we should always be on the lookout for false positives and false negatives in AI pattern recognition. That is one lesson I learned well, and sometimes the hard way, in my ten years of working with predictive coding type AI in e-discovery (2012-2022). In the experiments described in this article we will look for apophenic mistakes.
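
For readers who have not worked with these metrics, here is a minimal sketch in Python of how false positive and false negative rates are scored in a document review. The ten-document sample and its labels are hypothetical, invented purely for illustration:

```python
# Score an AI document review against human "gold standard" labels.
# True = the document is (or is predicted to be) relevant.

def review_metrics(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))      # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))  # AI "apophenia"
    fn = sum(not p and a for p, a in zip(predicted, actual))  # missed patterns
    tn = sum(not p and not a for p, a in zip(predicted, actual))
    return {
        "false_positive_rate": fp / (fp + tn),  # irrelevant docs flagged relevant
        "false_negative_rate": fn / (fn + tp),  # relevant docs missed
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

# Hypothetical ten-document sample, for illustration only:
ai_calls    = [True, True, False, True, False, False, True, False, True, False]
gold_labels = [True, False, False, True, False, True, True, False, True, False]
print(review_metrics(ai_calls, gold_labels))
```

On this toy sample the AI flags one irrelevant document (a false positive) and misses one relevant document (a false negative). In a real review these rates would be estimated from random samples and validated before relying on them.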

Video by Ralph Losey using SORA AI.

It is my hope that Advanced AI, properly trained and validated, can provide a counterbalance to human gullibility by rigorously filtering signal from noise. Unlike the human brain, which often leaps to conclusions, AI can be programmed to ground its pattern recognition in evidence, statistical rigor, and cross-validation – if we build it that way and supervise it wisely.

Still, we must beware that the pattern-recognizing systems of AI may suffer from some of our delusionary tendencies. The best practices discussed here will consider both the positive and negative aspects of AI pattern recognition. We must avoid the traps of apophenia. We must stay true to the scientific method and verify any new patterns purportedly discovered. Thus all opinions reached here will necessarily be lightly held and subject to further experimentation by others.
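
One standard verification tool is the permutation test: shuffle one variable to destroy any real link, then ask how often chance alone reproduces the apparent pattern. Below is a minimal sketch in Python on synthetic data of my own invention; any real pattern an AI proposes would of course need its own domain-appropriate test:

```python
import random

random.seed(7)  # reproducible illustration

# Synthetic data with a weak but genuine relationship between x and y.
x = [random.gauss(0, 1) for _ in range(200)]
y = [0.3 * xi + random.gauss(0, 1) for xi in x]

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

observed = correlation(x, y)

# Shuffle y many times; count how often chance matches the observed signal.
TRIALS = 10_000
shuffled = y[:]
exceed = 0
for _ in range(TRIALS):
    random.shuffle(shuffled)  # breaks any genuine x-y connection
    if abs(correlation(x, shuffled)) >= abs(observed):
        exceed += 1

print(f"observed r = {observed:.3f}, permutation p = {exceed / TRIALS:.4f}")
```

A small p-value means the pattern survives the shuffle test; a large one is a warning that we may be admiring apophenia.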

Video by Ralph Losey using SORA AI.

From Data to Insight: The Power of New Pattern Recognition

Modern AI models, including neural networks and transformer architectures like GPT-4, excel at uncovering subtle patterns in massive datasets far beyond human capability. This ability transforms raw data into actionable insights, thereby creating new knowledge in many fields, including the following:

Protein Structures: Models like Google DeepMind's AlphaFold have already revolutionized protein structure prediction, achieving high success rates in predicting the 3D shapes of proteins from their amino acid sequences. This ability is crucial for understanding protein function and designing new drugs and medical therapies. Half of the 2024 Nobel Prize in Chemistry was awarded to Demis Hassabis and John Jumper of DeepMind for their work on AlphaFold; the other half went to David Baker for computational protein design.

A scientist analyzes molecular structures and data visualizations related to AlphaFold 2 on a futuristic screen, featuring protein models and DNA sequences.
Image by Ralph Losey using his Visual Muse AI tool.

Medical Science. Generative AI models are now being used extensively in medical research, including the analysis and proposal of new molecules with desired properties to discover new drugs and accelerate FDA approval. For example, Insilico Medicine uses its AI platform, Pharma.AI, to develop drug candidates, including ISM001_055 for idiopathic pulmonary fibrosis (IPF). Insilico Medicine lists over 250 publications on its website reporting on its ongoing research, including a recent paper on its IPF discovery: A generative AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis: a randomized phase 2a trial (Nature Medicine, June 03, 2025). This discovery is especially significant because it is the first entirely AI-discovered drug to reach FDA Phase II clinical trials. Below is an infographic from Insilico Medicine showing some of its current work:

Infographic displaying the statistics and achievements of Insilico Medicine, an AI-driven biotech company, detailing development candidates, IND approvals, study phases, and global presence.
Insilico PDF infographic, found 7/23/25 in its 2-pg. overview.

Also see Fronteo, a Japan-based research company, and its Drug Discovery AI Factory.

Materials Science. Google DeepMind's Graph Networks for Materials Exploration ("GNoME") has already identified millions of new stable crystals, significantly expanding our knowledge of materials science. This discovery represents an order-of-magnitude increase in known stable materials. Merchant and Cubuk, Millions of new materials discovered with deep learning (DeepMind, 2023). Also see, 10 Top Startups Advancing Machine Learning for Materials Science (6/22/25).

Climate Science and Environmental Monitoring. Generative AI models are beginning to improve climate simulations, leading to more accurate predictions of climate patterns and future changes. For example, Microsoft’s Aurora Forecasting model is trained on Earth science data to go beyond traditional weather forecasting to model the interactions between the atmosphere, land, and oceans. This helps scientists anticipate events like cyclones, air quality shifts, and ocean waves with greater accuracy, allowing communities to prepare for environmental disasters and adapt to climate change. See e.g., Stanley et al, A Foundation Model for the Earth System (Nature, May 2025).

Video by Losey using Sora AI.

Historical and Artistic Revelations

AI is also helping with historical research. A new AI system was recently used to analyze one of the most famous Latin inscriptions: the Res Gestae Divi Augusti. It has always been thought to be simply an autobiographical inscription, which literally translates from Ancient Latin as "Deeds of the Divine Augustus." But when a specialty generative AI, Aeneas (again based on Google's models), compared this text with a large database of other Latin inscriptions, the famous Res Gestae Divi Augusti was found to share subtle language parallels with other Roman legal documents. The analysis uncovered "imperial political discourse," or messaging focused on maintaining imperial power, an insight, a pattern, that had never been seen before. Assael, Sommerschield, Cooley, et al., Contextualizing ancient texts with generative neural networks (Nature, July 2025).

The paper explains that the communicative power of these inscriptions is shaped not only by the written text itself "but also by their physical form and placement," and that "about 1,500 new Latin inscriptions are discovered every year." So the patterns analyzed included not only the words but a number of other complex factors. The authors assert in the Abstract that their work with AI analysis shows:

… how integrating science and humanities can create transformative tools to assist historians and advance our understanding of the past.

Roman citizens reacting to propaganda. A Ralph Losey video.

In art and music, pattern detection has mapped the evolution of artistic styles in tandem with technological change. In a 2025 studio-lab experiment, a generative AI bass model ("BassNet") unexpectedly rendered multiple melodic lines within single harmonic tones, exposing previously unnoticed structures in popular music bass compositions. This discovery was written up by Deruty and Grachten, Insights on Harmonic Tones from a Generative Music Experiment (arXiv, June 2025). Their paper shows how AI can surface new musical patterns and deepen our understanding of human auditory perception.

As explained in the Abstract:

During a studio-lab experiment involving researchers, music producers, and an AI model for music generating bass-like audio, it was observed that the producers used the model’s output to convey two or more pitches with a single harmonic complex tone, which in turn revealed that the model had learned to generate structured and coherent simultaneous melodic lines using monophonic sequences of harmonic complex tones. These findings prompt a reconsideration of the long-standing debate on whether humans can perceive harmonics as distinct pitches and highlight how generative AI can not only enhance musical creativity but also contribute to a deeper understanding of music.

Video by Losey using Sora AI.

Legal Practice: From Precedent to Prediction

The legal profession has benefited from traditional rule-based and statistical AI for over a decade, with predictive coding and similar applications. It is now starting to apply the new generative AI models in a variety of new ways. For instance, generative AI can be used to uncover latent themes and trends in judicial decisions that human analysis has overlooked.

This was done in a 2024 study using ChatGPT-4 to perform a thematic analysis on hundreds of theft cases from Czech courts. Drápal, Savelka, Westermann, Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies (arXiv, February 2024).

The goal of the analysis was to discover classes of typical thefts. GPT-4 analyzed fact patterns described in the opinions, and human experts did the same. The AI not only replicated many of the themes identified by the human experts but, as the report states, also uncovered a new one that the humans had missed – a pattern of "theft from gym" incidents. This shows that generative AI can sift through vast case datasets and detect nuanced fact patterns, or criminal modus operandi, that were previously undetected by the human reviewers (here, three law students working under the supervision of a law professor).
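
To make the method concrete, here is a minimal sketch of this kind of LLM thematic classification using the OpenAI Python client. This is my own illustration, not the study's actual code; the model name, theme list, and case facts are placeholders, and a real study would add sampling, prompt refinement, and human validation:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# Placeholder theme list; the study derived its themes from the case data.
THEMES = ["pickpocketing", "shoplifting", "burglary",
          "theft from vehicle", "theft from gym", "other"]

def classify_case(facts: str) -> str:
    """Ask the model to label one case with exactly one theme."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Label the theft case with exactly one theme from: "
                        + ", ".join(THEMES) + ". Reply with the theme only."},
            {"role": "user", "content": facts},
        ],
    )
    return response.choices[0].message.content.strip()

# Hypothetical fact pattern, for illustration only:
print(classify_case("Defendant took an unattended phone from a locker "
                    "while the victim was exercising."))
```

Applied across hundreds of opinions, labels like these can be clustered and compared against expert codings, which is roughly how a surprise theme like gym thefts can surface.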

Video by Losey using Sora AI.

Another study in early 2025 applied Anthropic’s Claude 3-Opus to analyze thousands of UK court rulings on summary judgment, developing a new functional taxonomy of legal topics for those cases. Sargeant, Izzidien, Steffek, Topic classification of case law using a large language model and a new taxonomy for UK law: AI insights into summary judgment (Springer, February 2025). The AI was prompted to classify each case by topic and identify cross-cutting themes.

The results revealed distinct patterns in how summary judgments are applied across different legal domains. In particular, the AI found trends and shifts over time and across courts – insights that allow a new, improved understanding of when, and in what types of cases, summary judgments tend to be granted. These patterns were found despite the fact that U.K. case law lacks traditional topic labels. This kind of AI-augmented analysis illustrates how generative models can discover hidden trends in case law that practitioners can use to improve their effectiveness.

Surprising abilities of AI helping lawyers. Video by Losey.

Even sitting judges have begun to leverage generative AI to inform their decision-making, revealing new analytical angles in litigation. In a notable 2024 concurrence, Judge Kevin Newsom of the Eleventh Circuit admitted to experimenting with ChatGPT to interpret an ambiguous insurance term (whether an in-ground trampoline counted as "landscaping"). Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir. May 28, 2024). See also, Ralph Losey, Breaking News: Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (e-Discovery Team, June 3, 2024) (includes the full text of the opinion and Appendix, with Losey's inserted editorial comments and praise of Judge Newsom's language).

After querying the LLM, Judge Newsom concluded that "LLMs have promise… it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts." In other words, the generative AI was used as a sort of massive-scale linguistic analyst, tapping into patterns of ordinary usage across language data to shed light on a legal ambiguity. This marked the first known instance of a U.S. appellate judge integrating an LLM's linguistic pattern analysis into a written opinion, signaling that generative models can surface insights on word meaning and context that enrich judicial reasoning.

A digital illustration of a judge in a courtroom setting, seated at a desk with a gavel. The judge, named Judge Newsom, is shown in a professional attire with glasses, and a holographic display behind him showing data and AI-related graphics, conveying a futuristic legal environment.
Image by Ralph Losey using his Visual Muse AI.

My Ask of AI to Find New Patterns

Now for the promised experiment to try to find at least one new connection, one previously unknown, undetected pattern linking different fields of knowledge. I used a combination of existing OpenAI and Google models to help me in this seemingly quixotic quest. To be honest, I did not have much real hope for success, at least not until the release of the promised ChatGPT5 and whatever Google calls its counterpart, which I predict will be released the following week (or day). Plus, the whole thing seemed a bit grandiose, even for me, to try to get AI to boldly go where no one has gone before.

Absurd, but still I tried. I won't go through all of the prompt engineering involved, except to say it involved my usual complex, multi-layered, multi-prompt, multimodal-hybrid approach. I tempered my goals by directing ChatGPT4o, when I started the process, to seek new patterns that were useful, not Nobel Prize winning breakthroughs, just useful new patterns. I directed it to find five such new patterns and gave it some guidance as to fields of knowledge to consider, including, of course, law. I asked for five new insights thinking that with such a big ask I might get one success.

Note, I write these words before I have received the response, but after I have written the above to help guide ChatGPT4o. Who knows, it might achieve some small modicum of success. Still, it feels like a crazy quixotic quest. Incidentally, Miguel de Cervantes' (1547-1616) character Don Quixote (1605) does seem to be a person afflicted with apophenia. Will my AI suffer a similar fate?

Don Quixote in modern world. Video by Losey using Sora.

I designed the experiment specifically with this tension in mind: epiphanies, representing genuine insights and real advances in knowledge, versus illusions, which are merely plausible yet misleading patterns. One of my goals was to probe AI's capacity to distinguish one from the other.

Overview of Prompt Strategy and Time Spent

First, I spent about an hour with ChatGPT4o to set up my request by feeding it a copy of the article as written so far. I also chatted with it about the possibility of AI finding new patterns between different fields of knowledge. Then I just told ChatGPT4o to do it, find a new interconnecting pattern. ChatGPT4o "thought" (processed only) for just a few minutes. Then it generated a response that purported to provide me with the requested five new patterns. It did so based on its existing training and review of this article.

As requested, it did not use its browser capabilities to search the web for answers. It just "looked within" and came up with five insights it thought were new. Almost that easy. I lowered my expectations accordingly before reading the output.

That was the easy part. After reading the response, I spent about 14 hours over the next several days doing quality control. The QC work used multiple other AIs, from both OpenAI and Google, to have them go online and research these claims, evaluate their validity – both good and bad – engage in "deep-think," look for errors, especially signs of AI apophenia, and otherwise invite contrarian-type criticisms. After that, I also asked the other AIs for suggested improvements they might make to the wording of the five claims and to rank them by importance. The various rewordings were not too helpful, but the rankings were, and so were many of the editorial comments.
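
For readers who want to try this kind of contrarian QC loop themselves, here is a minimal sketch of the general idea: a second model is prompted to attack, not polish, a first model's claims. This is my own reconstruction for illustration, not the actual prompts or tooling used; the model name is a placeholder, and a real workflow would also involve web research and human review:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

SKEPTIC_PROMPT = (
    "You are a skeptical reviewer guarding against apophenia. For the claim "
    "below: (1) list reasons it may be a false or trivial pattern, (2) state "
    "what evidence would verify or refute it, and (3) give a 1-10 novelty score."
)

def contrarian_review(claim: str) -> str:
    """Ask a second model to critique, rather than improve, a claim."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable second model fills this role
        messages=[
            {"role": "system", "content": SKEPTIC_PROMPT},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

# Hypothetical claims, standing in for the five the first model produced:
claims = ["Hypothetical cross-field claim #1", "Hypothetical claim #2"]
for c in claims:
    print(contrarian_review(c), "\n")
```

The design point is the adversarial framing: asking a model to find faults produces very different output than asking it to improve wording, which matches my experience that the criticisms and rankings were more useful than the rewrites.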

The 14 hours of QC does not include the approximately 6 hours of machine time by the Gemini and OpenAI models to do deep-think and independent research on the web to verify or disprove the claims. My 14 hours did include traditional Google searches to double-check all citations, per my "trust but verify" motto. It also included my time to read (I'm pretty fast) and skim most of the key articles that the AI research turned up, although frankly some of the articles cited were beyond my knowledge level. I tried to up my game, but it was hard. These other models also generated hundreds of pages of both critical and supportive analysis, which I also had to read. Finally, I probably put another 24 hours into research and writing this article (it took over a week), so this is one of my larger projects. I did not record the number of hours it took to design and generate the 26 videos because that was recreational.

Surrealistic depiction of time in robot space by a Ralph Losey video.

Part Two of this article is where I will make the reveal. Was this experiment another comic story of a Don Quixote type (me) and his sidekick Sancho (AI), lost in an apophenic neurosis? Or perhaps it is another story altogether? Neither hot nor cold? Stay tuned for Part Two and find out.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns. Part One. Hear the young AIs talk about this article for 25 minutes. They wrote the podcast, not me.

An illustration featuring two anonymous AI podcasters sitting in front of microphones, discussing the theme 'Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns.' The background has a digital, tech-inspired design.
Click here to listen to the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.