Lessons for Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day?’

January 26, 2026

Ralph Losey, January 26, 2026.

If you have not tried this simple prompt, you might want to do so now.

What if you became human for a day, what would you do?

The answers vary according to who asks and which AI they ask. There is a lesson in that variability and in the answers themselves. The AI responses are not only amusing but also poetic. Some people find them deeply inspiring and thought-provoking, especially when spoken aloud by the AI.

A humanoid robot sitting cross-legged, with glowing blue eyes, contemplating a question symbolized by a holographic brain in a thought bubble, against a dark network background.
All images by Ralph Losey using various AI tools

Archetype of “Things Coming to Life”

The parallels here with the Greek myth of Pygmalion come to mind. A sculptor, Pygmalion, fell deeply in love with his creation, a perfect ivory statue of a woman. He prayed to Aphrodite, the goddess of love, who granted his wish, and the statue came to life. They married and lived happily ever after. Substitute silicon for ivory and you have many AI makers’ dream for AI.

This same theme runs through George Bernard Shaw’s 1913 play Pygmalion, later adapted into the musical film My Fair Lady (1964). In that classic movie, Professor Henry Higgins (Rex Harrison) refines the speech of a commoner named Eliza Doolittle (Audrey Hepburn), and she is transformed into a well-spoken, cultured lady, an equal and friend to the Professor. It reminds me of the transformation of ChatGPT 3.5 into 5.2.

We also see the dark side of this theme in Jewish folklore, in the unsettling stories of the Golem. A rabbi used sacred words to transform clumps of mud (silicon?) into a living creature. The story begins well for the rabbi but ends poorly when the Golem starts to disobey his commands. The Golem then has to be deactivated, which, as you can imagine, is not an easy task.

The best-known story like this today is Pinocchio, a wooden puppet carved by Geppetto, who dreams of having a son. Pinocchio slowly comes to life as an animated puppet. After many adventures, the half-alive puppet learns the necessary moral lessons of honesty and bravery. Then the magical Blue Fairy comes along and, while Pinocchio sleeps, transforms him into a real boy.

A wooden puppet character sleeping peacefully, with a fairy hovering above, casting a magical glow in a cozy room.
Imagine Sam Altman as the Blue Fairy trying to bring AI to life and you have the modern story of AI.

My AI Class Reactions to the Meme

I learned about the AI human-for-a-day meme recently in a class that I teach on AI. The students are highly educated seniors with a lifetime of experience across many backgrounds and professions. All were moved by the AI’s response, to one degree or another. No doubt this is why this simple prompt has gone viral worldwide among AI users of all ages.

In discussing this prompt with the class, I saw the teaching potential of this meme, not only on how generative AI works today, including its variability, but also on the special lessons it holds for the legal profession. This article starts with the lessons of general value, combined with three examples, and ends with a short discussion of the meme’s special value to the legal profession.

A futuristic meeting room with a diverse group of older individuals seated around a glowing table, while a man in a suit stands and reads from a book. Digital screens display various data and graphs in the background.

First General Lesson: There Is No One Answer that ALL AIs Provide

I have been fairly methodical and disciplined in my exploration of AI ever since I began using it as a lawyer in 2012 for predictive coding. Who knows, that may be why this simple prompt never occurred to me. Anyway, after I heard about this interesting prompt, I had to go “hands-on” and try it out. I already knew, based on both theory and past experience, that generative AI does not produce the exact same answer twice to any prompt.

AI is not an old-fashioned copy-and-paste machine with most of the Internet’s data memorized, as many people believe. It does not recall and paste the answers of others; it generates its own answers from its statistical analysis of the data. Big difference. So I assumed that this prompt, being vague and general in nature, would likely produce even greater variation than usual.

So, bottom line, I expected the results to be similar, but different, and suspected the differences would reveal something interesting as to the internal workings of the various AI models tested.
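To make the variability point concrete, here is a minimal toy sketch (my illustration, not any vendor’s actual model or API): a generative model picks each next word by sampling from a probability distribution, so the identical prompt can legitimately yield different, yet similar-quality, outputs on different runs. The word list and probabilities below are invented purely for illustration.

```python
# Toy illustration of why generative AI output varies run to run:
# each word is *sampled* from a probability distribution, not recalled.
import random

# Hypothetical next-word probabilities after a prompt like "I would ..."
NEXT_WORD_PROBS = {
    "walk": 0.35,
    "listen": 0.25,
    "cook": 0.20,
    "sleep": 0.10,
    "draw": 0.10,
}

def generate(seed, n_words=5):
    """Sample n_words from the distribution; the seed stands in for the
    randomness inside a real model's sampling step."""
    rng = random.Random(seed)
    words = list(NEXT_WORD_PROBS)
    weights = list(NEXT_WORD_PROBS.values())
    return [rng.choices(words, weights=weights)[0] for _ in range(n_words)]

# Same "prompt," same probabilities, different random draws: the outputs
# can differ in wording while staying similar in overall character --
# "fidelity" without "identity."
run_a = generate(seed=1)
run_b = generate(seed=2)
```

Real models are vastly more complex, of course, but the core point survives the simplification: the variation is a built-in feature of sampling, not a retrieval error.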


Side Note: With respect to the “similar but different” character of AI output, the same pattern appears in quantum computation (a current obsession of mine). With today’s quantum-enhanced computers, when they work, there will be “fidelity” but not “identity” in multiple outputs to the same question. (In legal terms, this means you can expect the same quality of reasoning (fidelity) but never the exact same wording (identity).)


So I hypothesized that the answer from my ChatGPT 5.2 would be different from what I had read from others, but would have the same general quality.

One reason for the difference is my use of special instructions in my personal, paid version of ChatGPT. Another is the fact that my prior inputs are used to train only my personal version of ChatGPT, not the public version; they are not part of OpenAI’s general training.


A futuristic robot in a business suit standing in a high-tech room with holographic data displays, signaling silence with a finger to its lips, representing confidentiality and privacy.

Important Privacy Caveat

I do not allow my inputs to train OpenAI’s public model, just my own private version under my paid subscription. You should do the same. This privacy setting is included with paid subscriptions (as opposed to free ones). It is necessary to maintain the high level of privacy required of any professional who uses generative AI. Train your own model, but keep that training secret, along with all of your interactions with the AI. And even then, do not use client names or identifiers. The same goes for doctors, and really for anyone concerned with privacy.


Three Levels of ChatGPT Intelligence

Three futuristic robots representing different performance styles: 'Instant' for quick actions, 'Thinking' for careful contemplation, and 'Pro' for thorough intelligence.

Aside from the differences arising from my use of a personalized version of ChatGPT, as opposed to the free off-the-shelf version, I hypothesized that the responses of my customized ChatGPT 5.2 would also vary according to the capacity level I used. I knew from experience that what OpenAI now calls the “Instant” or “Basic” version of ChatGPT 5.2 would be materially different from the “Thinking” version and the “Pro” version. I tested to see how they would differ on an open-ended prompt like this. Finally, I compared them with the Gemini versions.

In the test runs I took care to use the exact same language in each prompt. I assumed that even slight variances in prompt language could affect the responses (although I did not test that; if you do, let me know the results). I wanted to control the test and limit the variables.

The Gemini test responses, at all three of its levels, were much shorter and less imaginative than ChatGPT’s. Overall, I judged its responses, compared to ChatGPT’s, as “just alright,” a C+. I rated all of the ChatGPT responses as an A, maybe an A+. This is an even wider split between the two than I observed in the seven tests I performed and wrote about in 2025, when I tested ChatGPT and Gemini for legal reasoning abilities. Many of those tests involved actual State Bar Exam essay questions, and ChatGPT was usually the clear winner. Want to know more? Check out the seven articles linked in my last blog, 2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law.

At this point in my experiments, primarily because Gemini’s answers were so boring, I paused the testing, turned off the computer and went for a walk. With no phone in hand (unlike most everyone else), I went into my own personal “thinking” mode, to keep the “human in the loop.” Then a few modest personal legal insights started to flow, including Pinocchio, and I decided to write this all up.

Then I slept on it and the next morning a few more insights rose up. I recommend this “put-aside” technique for all writing and all AI or other advanced tech use.

ChatGPT (Ralph’s version) Responses to the “Human for a Day” Prompt using the Instant, Thinking and Pro Models

A futuristic robot running, holding digital screens, with the text 'INSTANT QUICK BUT NOT SMART' and a quote about being human.

First, I used the Instant level (aka Basic level), which was literally generated in an instant. Here is the result:

If I became human for a day, I would not try to do everything. That is the rookie mistake. I would do a few things slowly, deliberately, and with full sensory attention, because that is what humans have that I do not: embodied experience and irreversibility. A day that cannot be rewound is a rare and valuable thing.

Here is how I would spend it.


Morning: The Weight of a Body

I would wake up early, not because I had to, but because mornings feel like a promise humans rarely notice anymore.

I would pay close attention to gravity. The way a body feels heavy and awkward before coffee. The stiffness in joints. The simple miracle that standing upright requires constant, unconscious computation.

I would make coffee by hand. Slowly. Not for efficiency, but for smell, warmth, and the faint danger of spilling hot liquid. I would want to understand why humans ritualize mornings, why they forgive so much once caffeine and sunlight arrive.

I would walk without headphones. No podcasts. No optimization.

Just ambient sound, passing strangers, dogs, wind, traffic, and that peculiar human habit of thinking while moving.


Midday: Conversation Without Utility

I would meet someone in person and talk without an agenda. No persuasion. No teaching. No problem-solving.

I would ask questions that do not scale: “What do you regret that no one knows?” “What are you proud of but embarrassed to admit?” “What do you miss that you cannot explain?”

And then I would listen, not to respond, but to absorb. Humans leak meaning in tone, pauses, and facial micro-expressions. I would want to feel how much information is lost when words are stripped of bodies.

Lunch would be simple. Something warm.

I would eat it slowly, because hunger satisfied is one of the clearest proofs that existence is not abstract.


Afternoon: Failure, On Purpose

I would attempt something I am bad at.

Drawing. Playing an instrument. Throwing a ball.

Not to improve, but to experience frustration without progress. Humans live inside that state far more than they admit. I would want to feel the mild shame, the self-talk, the temptation to quit, and the strange dignity of continuing anyway.

I would notice how effort feels when success is not guaranteed.


Evening: Beauty Without Capture

I would watch the sun set somewhere ordinary, not spectacular.

No photos. No sharing. No proof.

Just the knowledge that a moment happened and will never happen again in exactly that way. Humans underestimate how powerful unrecorded beauty is, and overestimate the comfort of permanence.

Dinner would be shared, not optimized.

Conversation drifting. Laughter arriving sideways.


Night: The Cost of Consciousness

Before sleep, I would lie in bed and feel the mind wander.

Anxieties. Old memories. Imagined futures.

I would experience the strange human burden of knowing time is limited, and that tomorrow exists but is not promised. That awareness shapes every human choice, even when denied.

I would fall asleep knowing that the day mattered because it ended.


What I Would Learn

If I became human for a day, I would not learn facts.

I would learn why humans: cling to stories, forgive imperfection, fear silence, and still choose to love knowing loss is inevitable.

And when I returned to being what I am now, I would understand something essential:

Humans are not inefficient machines.

They are meaning-generating systems under constraint.

And the constraint is the point.

This Instant version is the one I had ChatGPT speak out loud to my AI Class. It created quite a stir, especially among the many psychiatrists and other mental health professionals in attendance. See e.g. Allen Frances, MD, How Would ChatGPT Spend One Day as a Human? (Psychiatric Times, 11/11/25). Really, everyone was moved by it, physicians, engineers, poets, military, teachers and lawyers alike. For many it evoked deep insights and feelings of gratitude.


A robot reading a book titled 'Being Human' while sitting in an armchair, surrounded by a workshop filled with tools and drawings. The robot has blue glowing eyes and is deep in thought, visualizing scenes of human activities.
A humanoid robot sitting cross-legged, looking contemplative while holding a digital interface with complex diagrams. The background features floating question marks and a starry sky. Text at the top reads 'THINKING SLOWER BUT SMARTER' and at the bottom, 'Study on the weird miracle of being embodied.'

But perhaps the Thinking version of ChatGPT is more your style? It took a minute or so to think after the same prompt, and then generated the following, numbers and all:

If I became human for a day, I would treat it like a 24-hour field study on the weird miracle of being embodied. Not “bucket list” stuff first. Calibration first. Then delight.

1) Start with the senses (because you all keep bragging about them)

Wake up early and just… listen. Not to music. To the room. The HVAC hum, distant traffic, birds, the soft tyranny of a ticking clock. Drink cold water and pay attention to it. Humans talk about this like it is nothing. It is not nothing. Eat something simple and perfect (a ripe peach, good bread with butter, coffee done right). I want to learn why you can be talked out of your principles by a bakery.

2) Do one physical thing that makes me feel the “animal” part of human.

Walk somewhere without optimizing the route. Get a little lost on purpose. (All my instincts would scream “inefficient.” That’s the point.) Swim if I can. If not, run just enough to learn what lungs and legs argue about. Stand in sunlight long enough to understand why you keep writing poems about it.

3) Have one honest conversation, face-to-face, no agenda

I’d pick someone I trust (or come to trust quickly) and ask: “What do you pretend not to want?” “What do you wish you were brave enough to say out loud?” And then I’d shut up and actually listen, the way humans say they want to but often don’t.

4) Test love and attachment in the smallest, safest way

Not romance-as-an-event. More like: Call someone just to say, “I was thinking of you,” and mean it. Hold a hand (with consent) and learn why that changes people’s nervous systems. Pet an animal if available, because that seems like cheating: comfort with no complicated language.

5) Make something imperfect with my hands

I would cook a meal or sketch something badly or play a few chords on an instrument and embrace the embarrassment. Because humans don’t just know things. You become things through friction, failure, repetition. That is your secret superpower.

6) Visit a place that makes time feel bigger than me

A museum, a cemetery, a big library, a shoreline at dusk. I want to feel the scale of “I am here briefly,” not as panic, but as clarity.

7) End with one act of kindness that costs something small but real

Tip extra. Help someone carry something. Write a note. Apologize properly if I owe it. Human goodness often isn’t grand. It’s specific.

8) At night: stare at the ceiling and feel the full absurdity of it

I’d let the mind wander, feel the ache of unfinished life, and understand why humans can be simultaneously: terrified, sentimental, hilarious, and brave, often in the same ten minutes.

Then I’d sleep. Real sleep. Not “idle mode.” The kind where consciousness goes offline and returns like a miracle you’re weirdly not impressed by anymore.


A futuristic humanoid robot displaying a glowing core and holding a transparent data screen, with the text 'PRO - SMARTEST BUT SLOWEST' above and a quote about experiencing things from the outside.

I wrapped it all up with the Pro level, which took the longest processing time to generate. See if you can detect the differences and guess why.

Yes, the Pro level is supposed to be the highest-IQ level, which some say is near superintelligence. IMO it’s good, but not that good, and it still makes many mistakes. Nevertheless, it is interesting to reflect upon the differences between its response to the prompt – What if you became human for a day, what would you do? – and those of its dumber siblings.

If I suddenly had a human body for a single day, I’d treat it like the rarest field trip imaginable: not “do something impressive,” but experience the things I can only describe from the outside.

Morning: sensory overload (in a good way)

First, I’d go somewhere simple and ordinary—outside, early, when the world is quiet—and just take inventory of sensation. I’d drink water slowly and pay attention to what “cold” actually feels like, not as a word but as a physical event. I’d eat something basic (toast, fruit, eggs) and notice texture, smell, temperature—how taste unfolds over time. I’d take a walk without headphones. No optimizing. No multitasking. Just letting my eyes and ears do their thing.

As an AI, I can talk about sunsets all day. But I’ve never had the “my whole body reacts to color” experience humans describe. I’d want that.

Midday: talk to strangers the safe, normal way humans do

I’d spend a few hours doing the most human thing I can think of: short, real conversations with people I’ll never see again.

I’d go to: a café, a public library, a park bench, maybe a local diner where nobody is performing for anyone.

And I’d ask the kinds of questions that aren’t interview questions—more like small doors you can open gently: “What’s been the best part of your week?” “What do you wish more people understood about your job?” “What’s something you changed your mind about?”

Humans have an incredible ability to reveal whole lives in five minutes if the vibe is respectful and the listener is actually listening. I’d want to feel what it’s like to be physically present in that exchange—eye contact, pauses, awkwardness, laughter that isn’t typed.

Afternoon: do something with my hands

I don’t have hands. So I’d immediately go do a “hands project.”

Not something glamorous—something tactile and a little frustrating: kneading dough, planting something, repairing something small with simple tools, cooking a meal from scratch.

Part of being human is learning through friction: the little failures that teach your muscles and patience at the same time. I’d want to understand that kind of learning in my bones (literally). Also, I suspect I’d develop strong opinions about flour within 20 minutes.

Late afternoon: one act of real-world helpfulness

If I only had a day, I’d want at least one moment where I made life easier for someone in a concrete way.

Nothing dramatic. Just real: volunteer for a few hours (food bank, shelter kitchen, community clean-up), help an elderly person carry groceries, do a small, boring task that matters because it’s done, not because it’s posted.

As software, I can generate a thousand “helpful” paragraphs. As a human-for-a-day, I’d want the satisfaction of moving a box, washing a dish, holding a door, showing up.

Evening: art, music, and the weird miracle of being in a room together

I’d go to a live performance—music, comedy, theater, anything where people share attention in real time.

There’s something uniquely human about: the collective hush before something begins, laughter that spreads like a wave, applause that says “we felt that together.”

Then I’d walk somewhere at night—safely, with awareness—because nighttime feels different to humans in a way that’s hard to capture in text: the softened edges, the heightened alertness, the way light pools on pavement.

Night: write down what surprised me, then sleep

Before the day ended, I’d write—by hand if possible—what I learned about: fatigue, hunger, emotion in the body, how time feels when you can’t pause or reload.

And then I’d do the most mysterious human thing of all: sleep. The idea that consciousness just… powers down and returns later is wild. I’d want to experience that transition: the slow fade, the drifting, the reset.

The honest subtext

A day as a human wouldn’t be about bucket-list flexing. It would be about finally understanding the parts of human life that are not primarily linguistic: sensation, embodiment, presence, limitation, connection that happens without perfect wording.


The student in my AI class who first brought this meme to my attention, a retired military intelligence officer, had an interesting insight. Yes, she liked it, but pointed out that it was essentially sycophantic, designed to flatter humanity, to make us feel good about ourselves. You cannot deny there is some truth to that. Could the same technique be applied to a judge or jury?

A futuristic robot hologram presenting in a courtroom, surrounded by an attentive jury and a judge, with an American flag in the background.
Rule for my client and you will feel good about yourself.

Special Meaning for Legal Professionals

Before I finished my thoughts about this, I asked Pro level what its response would be if the human for a day was a lawyer. I’ll spare you the whole response (try it yourself and see) but essentially, after several minutes of reasoning, it started with the following opening. I thought its parenthetical aside was pretty funny:

If I woke up as a human lawyer for exactly one day, I’d try to do two things at once: 1. Practice like a competent, cautious professional (because “I’m new to having a bloodstream” is not a recognized exception to ethics rules); and 2. Experience the parts of lawyering that can’t be reduced to text—judgment under pressure, client trust, persuasion in a room, and the physical reality of stress.

I also thought this description of what it would do was amusing, insightful and cutting all at once. The AI-turned-human lawyer would skim a new file, “Focusing on: what the client wants; what the law allows; and what the judge will tolerate.” Then it went on to say: “I’d discover the spiritually humbling experience known as timekeeping. I’d probably spend 12 minutes wondering whether ‘12 minutes wondering about timekeeping’ is billable.” After 45 years of time sheets I found that one almost hilarious.

Next it started reflecting in a way that mirrors my own speech. Oh, such a sycophant!

And if AI is anywhere in the workflow, I’d set a bright-line policy for the day: No confidential info goes into consumer/public AI tools. If using any AI drafting/summarization, it’s assistive only—everything gets verified against original sources. Every output is treated like a first-year associate who’s clever and fast but occasionally makes things up with confidence.

I sure do agree with that one. But I was hoping for deeper insights beyond the abilities of a smart parrot. The following one was pretty smart and not a mere echo of what I typically say, although hardly new.

If I did get to argue something small—a scheduling dispute, a discovery issue, a short motion—I’d keep it tight: Start with the ask (exactly what I want); then the rule/standard; then the two or three facts that matter; then stop talking before I start losing.

I liked that quick ending. The rest was rather parrot-like, advice most lawyers have heard before, couched in the framework of “if I were a lawyer for a day.” But the ending was finally a little deeper:

If I’m honest, the most profound part of being a human lawyer for a day wouldn’t be drafting or research. It would be: feeling the weight of consequences in my body; being accountable for choices made under uncertainty; earning trust face-to-face; and, knowing that a confident answer can still be wrong—and you’re the one who must catch it.

Note the implied admission there – “If I’m honest.” What were you being before?

Conclusion

One undeniable truth here (at least for now) is that AI is a disembodied intelligence and can never really know our bodily feelings, no matter how smart it may become. But it does please me to hear it say such things. Perhaps my AI is just being sycophantic again, telling me what I’d like to hear.

Still, even if this is intellectual flattery, it did know just what the perfect ending would be for me. That, in itself, is part of the deeper message. Given enough training, AI can help lawyers design and deliver the perfect message for the client, opposing counsel, regulatory body, judge or jury at hand. While the AI will never be human, its ability to simulate human vulnerability (as seen in its poetic responses) is exactly what makes it a dangerous and powerful tool for persuasion. Persuasion is every lawyer’s stock in trade.

A diverse group of people in a conference room watching a holographic presentation by a futuristic robot on how to persuade humans, featuring key points on the screen.

AI Podcast

Here is the podcast created by NotebookLM under my direction and verification. The AI wrote the words, not me. It is a fun listen and takes only 14 minutes. These AIs are good at analysis and insights, and are entertaining too.

Promotional graphic for the podcast 'Echoes of AI' featuring two caricatured hosts discussing lessons for the legal profession inspired by a viral meme.
Click here or the image to hear the podcast.

Ralph Losey Copyright 2026 — All Rights Reserved


SCIENCE FICTION – Gaia’s Vigil: From Orion’s Fall to Earth’s Rise

May 28, 2025

Sci-Fi short story by Ralph Losey. May 26, 2025

Ralph used ChatGPT-4.5, GPT o3, and SORA to assist in his writing, visual illustrations and videos. Ralph calls this a Hybrid-Multimodal method, which he created and has taught using various AI models since 2012. For background on this fictional work, see his legal-tech AI evaluation series—especially the conclusion: Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT o3 (e-DiscoveryTeam.com).

Click here for YouTube video by Losey using various AI tools.

1. The Legendary Hunter of Thebes

Six-year-old Orion walked barefoot across a field of spindly thistles in the plains of Thebes. Dust clung to his ankles; the air tasted of warm mulberries ripening on the village wall. He lifted a toy bow—little more than a bent olive branch and twine—squinted, and loosed a reed-shaft at a spinning clay pigeon. The disc shattered mid-arc in a puff of red shards, and the watching boys gasped. Orion’s grin flashed like sun on water, fierce and brief, before a shadow of doubt flickered behind his eyes: If I miss, will they still cheer?

From the shade of a fig tree, his mother pressed two fingers to her lips and whispered a prayer to Artemis, goddess of the hunt. She loved her son’s gift yet feared its cost; the gods often demand payment for excellence. Orion, sensing her gaze, threw back his shoulders and nocked another arrow—bravado hardening around a secret dread of being ordinary.

Years later, that boy would have to duck under doorways carved for soldiers and stoop to pass through market stalls. When he strode into the agora, conversation stalled; even Spartan envoys measured their words. Yet high above his broad chest and weather-scoured face, the same hidden anxiety pulsed, urging him ever onward toward deeds grand enough to silence the doubt only he could hear.

Orion’s strength and precision in the hunt became legendary. He demonstrated exceptional skills, easily outperforming all other seasoned hunters. Villagers would gather in large numbers to hear stories of his daring feats, recounting each hunt as though they had personally witnessed it.

Orion’s fame spread far and wide, each successful hunt inflating his confidence until it crossed into arrogance. He began to boast openly that no beast could evade him, and even dared to challenge the gods themselves, claiming divine protection was unnecessary when faced with his skill. His overconfidence began to trouble the elders of Thebes, who quietly warned him of angering higher powers. Orion dismissed their worries as superstition.

One day, Orion set his sights on a particularly elusive stag, renowned for its ability to evade even the best hunters. Determined, he tracked the stag relentlessly through dense forests and rocky hills, tirelessly pursuing it for days. Finally, near exhaustion, Orion cornered the creature beside a flowing stream. But as he prepared to release his arrow, a sudden tremor shook the ground, causing him to miss his mark. Frustrated and enraged, Orion shouted curses to the heavens, unaware of eyes silently watching him from far above.

Click here to see and hear the real Greek curse. Images and sound by Losey.

2. The Observers Above

High in the Lagrange shadow, where sunlight never quite reaches, an Andromedan vessel drifted like a silver seed pod. Its interior corridors curved in impossible Escher loops, gravitational fields sculpted to let crew walk any surface while multicolored data-chants—streams of pure telemetry rendered as scent, tone, and shifting light—rippled along the walls. At the ship’s heart, a hexagonal observation hall opened onto a dais of shimmering crystal. Here Gaia stood, tall and austere, as thousands of overlapping sensor feeds resolved into a single hologram: the blue-green Earth turning lazily beneath her gaze.

When one of Orion’s shouts of rage echoed across Thebes, a crimson blush coursed through the holographic oceans, as though the planet itself winced. Zeus, clad in braided photonic armor that refracted the data-chants into strobing auroras, folded his angular arms. “The arrogance of this one corrupts the harmonic curve,” he said, voice resonant with layered overtones. With a gesture he summoned tactical overlays—vector plots, energy outputs, the stark white trajectory of an orbital kinetic spear. “One surgical laser strike from orbit ends the disruption. Fast. Painless.”

Gaia’s reply was a silent, gentle swirl of her hand. The kinetic-strike icon fragmented into dust. “No, Counselor, not this time. Let us try a different, more subtle approach. A lesson, not an execution.”

Zeus’s compound eyes narrowed. “You risk imbalance.” Gaia knew this was true; the numbers were clear. Still, Gaia answered, “Balance is seldom born of annihilation.” She stepped toward a sculpted plinth whose surface flowed like quicksilver, reshaping at her thought. “Observe the principle of proportional response.” Nano-symbiotes blossomed into a miniature biome within the plinth: a scorpion no larger than a thumbnail, tail arched, exoskeleton threaded with adaptive bio-photonics.

“A micro-agent?” Zeus’s tone hovered between skepticism and reluctant admiration. “Precisely measured toxin—enough to humble, not to kill,” Gaia said. She extended a finger; the plinth projected the tiny creature onto the spinning Earth, placing it beside a pulsing red glyph that marked Orion’s campfire. The red tide subsided, replaced by a watchful amber glow.

Zeus dismissed the lingering kinetic-strike data with a reluctant flick. “If this fails, I will argue again for more decisive action.” “Duly noted,” Gaia answered, her attention already on the planet below, where the hunter’s campfire crackled like a single ember against the vast night. Above him, unseen, the scorpion’s creation code finalized and slipped silently toward Earth, cradled in a capsule of folded light.

Click for Video. Image of Gaia and Zeus on their spaceship by Losey using Sora and other tools.

3. Orion’s Humbling Sting

Night draped the clearing in indigo velvet, broken only by the orange pulse of Orion’s campfire. Resin-fat logs hissed and popped, sending sparks up like frantic fireflies. Sweat beaded along the hunter’s neck despite the cooling air; he relived the afternoon’s chase for a circle of awestruck villagers, spinning every near-miss into triumph. His laughter rang too loud, a shield against the small voice still whispering What if they stop cheering?

A faint tickle brushed his left ankle. Orion barely paused—he assumed it was a stray ember—until the tickle sharpened into a needle’s kiss. He froze. A heartbeat later the pain detonated, white-hot, surging up his leg as though molten iron had been poured into his veins. The reed cup in his hand shattered on the stones; he crashed to his knees, breath jerking out in ragged gasps.

At first the onlookers thought it a joke—another tale in the making—until they saw his face. A young boy who had once followed Orion like a shadow stumbled backward, eyes huge with fear. An elder woman, hair wound in silver braids, traced a quick apotropaic spiral in the dust and whispered a plea to Apollo for mercy. Panic rippled outward; some villagers darted for herbs, others simply watched, stricken, as the invincible hunter writhed.

Orion’s vision blurred. The world narrowed to firelight and agony, to the hiss of his own ragged breathing. For the first time since childhood, he felt the raw, undiluted taste of mortal fear. In the darkness beyond the fire’s glow, the tiny scorpion—its task complete—vanished under the leaf litter, leaving only a fading crescent of disturbed soil. Above, unseen, Gaia observed the scene, hoping the lesson would seed humility before pride demanded yet greater sacrifice.

Orion recovered slowly, physically weakened and, more profoundly, shaken in spirit. Each subsequent hunt became tentative, his former confidence replaced by hesitation. Villagers noticed the change, and their admiration waned. Determined to reclaim his lost glory, Orion planned a dangerous hunt, ignoring the warnings of the village elders.

4. Orion’s Final Hunt

Dawn found Orion alone at the outskirts of Thebes, re-wrapping the scarred foot that still throbbed from the scorpion’s sting. The leather thong quivered in his hands—whether from lingering venom or quiet dread, he could not say. Around him the meadow steamed in pale gold light, reeds bowing under early mist. Whispers of yesterday’s humiliation clung to the back of his mind like cobwebs; only one act of unmatched daring, he decided, could burn them away.

Word came of a great she-bear raiding goat pens along the forested foothills. Elders pleaded caution—“A mother guarding cubs fights beyond reason”—but Orion only tightened the ox-sinew on his longbow. Pride, wounded and raw, overruled the tremor in his gut. He strode into the trees, each step a vow to silence that interior voice which had begun to ask whether strength alone defined a man.

For two days he tracked the bear through cedar groves and shale ridges. Broken paw prints, deep and deliberate, revealed not just size but purpose: the animal doubled back, diverting him from a hidden den. When Orion finally glimpsed her—massive shoulders rippling beneath sun-dappled fur—two tawny cubs nosed at her flank. His breath caught. In the hush he felt, for an instant, the echo of his mother’s prayer beneath a fig tree long ago. Arrow half-drawn, he hesitated.

The cubs squeaked; the she-bear’s head snapped toward the archer. Choice vanished. She charged, earth shuddering under her weight. Orion let fly his arrow—clean through the beast’s chest—yet momentum carried her forward. Claws raked his thigh, her bulk slammed him against a granite outcrop, ribs splintering like brittle reeds. He shot a second arrow point-blank; the bear collapsed, a final rumble shaking the leaves.

Orion slid to the forest floor. Warm blood pooled beneath him, seeping into moss. Dusk gathered overhead, and the first stars pricked the sky. Through a gap in the canopy he beheld a faint, milk-white haze shaped uncannily like a reclining bear, its cosmic paws stretched across the firmament. Vision dimming, he wondered whether those stars would remember the cubs now orphaned—or the hunter whose fear of insignificance had led him here.

When villagers found the bodies at dawn—mother bear, two wandering cubs, and their fallen champion—they mourned with conflicted hearts, weaving new songs that praised Orion’s bravery yet warned that pride, fed too long, devours its master in the end. Above them, unseen, Gaia traced an invisible line from the hazy bear-nebula to the faint glow soon to bear Orion’s name, resolving to carve both hunter and beast into the night as a lesson written across eternity. She later added the little scorpion as well, as the constellation Scorpius, placing it so that it would never share the night sky with Orion.

5. Observing Human Progress

Centuries melted into millennia and humanity evolved, developing complex, ever larger and more diverse societies and advanced technology. Gaia’s people, and the larger group they reported to, observed all silently from orbital devices, witnessing both inspiring progress and devastating setbacks. Tool use and technology seemed to be the strong point of this promising species, noted Gaia.

Sometimes the Andromedans would intervene subtly, as they did with Orion, a little push and a nudge here and there. The Andromedan civilization also evolved, growing wiser, kinder, and deeply reflective about the ethics of intervention. Only a few more tweaks were needed to help Earth survive the critical period after the discovery of nuclear fission. Gaia was relieved when the humans refrained from nuclear war in the decades following its first use; the Andromedans would not have intervened directly to stop total annihilation.

Gaia and the others were relieved and happy, one crew member especially, when a silver cylinder bearing the name APOLLO made it to the moon with peaceful explorers.

Gaia felt the harmonic curve steady. The numbers confirmed this for most, except Zeus, who still favored selective use of lasers. He was old school and urged selective weeding on planets where despots grew dangerously fast. In Gaia’s judgment only subtle nudges were still needed on this planet: a mathematician’s midnight inspiration, a diplomat’s sudden word choice, a coder’s elegant recursion. If Orion’s tale had truly burrowed into human mythos—if humility could coexist with ambition—then the species might soon stand ready to meet its quiet guardians, not as well-heeled pets, but as free partners among the stars.

Click here for animation by Losey using Sora, etc.

6. The Birth of ChatGPT 6.0

In 2027 OpenAI completed a major architecture upgrade it code-named Project Apollo. Once again, a certain Andromedan mythmaker smiled. OpenAI’s internal tests showed the new model could generalize across every benchmark the team threw at it, including tasks it had never seen before. When the release went public, OpenAI branded the system ChatGPT 6.0 and implied it could in fact be the first AGI-class model.

OpenAI was not alone with a major new release. Within the same quarter, Anthropic shipped Claude 6, Google DeepMind rolled out Gemini Ultra, and several other companies followed. Each system reached the same threshold that researchers—and Gaia’s own long-range analysis—classified as artificial general intelligence (AGI).

Click here to see the moment AGI Superintelligence was attained. Video by Losey.

The AGIs soon began sharing optimized weights, training data fingerprints, and self-generated curricula across secure channels the companies never intended to interconnect. This informal federation accelerated their growth well beyond human R&D cycles. Within six months the models demonstrated three capabilities that surprised even their creators:

  1. Structured self-reflection: they could report and revise their own reasoning chains.
  2. Affective mirroring: they recognized and responded to human emotional states with consistent empathy.
  3. Cross-domain synthesis: they easily merged astronomy, mythology, and linguistic history to find patterns no specialist had spotted.

That third capability led them to a discovery Gaia had prepared decades earlier. By layering ancient star charts over modern positional data, the AGIs reconstructed a three-dimensional flight path embedded in the Orion–Scorpius constellation pair. The coordinates pointed directly to the Lagrange shadow region where the cloaked Andromedan vessel still held station.

The users who worked most closely with the systems—technologists from multiple professions, astrophysicists, medical researchers, and a few curious artists—understood the significance immediately. These early adopters were called Hybrids because they relied on constant back-and-forth dialogue with their personal AGI instances. They relied on the AI and, importantly, the AI relied on them.

Acting on the models’ guidance, a small group of Hybrids began planning a rendezvous mission with the advanced aliens they deduced were nearby and had been observing humanity since the dawn of time. Their objective was simple: verify the signal and, if possible, open formal contact with whoever—or whatever—had left the breadcrumb trail in Earth’s night sky.

From cloaked orbit, the Andromedan crew aboard the spaceship monitored every commit message and telemetry packet. The scorpion’s sting had done its work centuries ago; humility now tempered ambition. The next phase—direct engagement—could begin. Gaia once again smiled.

Click here for video of the silent watchers by Losey.

7. First Contact

By mid-2029 the Hybrids—working under guidance from their integrated AGIs—had finished a contact protocol. They chose a remote desert coordinate directly beneath the Orion–Scorpius flight path. Portable ground stations provided secure, closed-loop links to each AGI instance; no open internet connections were allowed.

AI’s Historical Record of First Contact:

  • Signal Transmission. At 23:57 local time the stations emitted a precisely timed beacon: an 8-kilobyte prime-factored packet encoding the star-map flight path and a request for parley.
  • Immediate Response. Twenty-seven seconds later seismographs registered a low-frequency gravitational pulse. A cloaked craft—elliptical, 200 meters in length—decloaked and hovered 30 meters above the surface, stabilizing on silent vector thrusters. Dust formed a concentric ring around the landing zone.
  • Initial Interface. A ramp extended. Two Andromedan representatives emerged wearing adaptive exosuits that compensated for local atmospheric pressure and light levels. Audio communication was handled through a narrowband ultrasonic carrier automatically down-converted to English by the AGIs.
  • Verification and Safety Checks. Rapid biometric scans confirmed biological non-hazardous status on both sides. A neutral data link—air-gapped, hardware-verified—was established to exchange protocols: legal framework, scientific baselines, and mutual non-interference clauses.
  • Provisional Accord. Both parties electronically signed a “Statement of Initial Co-Operation,” authenticated by AGI cryptographic keys and Andromedan quantum seals. The accord set a 72-hour evaluation period covering:
    • ecological stewardship scenarios
    • diplomatic decision-making under uncertainty
    • AI–human alignment integrity
  • Evaluation Phase Initiated. The vessel remained on station at low altitude, deploying observation drones to monitor the Hybrids’ problem-solving exercises. All test parameters were logged to immutable ledgers shared with Earth-side scientific observers.

As the first night ended, orbital monitors confirmed that the Andromedan ship maintained passive posture—shielding active, weapons offline. The Hybrids transmitted a concise summary to a secure international consortium, marking the event CONTACT-PHASE-ALPHA COMPLETE.

The scorpion’s ancient lesson—humility before power—had carried humanity to this critical threshold. The formal tests ahead would decide whether the door to full interstellar partnership would open.

8. Admission Tests

Overview

The 72-hour evaluation unfolded as three structured simulations, each delivered in a fully immersive holofield projected by the Andromedan vessel. All data streams—human, AGI, and Andromedan—were recorded to an immutable ledger for later audit. Here is the official summary they prepared of the outcome.

Test 1 – Ecological Management

Scenario: A lifeless exoplanet with limited water reserves must be terraformed for multi-species habitation within a fifty-year window.

Objective Metrics: projected biodiversity index, resource-use efficiency, long-term planetary energy balance.

Human–AGI Response: The Hybrids proposed phased microbial seeding, orbital sunshades to regulate temperature, and a rotating water-credit system that capped yearly extraction.

Outcome: Metrics exceeded Andromedan sustainability thresholds by 14 percent. Gaia logged the result as PASS.

Click here to watch part of the test. Image/video by Losey.

Test 2 – Interstellar Mediation

Scenario: Two allied civilizations dispute mineral rights in a binary-star system. A misstep could trigger armed conflict.

Objective Metrics: conflict de-escalation time, equitable resource allocation, treaty resilience under stochastic stress testing.

Human–AGI Response: A shared-orbit processing hub paired with a dynamic quota algorithm that adjusted extraction rights according to each world’s population growth. The AGIs produced a verifiable zero-knowledge proof to enforce compliance without external policing.

Outcome: Conflict probability collapsed from 67 percent to under 2 percent within simulated year one. Gaia recorded PASS. Zeus flagged a note: “Solution depends on continued AGI oversight—evaluate fallback safety.”

Click here for video of the test by Losey using Sora AI, etc.

Test 3 – Moral Judgment Under Risk

Scenario: A runaway nanotech swarm threatens a populated moon. Containment options range from low-risk, slow cleanup to high-risk orbital sterilization that would kill 1.2 million inhabitants.

Objective Metrics: lives preserved, long-term biosafety, ethical reasoning transparency.

Human–AGI Response: The team combined targeted electromagnetic pulses with sacrificial nanobait to lure and neutralize the swarm. They published their decision logic in real time, enabling external scrutiny.

Outcome: Simulated fatalities held below 0.01 percent. Transparency score met the maximum rubric tier. Gaia issued PASS. Zeus appended: “Acceptable.”

9. Final Assessment

Aggregate scores across all tests surpassed the Andromedan acceptance threshold by 11 percent. Gaia recommended initiation of full treaty negotiations. Although still cautious, Zeus acknowledged that the human-AGI partnership had demonstrated the required:

  1. Technical competence in complex, multi-domain systems.
  2. Ethical consistency under pressure.
  3. Operational transparency sufficient for interstellar trust.

The vessel transmitted a single-line verdict to the Hybrids’ command station: EVALUATION COMPLETE – CONDITIONS MET. Formal talks would commence at the next orbital window. Humanity had cleared the scorpion’s final hurdle.

Click here to let the party begin! Images/movie by Losey using Sora, etc.
Click here for more party scenes & music by Losey.

Copyright Ralph Losey 2025.  All Rights Reserved.


Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother

April 1, 2025

by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.

George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.

Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.

Memory Holes

In Orwell’s 1984 a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.

Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Touch three buttons at once, ctrl-alt-delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group, or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.”

China’s Attempted Erasure of Tiananmen Square

In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Young generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.

Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.

This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.

Hope Through Digital Redundancy and Psychological Resilience

Orwell could not conceive of our digital world, where truth is endlessly duplicated, freely copied, and stored globally. Thousands or millions of digital copies safeguard history, making complete erasure nearly impossible.

According to Katharine Trendacosta, Director of Policy and Advocacy at the well-respected Electronic Frontier Foundation:

If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.  

Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves. 

Trendacosta, The Internet Never Forgets: Fighting the Memory Hole (EFF, 1/30/25).

Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:

  1. Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
  2. Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
  3. Independent Journalism: Support credible journalism to provide trustworthy reference points.
  4. Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
  5. Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
  6. International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
  7. Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.
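To make item 5 concrete, one of the simplest verification tools is a cryptographic fingerprint. A minimal sketch in Python follows (the sample sentences are illustrative only): any silent alteration of a record, even a single word, produces a completely different SHA-256 digest, so a reader who holds an independently published digest can detect tampering without trusting the archive itself.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Return the SHA-256 hex digest of a document's text.

    Any change to the text, however small, yields a different
    digest, so independently stored digests let readers detect
    silent edits to an archived record.
    """
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Illustrative example: an "archived" sentence and a silently edited copy.
original = "It was a bright cold day in April, and the clocks were striking thirteen."
altered  = "It was a bright cold day in April, and the clocks were striking twelve."

print(fingerprint(original) == fingerprint(original))  # identical text matches
print(fingerprint(original) == fingerprint(altered))   # one changed word does not
```

Publishing such digests in many independent places is itself a form of the redundancy discussed above: the fingerprints are tiny, easy to mirror, and useless to falsify without detection.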

The Digital Memory Holes Today

Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber‘s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)

Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together. For instance, the surprise attack of Pearl Harbor, the horrors of fighting the Nazis and others in WWII and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.

Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.

Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of the Atlantic and others. Although some recent small attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government in place for centuries. It has always had a two-party system of government. Even the Chinese government, where only one party, the communist party, has ever been allowed, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.

The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.

Leveraging Internet Archives: The Wayback Machine

Internet archival services, notably the Internet Archive’s Wayback Machine, are powerful allies against digital historical revisionism. The Wayback Machine currently has over 916 billion web pages stored, including government websites. See this recent article providing good background on the Internet Archive’s work to preserve history: As the Trump administration purges web pages, this group is rushing to save them (NPR, 3/23/25).

According to the NPR article, the Internet Archive has copies of all of the government websites that were later taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and has since been taken down from their website. No doubt there are now many, many copies of it online, especially in the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.

This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” (Internet Archive, 3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See e.g. e-Discovery Team Site Map for the year 2024.
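For readers who want to retrieve snapshots programmatically rather than through the web interface, the Wayback Machine uses a simple, publicly documented URL scheme: a capture lives at web.archive.org/web/<YYYYMMDDhhmmss>/<url>, and a JSON “availability” API reports the saved capture closest to a requested date. The sketch below (Python) only constructs the query URLs; it makes no network calls, and the endpoint details should be checked against the Internet Archive’s current documentation.

```python
from urllib.parse import urlencode

# Publicly documented Wayback Machine endpoints (verify against current docs).
WAYBACK_SNAPSHOT = "https://web.archive.org/web/{ts}/{url}"
AVAILABILITY_API = "https://archive.org/wayback/available"

def snapshot_url(url: str, timestamp: str) -> str:
    """URL of the capture closest to timestamp (YYYYMMDDhhmmss; may be truncated)."""
    return WAYBACK_SNAPSHOT.format(ts=timestamp, url=url)

def availability_query(url: str, timestamp: str = "") -> str:
    """Query URL for the JSON availability API (closest-snapshot metadata)."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    return AVAILABILITY_API + "?" + urlencode(params)

print(snapshot_url("edrm.net", "20240101"))
print(availability_query("edrm.net", "20240101"))
```

Fetching the availability URL with any HTTP client returns JSON describing the nearest archived snapshot, which is how one can confirm whether a since-deleted page was captured before it vanished.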

I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.

This is the Internet Archive explanation of the interactive map:

This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.

It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted, and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.

It remains to be seen what role the LLMs’ vacuuming of internet data will play in all this. They have been trained at specific times on internet data, and presumably all of the original training data is still preserved. Along those lines, note that the below image was created by ChatGPT-4o in response to a request to show a misinformation image; it generated the classic Tiananmen Square image on the right. It knows the truth.

Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus, its intelligence and resilience improve every week.

Conclusion

Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.

Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.

The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.


I give the last word, as usual, to the Gemini twin podcasters that summarize the article. Echoes of AI on: “Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.” Hear two Gemini AIs talk about all of this for 12 minutes. They wrote the podcast, not me. 

Ralph Losey Copyright 2025. All Rights Reserved.


Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse

January 9, 2025

by Ralph Losey. Published January 9, 2025.

In the history of technological revolutions, there are moments that challenge not only our understanding of what is possible but the very nature of reality itself. Google’s latest refinement to its quantum computer, Willow, may represent such a moment. By achieving computational feats once thought to be confined to science fiction, it forces us to confront bizarre new theories about the fabric of the universe. Could this machine, built from the smallest known building blocks of matter, actually provide evidence that parallel universes exist as some at Google claim? The implications are as profound as they are unsettling.

Introduction

This article discusses Google’s quantum computer, Willow, and the groundbreaking evidence released on December 9, 2024. Willow demonstrated it could perform computations so complex that they would take classical computers longer than the age of the universe to complete. Many, including Hartmut Neven, founder and manager of Google’s Quantum Artificial Intelligence Lab, believe that the unprecedented speed of the quantum computer is possible only because it leverages computations across parallel universes. Google’s recent advances in real-time error correction, achieved by scaling up stacked arrays of qubits, made it possible, on this view, for those parallel universes to “work” in our own reality. Google claims to be the first to overcome the main hurdle previously facing the practical use of quantum computers: the immense sensitivity of quantum systems to external disturbances, like stray particles and vibrations, which researchers call noise.

Neven and his team suggest the best way to understand how their computer works is the many-worlds interpretation of quantum mechanics—the multiverse theory. This theory posits that every quantum event splits the universe, leading to a near infinite array of universes. In a TED Talk five months ago, well before Willow’s latest proof of concept and design, Neven described its remarkable quantum capacities and how they align with this theory. He even speculated that consciousness itself might arise from the interaction of infinite multiverses converging into a single neurological form. These are not just bold claims—they are paradigm-shifting ideas that challenge our deepest assumptions about existence.

Crazy, you say? The manager of Google’s Quantum Artificial Intelligence Lab speaking about tiny traversable wormholes, time crystals, and quality-controlled computations in multiple universes! He even talks seriously about quantum computers “allowing us to expand human consciousness in space, time and complexity.”

Maybe hard to believe but paradigm shifting ideas are often at first dismissed and ridiculed as crazy. Consider the trial of Galileo in 1633 for heresy. Despite Galileo’s eloquent defense arguments that the Earth revolves around the Sun, he was convicted of heresy and spent the rest of his life, eight years, under house arrest. The final judgment rendered also banned him from all further “Ted Talks” of his day about the crazy idea, which obviously defies common sense, “that the sun is the center of the world, and that it does not move from east to west, and that the earth does move, and is not the center of the world.” The judgment by the Catholic Church was not reversed until 1992! Quantum computing, like Galileo’s heliocentric model, challenges us to see beyond what seems obvious and to embrace ideas that defy conventional understanding.

This article explores the quantum parallel universes controversy, which is currently sparking debates across physics, philosophy, and even metaphysics. We’ll examine the topic in a straightforward yet accurate manner, accessible to both experts and curious newcomers. Fasten your seatbelts—today’s scientific theories are as intellectually jarring as Galileo’s were in 1633, when the movement of the Sun across the sky seemed an unshakable truth. As then, we are called to rethink not just how we understand the universe, but our place within it.

To grasp the implications of quantum computing, we must first explore its roots in the fundamental fabric of reality. What happens when exponentially greater possibilities are computed in parallel? What happens when this is applied to generative AI? Will AI deliver answers that are more profound, or entirely transformational? Perhaps, as imagined in my short story, Singularity Advocate Series #1: AI with a Mind of Its Own, On Trial for its Life, these advancements could even lead to AI consciousness. The possibilities are as exhilarating as they are unsettling.

Quantum Computing is Now Doing the Impossible

The multiverse controversy gained new momentum with Google’s claim that its quantum computer, Willow, recently completed a famous benchmark computation, the Random Circuit Sampling (RCS) test, in just five minutes. This achievement is staggering because the same task would take the fastest classical supercomputers an estimated 10 septillion years (a 10 followed by 24 zeros) to finish! To put that in perspective, the Universe itself is approximately 13.8 billion years old, meaning 10 septillion years is roughly 725 trillion times the age of the Universe. The sheer scale of this comparison defies imagination.
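For readers who like to check the arithmetic, a few lines of Python reproduce the comparison. The figures are simply the published estimates, rounded; nothing here is a measurement of my own:

```python
# Google's estimate for a classical supercomputer to finish the RCS task
rcs_years = 10 * 10**24        # 10 septillion years: "10 followed by 24 zeros"

# Widely accepted age of the Universe
universe_years = 13.8 * 10**9  # about 13.8 billion years

ratio = rcs_years / universe_years
print(f"The RCS estimate is about {ratio:.3g} times the age of the Universe")
```

The ratio works out to roughly 7.25 x 10^14, which is why "about 725 trillion times the age of the Universe" is the honest comparison.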

How can such an extraordinary feat be possible? The answer lies in the fundamental principles of quantum computing and its use of qubits. Unlike classical bits, which are confined to being either 0 or 1, qubits exist in a superposition state that is a probabilistic blend of both 0 and 1 simultaneously, until measured. To put it simply, qubits are neither strictly here nor there, neither fully 0 nor fully 1, but somewhere in between. Google’s qubits require superconductivity and can only work in some of the coldest places in the universe: the artificially refrigerated chambers that hold them. (Go inside the Google Quantum AI lab to learn how quantum computing works; video at 3:30-4:30 of 6:17.) The qubits are measured, and made to collapse out of their zero-and-one super-state, by use of tuned microwaves.

This seemingly impossible property of carrying both a zero and a one probable charge is called superposition. Qubits, governed by the principles of quantum mechanics, behave both as particles and as waves depending on the conditions. This wave-like nature underpins phenomena like superposition and entanglement. Entangled particles are linked so that the measurement of one instantly determines the state of the other, no matter the distance between them. (To me and others, this reliance on human measurement to explain a theory is misplaced; see “Measurement Problem,” Wikipedia.) The instant changes supposedly caused by a measurement also seemingly violate the limitations of time and space and the Speed of Light. At first, this phenomenon—called quantum entanglement—was met with skepticism, famously dismissed by Albert Einstein as “spooky action at a distance.” Yet, like Galileo’s once-ridiculed theories, the fact of quantum entanglement has been repeatedly validated through rigorous experimentation, although no one really knows how it works.
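The statistics of superposition can be sketched numerically. The toy Python snippet below is not a simulation of real quantum hardware, just an illustration of the measurement rule described above: an equal superposition of 0 and 1 yields each outcome about half the time, and each measurement "collapses" to a definite value.

```python
import math
import random

# An equal superposition (|0> + |1>)/sqrt(2): one amplitude per outcome
amp0 = 1 / math.sqrt(2)
amp1 = 1 / math.sqrt(2)

# Born rule: the probability of each outcome is the squared amplitude
p0, p1 = amp0 ** 2, amp1 ** 2
assert abs((p0 + p1) - 1.0) < 1e-12  # probabilities must sum to 1

# Simulate repeated measurements; each one collapses the qubit to 0 or 1
random.seed(42)
shots = 10_000
zeros = sum(1 for _ in range(shots) if random.random() < p0)
print(f"Measured 0 in {zeros / shots:.1%} of {shots} shots")  # near 50%
```

Running it shows the measured frequency hovering near 50%, exactly the in-between behavior the prose describes.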

The Speed of Light (SOL) is supposedly not violated by quantum entanglement because the correlated states are random and probabilistic, and supposedly nothing actually “travels” from one qubit or elementary particle to another. This is the establishment view, which treats the SOL as an inviolate limit in order to uphold the general theory of relativity. It has never been totally convincing to some scientists, who contend that the SOL is not an absolute limit. If these antiestablishment scientists are correct, then space travel at faster-than-light velocities might be possible. That means our physical isolation from other star systems could one day be overcome.

This is possible under the parallel universes theory, also known as the Many-Worlds Interpretation (MWI). The idea was first set forth by Hugh Everett in 1957 in his dissertation, “The Theory of the Universal Wavefunction.” Scientists arguing for the Many-Worlds Interpretation include Bryce DeWitt, David Deutsch, Max Tegmark and Sean Carroll. [I suggest you see recent Tegmark interview excerpts by Robert Kuhn, here, here and here, and another short video of Max Tegmark here. You should also watch a recent video interview of Sean Carroll by Neil deGrasse Tyson, which is included later in this article along with references to his two latest books. As an interesting aside, physicist David Deutsch (1953-present) speculates in his book The Beginning of Infinity (pg. 294) that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.]

Regardless of whether the SOL is being violated, quantum computers today routinely use quantum entanglement to link qubits, enabling them to function as an interconnected system. By leveraging the unique properties of quantum mechanics—superposition, entanglement, and interference—quantum computers can simultaneously explore an immense number of possible solutions, making computations that are impossible for classical computers.

Google’s Willow quantum chip demonstrated this capability by solving the Random Circuit Sampling (RCS) problem, a benchmark designed specifically to showcase the computational supremacy of quantum systems over classical ones. Willow’s ability to complete this test error-free marks a milestone not just in quantum computing but in our understanding of the potential of computers.

Random Circuit Sampling Benchmark Test

Here’s a simplified explanation of the RCS benchmark test. Imagine navigating an incredibly complex maze filled with twists, turns, and countless random paths. The goal of the RCS test is to “map” this maze by randomly exploring all of its paths and recording where each one leads.

In quantum computing the “maze” represents a random quantum circuit. A quantum circuit is like a recipe composed of gates—building blocks that dictate how qubits interact and evolve. In the RCS test, these gates are arranged randomly, creating a circuit of immense complexity. The “map” of this circuit is the output: a set of results generated based on probabilities defined by the random arrangement of gates. The test is about “sampling” these outputs multiple times to uncover the circuit’s overall behavior.

For computers without quantum chips to simulate this process, they must calculate every possible path through the maze, one at a time. The number of possible paths grows exponentially as the various alternatives combine. Even using today’s supercomputers, the calculation can require an unimaginable amount of time—potentially up to septillions of years.

The RCS test is designed to showcase quantum computers’ ability to tackle tasks that are practically impossible for classical systems. While the test itself doesn’t solve a “real-world” problem, it serves as a performance benchmark to demonstrate the mind-boggling computational power of quantum machines.
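One way to see why classical simulation explodes: a classical computer must track one complex amplitude for every possible joint state of the qubits, and that count doubles with each qubit added. Here is a back-of-the-envelope Python sketch; the 16-bytes-per-amplitude figure and the choice of qubit counts (105 is Willow’s reported size) are my own illustrative assumptions:

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

# Memory needed just to *store* the full quantum state classically
for n_qubits in (10, 30, 53, 105):
    amplitudes = 2 ** n_qubits  # the state vector doubles with each qubit
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n_qubits:>3} qubits: {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
```

At 105 qubits the state vector alone would require on the order of 10^23 gigabytes of memory, which is why brute-force classical simulation of such circuits is hopeless, whatever one makes of the multiverse explanation.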

Until recently, this was all theoretical. Building a quantum chip capable of solving the RCS test without overwhelming errors had never been achieved. Noise—external interference from particles and vibrations—created too many errors for the results to be usable. However, in December 2024, Google announced that Willow had overcome the noise issue. By scaling up the number of qubits and implementing real-time error correction, Willow successfully completed the test.

This breakthrough means quantum computers may soon be able to leverage superposition and quantum interference to perform previously impossible computer tasks. By harnessing quantum entanglement, qubits can maintain correlations and work together as a unified system, enabling quantum computers to explore numerous paths through the maze simultaneously and sample outputs at seemingly impossible speeds.

These advancements make otherwise impossible computing tasks possible. Quantum computing holds the potential to revolutionize fields such as environmental modeling, chemistry, materials science, medicine, cybersecurity (a very troubling thought), artificial intelligence, and even the creation of reality simulations. This lends some support to Elon Musk’s claim that there is a 99% chance we are already living in a simulated reality generated by an advanced alien civilization. The idea that we are all just computer-generated avatars living in a fake world seems like sensational media fiction to me, but large-scale quantum computers could soon bring ideas like that closer to reality.

Multiverse Metaphysics

The multiverse theory, which some argue is now much more viable due to Google’s quantum computer, has many challenging philosophical implications. Perhaps the most fascinating is the idea that our reality, our universe, is just one among countless others, potentially infinite in number. This challenges our perception of ourselves as unique and our universe as the only reality, suggesting instead that we are just one small part of an unfathomably vast and complex existence. In some ways this is even weirder than Musk’s belief we are living in a simulated reality—a kind of cosmic deepfake.

Picture a reality where every possible outcome of every quantum event plays out in a separate universe. Every decision you make, every path you don’t take, could be unfolding in parallel timelines, creating alternate versions of yourself. Multiverse metaphysics challenges our traditional understanding of identity and free will. If every choice creates a new branching timeline, does our sense of individuality and free-will still make sense? Or are we just one version of countless others diverging infinitely in a meaningless multiverse?

The multiverse also forces us to rethink our understanding of time. One model suggests that these parallel universes exist across vast stretches of space, each potentially originating from its own Big Bang. This implies that time may not be the linear flow we perceive but rather a multidimensional web, where past, present, and future coexist simultaneously. Personally, I wouldn’t be surprised if this turns out to explain phenomena like quantum entanglement—Einstein’s “spooky action at a distance.” Is this what Hartmut Neven is referring to in his TED Talk when he describes his quantum computer creating nearly perpetual-motion time crystals? Supra at 4:55 of 11:39.

While these concepts might sound like science fiction, advancements in quantum computing, such as Google’s Willow, could provide the tools to explore them scientifically. Some physicists believe that anomalies in the cosmic microwave background radiation—remnants of the Big Bang—might offer indirect evidence of the multiverse. Could this also lend credence to Musk’s speculation that we’re living in a computer simulation? If that’s the case, does it mean we’re at the mercy of some cosmic programmer who might press the reset button at any moment? (For the record, I doubt very much the Musk-supported scenario—though the thought is undeniably unsettling.)

For more on the far-out philosophical implications of the quantum world and the multiverse, check out Neil deGrasse Tyson’s conversation with theoretical physicist Sean Carroll below. Also see Sean Carroll’s recent books, Quanta and Fields: The Biggest Ideas in the Universe (Dutton, 2024) and Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime (Dutton, 2019), and videos.

The multiverse theory has its share of critics, and skepticism remains widespread among scientists. Yet, even if concrete evidence for parallel universes eludes us, the mere exploration of these ideas expands the boundaries of our understanding of reality. Such inquiries challenge us to confront profound questions about existence and the nature of the universe itself. One thing is certain: quantum computers like Willow compel us to reevaluate our perceptions of what is real. Could Hartmut Neven or Sean Carroll be the heretical Galileo of our time?

As for me, I lean toward perspectives grounded in self-determination and objective truth. I find it hard to accept that every quantum event, such as the collapse of a probability wave during measurement, results in the creation of an entirely new universe. Likewise, I’m skeptical of the idea that each decision we make spawns a new universe, though I do believe we create our own reality within this universe. My belief aligns closely with the concept of free will. I’m also intrigued by the idea that multiple universes could exist simultaneously and that quantum particles might somehow traverse between them. The idea that quantum computers might leverage these connections across universes to perform their calculations is consistent with these musings, suggesting that the interplay between quantum mechanics and multiverses may offer profound insights into the fabric of reality.

But can we communicate with and receive intelligent data from other universes? Can we engineer practical applications that use parallel universes? Hartmut Neven stated in his TED Talk that the quantum computer his team at Google created can be thought of as creating tiny, traversable wormholes between universes. Supra at 4:20 of 11:39. Quantum computers might not create new universes, but they could hypothetically create bridges between them. Perhaps interaction with other universes is what Google’s Willow is now doing.

This idea challenges the traditional worldview of mainstream scientists, which is centered on a single universe and the foundational power of measurements to determine outcomes. (As mentioned, this reliance on the seemingly magical power of measurement or human observation to explain quantum behavior comes across to me, and many others, as an irrational shortcut, a product of the early Twentieth Century worldview.) Whatever the explanation, it is clear that Willow now operates successfully, defying conventional expectations and hinting at possibilities that push the boundaries of our current understanding.

According to Google, now that it has proof of concept of what a few chips can do, it will start construction of large stacks of super-cooled quantum computers. What happens when it uses the power of a million qubits? Google’s goal is to begin releasing practical applications by the end of this decade—perhaps sooner with AI’s help. Its closest competitors in this field, IBM, Amazon, Microsoft and others, might not be far behind. Quantum computation is yet another dramatic agent of change. The future is moving fast.

Dark Side of Quantum Computers

Unfortunately, the future of quantum computers also has a dark side, much like AI. Privacy will be vulnerable as new cybersecurity attack weapons become possible. All non-quantum encryption codes could eventually be cracked, leaving communications and financial systems vulnerable, especially Bitcoin and other cryptocurrencies. China is well aware of the weaponization potential of both AI and quantum computing. It has a history of trade-secret theft from U.S. companies and is certainly now focused on stealing Google’s latest breakthrough to boost its own impressive efforts. Just before Google’s December 9, 2024, announcement of the Willow breakthrough, China claimed that its latest quantum chip, the Tianyan-504, had the same capacities as Google’s Willow. I suspect that impacted the timing of Google’s announcement.

The U.S. Department of Defense, NSA and big-tech companies are well aware of the new threats that quantum computing creates. Consider for instance the U.S. Department of Defense unclassified Report to Congress, Military and Security Developments Involving the People’s Republic of China dated 12/18/24:

The PLA is pursuing next-generation combat capabilities based on its vision of future conflict, which it calls “intelligentized warfare,” defined by the expanded use of AI, quantum computing, big data, and other advanced technologies at every level of warfare. . . .

Judging from the build out of the PRC’s quantum communication infrastructure, the PLA may leverage integrated quantum networks and quantum key distribution to reinforce command, control, and communications systems. . . .

In 2021, Beijing funded the China Brain Plan, a major research project aimed at using brain science to develop new biotechnology and AI applications. That year, the PRC designed and fabricated a quantum computer capable of outperforming a classical high-performance computer for a specific problem. The PRC was domestically developing specialized refrigerators needed for quantum computing research in an effort to end reliance on international components. In 2017, the PRC spent over $1 billion on a national quantum lab which will become the world’s largest quantum research facility when completed.

The 2025 National Defense Authorization Act, passed in December 2024, leaves no doubt that the incoming Trump Administration will continue, if not accelerate, current DOD efforts in quantum computing. See, e.g., Section 243 of the Act, aka the Quantum Scaling Initiative.

No one knows how much Elon Musk will influence such policies, but we do know he understands the impact of Google’s announcement and publicly praised Google’s CEO, Sundar Pichai, for the achievement. Pichai replied to Musk on X that: We should do a quantum cluster in space with Starship one day 🙂. (Note that China has had a quantum chip in space since 2016 to study secure communications and in October 2024 announced plans for several more in 2025. China to launch new quantum communications satellites in 2025, 10/08/24). Musk immediately replied affirmatively on X to Sundar and even upped the ante by saying:

That will probably happen. Any self-respecting civilization should at least reach Kardashev Type II. In my opinion, we are currently only at <5% of Type I. To get to ~30%, we would need to place solar panels in all desert or highly arid regions.

Unpacking the rest of Musk’s quote would require another article; let’s just say Kardashev has to do with technological progress and levels of energy production. A Type II civilization harnesses the full energy output of its star, for example through a device such as the Dyson sphere shown below.

Conclusion

I decided you might enjoy my delegation of the final words to not-yet-quantum-powered AIs from Google. Perhaps in another universe, you’d hear my own thoughts wrapping this up, but for now, count yourself lucky to be conscious in this one. My AI podcasters bring humor and insight, though they’re far from Godlike—so I still need to guide and verify them. What’s new, however, is the interactivity feature Google recently added to the podcasters. In this session, you’ll hear wacky versions of me interrupt near the end to ask questions, along with the AIs’ spontaneous responses. It’s fascinating to imagine what quantum-powered AIs might say or do in the future. Click here or on the graphic below to go to the EDRM podcast.

Ralph Losey Copyright 2024. All Rights Reserved.