Will AI Take My Job? OpenAI’s New Policy, Rising Cybersecurity Risks, and What Comes Next

April 17, 2026

Ralph Losey, April 17, 2026

Introduction: The Urgency of the Question

Will AI take my job?

A line of people in formal attire walking with somber expressions, led by a robot with a humanoid design, against a modern building backdrop.
Image by Ralph Losey using AI tools.

That question is no longer speculative. It is now front-page relevant, driven not only by rapid advances in AI, but by two recent events that reveal how quickly things are changing. On April 6, 2026, OpenAI released its Industrial Policy for the Intelligence Age, openly warning that the transition to superintelligence is already underway. Just days earlier, a human error at Anthropic briefly exposed the source code of one of the world’s most advanced AI systems. It was quickly copied and distributed before the mistake was corrected. It is reportedly now in the hands of criminal hackers and enemy states worldwide. Together, these developments make one thing clear: the future of work is arriving faster, and far less predictably, than most expected.

In my recent article, What People Want To Know About AI: Top 10 Curiosity Index, Gemini AIs and I analyzed global search patterns and online discussions to identify the public’s most urgent concerns. The number one question, “How does AI work?”, was addressed in my follow-up article, Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions, where we explained the technology across five levels, from a child’s guessing game to matrix algebra.

But the second question is different.

Will AI take my job—and what should I do about it?

This accounted for roughly 18% of all inquiries. And unlike the first question, it is not driven by curiosity. It is driven by anxiety, something I hear and feel in conversations about AI with all kinds of people.

A futuristic city street with a diverse crowd looking at holographic signs displaying various career options. A robot and a humanoid figure are in the foreground, interacting with technology amidst tall buildings and flying vehicles, highlighting innovation and technology in the workforce.
All images in this article are by Ralph Losey using Gemini AI tools.

This article focuses on that anxiety: economic security and the future of work. It also confronts the issue people increasingly want answered but rarely get: the timeline. When might AI reach a level capable of performing most cognitive work better than us? Because if that point is near, and recent signals suggest it is, then the implications are profound. Most knowledge-based jobs would be affected, and the resulting disruption to the economy and social order could be significant.

The Policy Response: OpenAI’s Industrial Blueprint


The urgency of this economic question is not limited to the public. It is also front and center for the corporations building the technology. On April 6, 2026, OpenAI released Industrial Policy for the Intelligence Age (“Policy Statement”) and it is likely that other leading AI companies will soon follow. This document moves beyond engineering into economic and social policy. It begins with a blunt premise: the transition to superintelligence is already underway and will reshape how organizations operate, how knowledge is created, and how people find meaning and opportunity.

The Policy Statement does not minimize the disruption ahead, or the speed at which it may arrive. It acknowledges that AI will disrupt jobs and reshape entire industries at a scale and pace unlike any prior technological shift. At the same time, OpenAI’s leadership emphasizes that the outcome is not predetermined. Whether this transformation leads to shared prosperity or to concentrated wealth and widespread displacement will depend on decisions made now, by governments, corporations, institutions, and individuals.

I encourage you to read the Policy Statement in full. It addresses far more than job security. My focus here is narrower: the economic implications. On pages 3 and 4, the Policy Statement explains:

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education. 

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.  …

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation. 

OpenAI released a companion video the same day as the Policy Statement, titled Sam Altman on Building the Future of AI (“Video”). At 26:08, the discussion turns directly to jobs. Joshua Achiam, OpenAI’s Chief Futurist, addresses the issue candidly:

On getting workers involved in AI, I actually, I kind of want to back up and just acknowledge an elephant in the room, which is that a lot of workers are concerned about AI; they’re worried about what AI means for them. They are not immediately excited at the prospect of figuring out, all right, how are we going to use AI in our workplace? They’re thinking, oh my gosh, is the AI going to replace me?

The public is no longer satisfied with abstract reassurances. People want timelines. They want industry-specific forecasts. They want to know whether their job will still exist in five years. Both the Policy Statement and the Video point in the same direction: highly capable AI systems are coming quite soon, much faster than most expected. 

Better get it right, Sam.

More Training Now for Job Security Tomorrow?

For many years my usual answer to the jobs question has been more training now. That answer may not cut it today for a majority of people, especially if AI advances too fast, too far. For instance, in Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (Oct. 2024) I opined:

AI will create entirely new jobs. For instance, for lawyers, new jobs pertaining to AI regulations are emerging. AI will also change existing jobs for the better. It is already replacing the most boring parts of our work, leaving us to focus on the more rewarding and human aspects. Moreover, it is true that no worker will be replaced by an AI; they will be replaced by a human who knows how to use AI.

Now I am not so sure, and neither is Sam Altman. The prospect of superintelligence is no longer a distant future. It is a planning horizon.

To address the question of human employment in a world of increasingly powerful AI, an issue well beyond my unaided ability to resolve, I turn to a Panel of AI Experts. For this exercise, I use OpenAI-based models that I have fine-tuned for analysis across multiple disciplines. They are not superintelligent, but they are highly capable and broadly informed. They created a five-persona expert panel to try to answer these questions. The only persona I required was the “devil’s advocate,” because I have found that type indispensable in brainstorming exercises like this. I did not specify any other character, not even the first one chosen, the “Centaur” Professional, although I must admit he sounds just like me.

An illustration depicting a central figure, representing 'Human in the Loop,' surrounded by various symbolic characters: a Centaur, a Devil's Advocate, a Sin-Eater, and the concept of 'Human Edge.' They are engaged in a digital environment filled with computers and data analytics visuals, emphasizing collaboration between humans and technology.
The Human in the Loop should remain in charge and verify AI work.

Voice 1: The “Centaur” Professional (The Hybrid Advocate)

Persona: The pragmatic professional who has fully integrated AI, but remains firmly in control. For background see my From Centaurs To Cyborgs: Our evolving relationship with generative AI (April 2024). Except for the citations that follow, all of the language from here to the Conclusion was written by the AIs, not me.

The Perspective: Let’s begin with a reality check. You’re more likely to lose your job to someone using AI than to AI itself. That single sentence cuts through most of the noise.

The fear of immediate, total automation misunderstands how work actually happens. We do not operate on smooth technological curves; we operate on what researchers call a “jagged frontier.” AI excels at certain tasks and fails at others, often unpredictably. This is why hybrid human-AI teams—Centaurs—consistently outperform both humans alone and AI alone. Recent research suggests improvements approaching 70% in certain knowledge-work domains. [See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my Navigating the AI Frontier (Oct. 2024).]

In law, AI can draft a brief in seconds. But it cannot sign it. It does not carry malpractice insurance. It does not stand before a judge. It cannot be sanctioned—or disbarred.

In medicine, AI may catch patterns a doctor misses. But patients do not sue algorithms—they sue physicians.

Sam Altman himself has described using AI to analyze medical data more effectively than his own doctor. Yet no serious observer concludes from this that doctors are obsolete. The conclusion is simpler:

Doctors who use AI will replace doctors who do not. The same applies across professions.

The future belongs to the Centaur—the professional who augments judgment with machine intelligence, but never abdicates responsibility.

Your job is not disappearing. The drudgery is.

As I explained in The Great AI Transition: From Tool to Teammate (June 2024), “the real shift is from doing the work to supervising it, humans move up the chain of responsibility, not out of the system.” [The AI hallucinated this article and cite, which obviously was supposed to refer to one of my articles. I am embarrassed to say that the title and quote sounded so plausible that I had to look it up to be sure. I then called the AI on this and it admitted to the hallucination and apologized.]

A professional in a suit stands in a courtroom with a circular holographic interface. The background features judicial elements like a gavel and a jury. The hologram includes icons for legal documents, checkmarks, and scales of justice, representing the integration of AI in legal practice.
The “Centaur” Professional (The Hybrid Advocate)

Voice 2: The “Sin-Eater” (AI Risk & Accountability Officer)

Persona: The human firewall—absorbing legal and ethical responsibility for AI outputs.

The Perspective: The Centaur is right—but incomplete. Because every gain in capability creates a parallel demand for accountability.

Wharton’s Ethan Mollick coined the term “Sin-Eater” to describe a new role: the human who vouches for AI-generated work and bears the consequences when it fails. That role is not theoretical—it is inevitable.

As AI systems scale from minutes to months-long projects, the need for verification, auditing, and compliance will explode. OpenAI’s own policy proposals emphasize the need for an “AI trust stack”—auditing regimes, validation systems, and human oversight at every layer.

And then there is cybersecurity. Our current software ecosystem is already vulnerable. AI will amplify both offense and defense—but offense often scales faster. Sam Altman has warned openly: AI will become extraordinarily good at identifying software vulnerabilities. That means bad actors will too.

This creates a massive new labor demand. Not for passive users—but for active defenders. We will need an army of human-AI teams to audit, test, and secure critical systems. This is not optional. It is civilizational maintenance.

A digital illustration depicting a corporate office setting with two figures: one in a dark cloak representing 'The Sin-Eater' and another in a business suit symbolizing the 'AI Risk & Accountability Officer'. Surrounding them are visual elements like an AI Bias Map, Accountability Audit, and concepts of risk and reward, emphasizing themes of AI accountability and mitigation.
The “Sin-Eater” (AI Risk & Accountability Officer)

Voice 3: The “Startup-in-a-Box” Entrepreneur

Persona: The solo builder with the leverage of a 100-person company.

The Perspective: Why is the conversation so focused on saving existing jobs? We are on the verge of the largest expansion of individual capability in human history.

Sam Altman has spoken repeatedly about a future where one person can build what once required an entire company. AI agents will handle coding, marketing, accounting, logistics—everything that currently creates friction.

The barriers to entry are collapsing.

Today, a brilliant nurse or mechanic might never start a business—not because of lack of skill, but because of administrative overhead. Tomorrow, that overhead disappears.

This is the rise of the micro-entrepreneurial economy.

Access to powerful AI tools—what some call a “Right to AI”—may become as foundational as access to electricity. With it, millions can create, compete, and innovate independently.

Yes, large bureaucracies may shrink. But they will be replaced by networks of highly capable individuals.

The question is not just “Will I lose my job?” It is also: “What could I build if friction disappeared?”

A young entrepreneur interacts with a digital AI dashboard featuring various tools such as market trend predictors, legal compliance AI, and revenue forecasting. The office background shows a modern workspace with colleagues engaged in discussions.
The “Startup-in-a-Box” Entrepreneur

Voice 4: The “Human Edge” Advocate

Persona: The guardian of empathy, presence, and human connection.

The Perspective: The technologists talk about efficiency. But humans do not live on efficiency alone.

There is a category of value AI cannot replicate: being human with another human.

Josh Achiam of OpenAI described visiting a fully automated café. It worked perfectly. It was fast. It was efficient. It was also—his word—unfulfilling. He missed the human interaction.

This is the “Human Edge.”

As AI absorbs administrative and analytical burdens, the value of human-centered work will rise dramatically. Teaching, nursing, therapy, caregiving—these are not residual categories. They are foundational. Today, these professions are underpaid and overburdened. That is a policy failure—not a market inevitability.

If AI delivers the productivity gains promised, then society must redirect that surplus toward elevating human-centered roles. The OpenAI policy proposals explicitly call for this—training pipelines, wage support, and institutional investment.

The future of work is not less human. It is more human than ever.

A professional woman holding a tablet in a modern office setting, surrounded by glowing icons representing compassion, justice, and ethics, with 'hA' and 'HUMAN CORE' highlighted in the center.
The “Human Edge” Advocate

Voice 5: The Contrarian “Devil’s Advocate”

Persona: Skeptic. Realist. Immune to hype.

The Perspective: Let’s stop romanticizing. Disruption is coming—and it will be uneven, painful, and fast. Telling displaced workers to “become entrepreneurs” or “pivot to caregiving” is not a plan. It is a slogan. We need structural reform.

If AI concentrates wealth in capital rather than labor, then the tax system must adapt. That means shifting the burden toward capital gains, corporate profits, and possibly automated labor.

We must also decouple basic security from employment. Healthcare, retirement, and income stability cannot depend entirely on whether your job survives automation.

And yes—let’s talk about infrastructure. We have a decaying physical world and a fragile digital one. Meanwhile, AI companies are generating enormous wealth. That wealth must be reinvested.

A modern New Deal—focused on infrastructure, cybersecurity, and energy—is not just desirable. It is necessary.

This is not anti-capitalist. It is pro-stability.

An elderly businessman in a suit stands in a modern office setting, holding a digital tablet with a serious expression. He is surrounded by glowing symbols, including scales of justice, hands, and a devilish figure labeled 'Devil's Advocate.' The scene is framed by a circular neon border with 'hA' and 'The Contrarian View' inscribed.
The Contrarian “Devil’s Advocate”

Conclusion: Responsibility at the Edge of Superintelligence

This panel reveals a truth that resists simplification: the future of work in the age of AI is difficult to predict. At this point it could go either way.

Personally, I am now more inclined to agree with the curmudgeon Contrarian than the mini-me Hybrid Advocate. That is a change for me. It reflects a growing concern that the risks may be advancing faster than the benefits. The real question is whether we, and our institutions, can adapt quickly enough.

The practical advice is straightforward. Begin serious AI training now. At the same time, explore work where the human edge still matters. You may find not only greater security, but greater satisfaction.

Above all, hold the new centers of power, economic and technological, to their obligations. Stand for both human rights and progress. We should be able to do both. In today’s world, we have no choice. It is too dangerous to stand still.

Superintelligence may drive the engine of the future. But I continue to insist that humanity must remain firmly and responsibly at the wheel.

A business presentation scene featuring five diverse characters at a panel discussion. Each character represents a different role: a stern older man, a confident woman, a professional in a suit, a figure in a dark cloak, and a relaxed entrepreneur. Behind them, large screens display icons related to AI, risk management, and funding, suggesting a technology-focused theme.

Ralph Losey Copyright 2026. All Rights Reserved.


Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions

March 29, 2026

Ralph Losey, March 29, 2026.

We are currently living through a “Gutenberg Moment,” but with a complex, digital twist: our new printing press is alive, probabilistic, and prone to “confident delusions.” While AI may be humanity’s most transformative invention, it remains an enigma to most.

For many legal professionals, the outputs of Generative AI feel like a digital seance—words appearing out of the ether with no visible logic. This “Black Box” is not just a technical curiosity; it is a professional liability. If you cannot at least partially understand and explain how your “assistant” reached a conclusion, you are effectively practicing in the dark. To move from being a passenger to a pilot, you must understand the mechanical soul of the machine and learn how to make it sing with the voices you command.

A futuristic scene depicting four individuals interacting with a multi-faceted display in a modern office environment, showcasing advanced technology and data visualization concepts.
Five Faces of the Black Box. My choices. My direction. Writing and images assisted by Gemini AI.

My recent article, What People Want To Know About AI: Top 10 Curiosity Index, revealed that the primary thing people want to know is how the machine actually works. They are asking the most difficult question in the field: How does AI “think” or make decisions?

This article answers that question by providing a structured understanding of Large Language Models (LLMs) across five levels of technical complexity:

  1. The Smart Child: The world’s best guessing game.
  2. The High School Graduate: Statistical probability at a global scale.
  3. The College Graduate: Mapping meaning in Latent Space.
  4. The Computer Scientist: The logic of the Transformer and Self-Attention.
  5. The Tech-Minded Legal Professional: Navigating probabilistic advocacy.
A visual representation of five individuals at different life stages: a young boy labeled 'The Smart Child,' a high school student labeled 'High Schooler,' a college graduate in a cap and gown, a computer scientist in a lab coat, and a lawyer in business attire labeled 'The Tech-Minded Lawyer.' Each character is surrounded by digital elements and diagrams that represent technology and education.

There is a meta-lesson here too that goes beyond the words on this page. Some of my favorite explanations of complex subjects emulate the fresh, clear speech of fifth graders. You will often find deep creativity when AI models parrot their language.

I chose five kinds of speech to describe how AI works. There are hundreds more that I could have picked. I also could have asked for explanations that use story or humor, much like Abraham Lincoln liked to do. It is fun to learn to tell AI what to do so that you can better communicate. It empowers a level of creativity never before possible. Maybe next time I will use comedy or poetry. For now, let’s peel back the curtain using these five.

1. The Smart Child Level: The World’s Best Guessing Game

Definition: Generative AI is like a magic “Fill-in-the-Blank” machine that has played the game trillions of times with almost every book ever written.

Imagine you are playing a game. If I say, “The peanut butter and…”, you immediately think of the word “jelly.” You don’t need to look at a jar of jelly to know that word fits. You’ve heard those words together so many times that your brain just knows they belong together.

An AI is a computer that has “listened” to almost everyone in the world talk and “read” almost every story ever told. It doesn’t “know” what a sandwich is, and it doesn’t have a stomach that feels hungry. It simply knows that in the history of human writing, the word “jelly” follows “peanut butter” more than almost any other word.

But it’s even smarter than that. If you say, “I am at the library and I am reading a…”, the AI knows that “book” is a much better guess than “sandwich”. It looks at all the words you give it—the “clues”—to narrow down the billions of possibilities into one likely answer. It makes decisions by picking the word that is most likely to come next to complete a pattern that makes sense to us. It isn’t “thinking” about the story; it’s just very, very good at predicting the next piece of the puzzle.

A robotic hand holds a piece of jelly on a keyboard with the words 'SUN PEANUT BUTTER AND.' set against a backdrop of bookshelves.

2. The High School Level: Statistical Probability at Global Scale

Definition: AI is a Prediction Engine. It uses “Big Data” to calculate the statistical likelihood of the next piece of information.

Most of us use the “Autofill” feature on our smartphones every day. As you type a text, the phone suggests the next likely word based on your past habits. If you often text “I’m on my way,” the phone learns that “way” usually follows “my.” Generative AI—specifically Large Language Models—is essentially Autofill scaled to include the vast majority of digitized human knowledge.

During its “training” phase, the model does not “memorize” facts like a traditional database. If you ask it for the date of the Magna Carta, it isn’t looking it up in a digital encyclopedia. Instead, it has learned through billions of examples that the words “Magna Carta” and “1215” have a very high statistical correlation.

This explains why AI can sometimes be “confidently wrong.” It isn’t “lying” in the human sense; it is simply following a statistical path that leads to a mistake. If the data it was trained on contains a common error, the AI will repeat that error because, in its mathematical world, that error is the “most likely” next word. It recognizes the “shape” of human thought without actually having a human mind.
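
For readers who like to see things run, here is a minimal sketch of that idea, assuming a toy training corpus of just a few sentences. Real models learn neural patterns over trillions of words rather than raw counts, but the statistical heart of “predict the next word” is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- real models train on trillions of words.
corpus = (
    "peanut butter and jelly . i am on my way . "
    "i am reading a book . peanut butter and jelly ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("and"))  # ('jelly', 1.0) -- "jelly" always followed "and" here
print(predict_next("am"))   # ('on', 0.5) -- "on" and "reading" each followed "am" once
```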

A person holding a smartphone displaying a messaging app titled 'Global AI Team', with a conversation about scaling processing. The background features a digital world map with binary code overlay.
High School Graduate Level Speech Using Statistical Probabilities.

3. The College Graduate Level: Mapping the Latent Space

Definition: AI organizes information using Vector Embeddings, which convert words into numerical coordinates on a massive, multi-dimensional map called Latent Space.

To understand how AI moves beyond mere word-matching, we have to look at how it “maps” meaning. In a physical library, books are organized by a one-dimensional system (the spine label) or a two-dimensional one (the shelf). AI organizes information in a “map” that has thousands of dimensions.

  • Vectoring (The Coordinate System): Every word or concept is assigned a “Coordinate”—a long string of numbers. For example, the word “Stealing” is mathematically plotted very close to “Larceny” but far away from “Charity”.
  • Conceptual Proximity: Think of this as the “Relativity” of language. If you ask the AI about “theft,” it doesn’t look for that specific word. It navigates to those coordinates in Latent Space and finds all the “neighboring” concepts like “property,” “intent,” and “deprivation.”
  • Vector Arithmetic: Researchers discovered that you can actually perform “logic” using these numbers. A famous example is: King – Man + Woman = Queen. The model “understands” the relationship between these concepts because the mathematical distance between “King” and “Man” is the same as the distance between “Queen” and “Woman.”
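
A minimal sketch of that vector arithmetic, using made-up three-dimensional coordinates. The numbers here are purely illustrative; real embeddings have thousands of learned dimensions.

```python
import numpy as np

# Hypothetical 3-D embeddings; real models learn thousands of dimensions.
vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def cosine(a, b):
    """Similarity of direction between two vectors (1.0 = identical)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# King - Man + Woman lands closest to Queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max(vec, key=lambda w: cosine(vec[w], target))
print(best)  # queen
```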

When you provide a prompt, the AI identifies the coordinates of your request. It then “walks” through the nearby clusters of meaning to synthesize an answer. The “Black Box” is the result of the sheer scale of this map. With thousands of dimensions and hundreds of billions of learned connections, the path the AI takes is so complex that no human can trace the logic of a single output back to a single “rule.”

A visual representation of legal terms and criminal acts, featuring nodes and connections depicting concepts like larceny, fraud, contract law, and violent crimes.
College Graduate Level Speech Mapping Latent Space.

4. The Computer Scientist Level: The Decoder-Only Transformer

Definition: Generative AI is a system powered by neural network architectures—most notably the Decoder-only Transformer—that is specifically tuned to generate the next piece of information by mathematically looking back at everything that came before it. Rather than relying on rigid rules, these models evaluate entire inputs using a mathematical weighting system called Self-Attention to determine the contextual relationship between every element.

To achieve this generative capability, the architecture relies on several complex mathematical mechanisms:

A. The “Query, Key, and Value” System: To decide how much “weight” to give a word, the AI creates three numerical identities for every token. The Query represents what the token is looking for (like a pronoun searching for a subject), the Key represents what the token offers (like a subject offering its identity), and the Value represents the token’s actual semantic meaning.

A digital illustration depicting a data processing concept with labeled elements: Query, Token, Key, and Value, featuring glowing lines and binary code in a dark background.
AI system deciding how much Weight to give a word.

B. The Logic of Self-Attention: The AI establishes context by comparing the Query of one word against the Keys of all other words in the sequence. Imagine a judge sitting through a long trial. When a witness says the word ‘It,’ the judge immediately looks back at previous exhibits to see what ‘It’ refers to. The AI does the same thing mathematically. For example, in the sentence “The court sanctioned the attorney because his motion was meritless,” the Query for “his” finds a high match with the Key for “attorney,” allowing the model to assign a high Attention Weight to “attorney” so the word “his” inherits the correct context.
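
For the mathematically curious, here is a minimal numpy sketch of that Query-Key comparison, known as scaled dot-product attention. The tiny random matrices are stand-ins for the learned weights; real models run this over thousands of tokens at once.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8     # five tokens and small vectors, just for illustration
x = rng.normal(size=(seq_len, d_model))            # stand-in token embeddings

# Learned projections give every token a Query, Key, and Value identity.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Compare each Query with every Key, scale, then softmax into Attention Weights.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
# (A decoder-only model also masks future positions so each token looks only back.)

# Each token's new representation is a weighted blend of all the Values.
output = weights @ V
print(weights.round(2))   # row i shows how strongly token i attends to each token
```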

A futuristic courtroom scene featuring a humanoid robot analyzing data from a holographic interface while a woman presents evidence at the witness stand, with an audience observing.
Futuristic courtroom where a cyborg judge compares the Query of one word to the Keys of all others to build context.

C. Multi-Head Attention (Parallel Deliberation): The model doesn’t just evaluate the text once; it runs these calculations dozens of times in parallel. Different “Heads” focus on different aspects simultaneously—one might evaluate syntax and grammar, another focuses on technical legal definitions, and a third assesses the overall tone or sentiment.

A futuristic illustration of a brain divided into three sections labeled 'Left', 'Middle', and 'Right'. The 'Left' side features symbols related to grammar and linguistic algorithms. The 'Middle' section displays scales symbolizing law and fairness. The 'Right' side shows diverse facial expressions, representing emotions and mental processing.
AI brain split into three parallel sections working simultaneously. Left side scans floating grammar and punctuation. Middle analyzes justice definitions. Right side evaluates holographic floating masks of human emotions.

D. The Decision Layer (Feed-Forward Networks): After attention weights are settled, the data moves into a decision-making layer consisting of billions of Weights (connection strengths) and Biases (baseline leanings). These act as the model’s “institutional knowledge,” which was grown during training to satisfy the objective of predicting the next token.
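
A sketch of one such feed-forward layer, again at toy scale. The W and b arrays below stand in for the Weights and Biases that training adjusts; their values here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_hidden = 8, 32   # real models use dimensions in the thousands

# Illustrative Weights (connection strengths) and Biases (baseline leanings).
W1, b1 = rng.normal(size=(d_model, d_hidden)), np.zeros(d_hidden)
W2, b2 = rng.normal(size=(d_hidden, d_model)), np.zeros(d_model)

def feed_forward(token_vector):
    """Expand the token, apply a nonlinearity (ReLU), then project back down."""
    hidden = np.maximum(0, token_vector @ W1 + b1)
    return hidden @ W2 + b2

print(feed_forward(rng.normal(size=d_model)).shape)  # (8,) -- same shape out
```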

Illustration of an AI feed-forward network with labeled layers, neurons, weights, and data flow, depicted through vibrant interconnected lines and nodes.
FFN where thickness of neural connections represents weights.

E. The Softmax Verdict: Finally, the model uses a Softmax function to produce a probability list of every possible word in its vocabulary. It calculates the exact odds—for example, assigning “Court” an 85% probability and “Sandwich” a 0.01% probability—and then mathematically samples the winner to generate the next word. Since the Softmax Verdict generates words based on statistical odds rather than verified facts, it is crucial for lawyers to verify the output, which we will also discuss in more detail later in this article.
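
Here is a minimal sketch of that final verdict, assuming a four-word vocabulary and made-up scores:

```python
import numpy as np

# Made-up raw scores ("logits") for the next word, produced by the network.
vocab  = ["court", "judge", "motion", "sandwich"]
logits = np.array([6.0, 3.5, 3.0, -3.0])

# Softmax converts raw scores into probabilities that sum to 1.
probs = np.exp(logits) / np.exp(logits).sum()
for word, p in zip(vocab, probs):
    print(f"{word:>9}: {p:.2%}")   # court ~88%, sandwich ~0.01%

# The model then samples a winner from these odds to emit the next word.
print(np.random.default_rng(2).choice(vocab, p=probs))
```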

Digital display of court-related statistics showing a confidence level of 85% with various legal terms and corresponding percentages listed alongside.
Softmax Verdict predicts “Court” to be the most likely next word.

5. The Tech-Minded Legal Professional Level: Probabilistic Advocacy

Definition: For the legal professional, Generative AI is not a database, but a Probabilistic Inference Engine. It does not “find” data in the traditional sense; it infers the most likely response based on the conceptual coordinates of your request and the mathematical “gravity” of the language it was trained on.

A. From Search to Inference

For fifty years, the legal industry’s relationship with technology was deterministic. Traditional legal databases use rigid logic gates: Does Document A contain Word X AND Word Y? If the words are present, it is a ‘hit’; if not, it is ignored, functioning as a simple ‘On/Off’ switch. The Transformer changes this completely. It is not a search database, but a Probabilistic Inference Engine. When you ask it to ‘analyze a witness’s credibility,’ it doesn’t just look for the word ‘credibility’; it infers a conclusion by weighing the context of every word in the record.
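
The difference is easy to see in code. In this hedged sketch, the first function is the old deterministic logic gate; the second ranks by conceptual proximity, with a fake embed() function standing in for a real embedding model, which is the part that actually encodes meaning:

```python
import hashlib
import numpy as np

def keyword_hit(document: str, *terms: str) -> bool:
    """Old world: a deterministic AND gate -- every term present, or no hit."""
    return all(term.lower() in document.lower() for term in terms)

def embed(text: str) -> np.ndarray:
    """Fake stand-in for a real embedding model mapping text into Latent Space."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).normal(size=16)

def relevance(document: str, query: str) -> float:
    """New world: rank by conceptual proximity (cosine similarity), not exact words."""
    d, q = embed(document), embed(query)
    return float(d @ q / (np.linalg.norm(d) * np.linalg.norm(q)))

doc = "The payment was an arrangement to influence the official."
print(keyword_hit(doc, "bribe"))         # False: the word "bribe" never appears
print(relevance(doc, "bribery scheme"))  # with a real embed(), this scores high
```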

An image depicting a metallic switch labeled 'OFF' for 'Deterministic Keyword Search' alongside a graphic illustrating 'Probabilistic Inference (Intent)' with clusters of keywords such as 'Payment', 'Influence', 'Bribe', and 'Arrangement' indicating varying probability connections.
Legal Tech Tools and Search Based on AI Probabilistic Analysis.

B. Navigating the Latent Space

To perform this analysis, the model navigates the Latent Space coordinates of your query. It uses the Self-Attention weights discussed in Level 4 to “infer” a conclusion by weighing the context of every word in the record. It identifies the “Intent” and “Sentiment” within millions of documents in a second. Such tasks were previously impossible for deterministic software.

C. The Weight of the Legal Oath

While the machine provides the “Magic Guesses” of a child and the “Neural Weights” of a scientist, it lacks the professional standing to be an advocate.

  • The Black Box as an Invitation: The “Black Box” is not an excuse for ignorance; it is an invitation to a higher level of legal practice.
  • The Human Validator: We use the machine to find the “needle” (the insight), but we use our human judgment to prove it is evidence and not a hallucination.
  • The Ultimate Weight: In this new era, the most important “Weight” in the entire system is the one held by the human professional.
A digital representation of a scale of justice balancing a black box labeled 'BLACK BOX' with data elements like 'EVIDENCE DATA', 'LOGIC MAP', and 'NEURAL WEIGHTS' on one side, and a gavel representing 'HUMAN JUDGMENT' on the other side. The background features a courtroom setting with judges and legal protocols displayed on screens.
Heavy Weight of the Legal Oath.

6. The “Growing, not Building” Concept: The Genesis of the Black Box

To understand why even the creators of these models cannot always explain a specific output, we have to understand that AI is trained into complexity, rather than just hard-coded with logic.

  • The Old World of Software: In the past, we built programs based on rigid, transparent logic. If the code said “If X, then Y,” but it did something else, it was a “bug” to be corrected within a deterministic machine.
  • The New World of Generative AI: This technology is created through Self-Supervised Learning. We don’t provide the model with logic blueprints; instead, we provide an ocean of data and a single objective: “Predict the next piece of information.”
  • The “Growth” of Intelligence: The model then “grows” its own internal pathways—billions of connections known as Weights and Biases—to satisfy that objective.

Think of it like a massive vine growing through a lattice. As engineers, we provide the lattice (the Transformer architecture), but the vine (the intelligence) grows itself. By the time training is finished, there are hundreds of billions of connections. There is no “Master Code” for a human to read or audit. The “Black Box” is not a wall; it is a forest so dense that no human can map every leaf.

In the era of AI Entanglement, we must judge the AI by its results (the fruit) rather than its process (the roots).

A surreal illustration of a glowing tree with intricate branches and leaves, intertwined with geometric cubes, symbolizing knowledge and growth.

7. The “Context Window” as a Trial Record

At the computer scientist level, we discussed the Transformer’s ability to look at a whole document simultaneously. In practice, this capability is governed by the Context Window. In AI, the Context Window is the specific amount of data the model can “Attend” to at any one time. When you upload a 100-page contract, the AI holds that text in a temporary “workspace.”

The Judicial Analogy: Think of the Context Window as a judge’s Active Memory during a hearing.

The Risk of Loss: If a trial lasts for ten days, but the judge can only remember the last two hours of testimony, they will lose the thread of the case.

Hallucination via Omission: They might “hallucinate” a fact not because they are lying, but because they have lost the beginning of the record.

Legal Strategy: For the tech-minded lawyer, you must manage the “Active Record” of your conversation to ensure the model maintains access to critical early facts. In a similar way, a judge relies on a court reporter who makes a transcript of the record to ensure nothing is lost to the passage of time.
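
A minimal sketch of that “active record” management, using a crude word count as a stand-in for a real tokenizer: pin the critical early facts so they always survive, then keep as many recent turns as the budget allows.

```python
def build_context(pinned_facts, conversation, budget=1000):
    """Keep pinned facts always; fill the rest with the most recent turns that fit."""
    size = lambda text: len(text.split())      # crude stand-in for a real tokenizer
    used = sum(size(fact) for fact in pinned_facts)
    kept = []
    for turn in reversed(conversation):        # walk backwards from the newest turn
        if used + size(turn) > budget:
            break                              # the oldest turns fall off the record
        kept.append(turn)
        used += size(turn)
    return pinned_facts + list(reversed(kept))

# Usage: the key early fact survives even after 500 turns of conversation.
context = build_context(
    pinned_facts=["Fact: the contract was signed on June 1."],
    conversation=[f"turn {i}: more testimony here" for i in range(500)],
    budget=60,
)
print(context[0])   # the pinned fact
print(context[-1])  # the most recent turn
```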

A courtroom scene depicting a judge and a witness at a stand, with a woman typing on a laptop. Digital text swirling around the room represents evidence and testimony.

8. Anatomy of a Hallucination

A “Case Study” of a hallucination through the lens of Latent Space will help us to understand them.

Suppose you ask an AI for a case supporting a specific point of Florida law. The AI navigates to the “Neighborhood” of Florida Law and the “Street” of that specific legal issue. It sees a cluster of real cases—Smith v. Jones and Doe v. Roe.

Because it is a Probabilistic Inference Engine, the AI doesn’t naturally “check” a verified list of real cases. Instead, it follows the mathematical pattern of how Florida cases are typically named and cited.

The AI then “generates” Brown v. State—a case that sounds perfectly correct because its coordinates are exactly where a real case should be based on the surrounding patterns. It has followed the statistical “gravity” of the neighborhood, but it has drifted into a sequence of words that is factually untethered from reality.

It is a perfectly logical mathematical guess that happens to be a factual lie. This is the primary reason why we must cross-examine our assistants. We use our human judgment to prove the output is a needle of truth and not a hallucination of the “Black Box.” See Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations (Dec. 2025).
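
Part of that cross-examination can even be automated. A hedged sketch, assuming you can check candidates against a verified source of real citations; in practice that means a query to Westlaw, Lexis, or the court docket, never a static list.

```python
# Hypothetical verified citation set -- in real practice, query a trusted legal
# database rather than hard-coding names.
VERIFIED_CASES = {"Smith v. Jones", "Doe v. Roe"}

def flag_unverified(ai_cited_cases):
    """Return the citations that a human must verify before filing."""
    return [case for case in ai_cited_cases if case not in VERIFIED_CASES]

draft_citations = ["Smith v. Jones", "Brown v. State"]  # the second is invented
for case in flag_unverified(draft_citations):
    print(f"WARNING: could not verify {case!r} -- check the reporter yourself.")
```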

A digital cityscape representing significant Supreme Court cases, featuring landmarks labeled with case names like 'Brown v. State,' 'Roe v. Wade,' and 'Miranda v. Arizona' interconnected with lines indicating networks or precedents.
Latent Space Can Generate AI Hallucinations.

Conclusion: A Symphony of Five Understandings

We have traveled from the magic toy box to the multi-dimensional math of the Transformer. To close, let’s look at the “Black Box” one last time through all five lenses.

The Smart Child sees a magic friend who is the best guesser in the world. To the child, the lesson is simple: the magic friend is fun, but sometimes they make up stories. Enjoy the story, but don’t bet your lunch money on it.

The High Schooler sees a massive “Autocomplete” engine. They understand that the AI is just a mirror of everything we’ve ever written. The lesson: the mirror is only as good as the light you shine into it.

The College Graduate sees the “Latent Space”—a map of human culture turned into math. They realize that meaning is not found in isolated words, but in the mathematical distance and relationship between them.

The Computer Scientist sees the Decoder-only Transformer—a masterpiece of matrix multiplication and Self-Attention weights. They know that “thinking” is just the sound of billions of Query and Key vectors finding their mathematical match.

The Tech-Minded Legal Professional—the “Human in the Loop”—sees a revolution. We see a tool that can navigate the “Intent” and “Sentiment” of millions of documents in a heartbeat using Probabilistic Inference. But we also see the weight of our professional oath.

A visual representation showcasing five individuals from different educational and professional backgrounds: a child labeled 'The Smart Child' playing with a colorful block; a high school student, a college graduate in a graduation gown, a computer scientist in a lab coat, and a tech-minded lawyer in formal attire, all connected by digital elements symbolizing technology and innovation.
Five Faces of the Black Box. My choices. My direction. Writing and images assisted by Gemini AI.

Our New Role: From Searcher to Validator. Electronic discovery professionals are no longer just “Searchers” of data; we are the Validators of a new, probabilistic reality.

We are the ones who must take the “Magic Guesses” of the child, the “Statistical Patterns” of the high schooler, the “Latent Map” of the college graduate, and the “Neural Weights” of the scientist, and forge them into Evidence.

The “Black Box” is not an excuse for ignorance; it is an invitation to a higher level of practice. We use the machine to find the needle, but we use our human judgment to prove it is a needle and not a hallucination.

In the era of AI Entanglement, the most important “Weight” in the entire system is the human in charge: You.

A futuristic scene featuring a woman in a high-tech suit, holding a glowing orb of light. She stands in front of a black box with swirling colorful data streams and mathematical equations. In the background, scientists and a judge observe. Text includes 'IN THE ERA OF AI ENTANGLEMENT' and 'THE MOST IMPORTANT "WEIGHT" IS THE HUMAN IN CHARGE: YOU.'
Assume your place in the AI command chair.

Ralph Losey Copyright 2026 — All Rights Reserved


What People Want To Know About AI: Top 10 Curiosity Index (with interactive graphic)

March 18, 2026

Ralph Losey, March 18, 2026

A digital illustration of a brain with gears, surrounded by various topics related to artificial intelligence, including job security, data privacy, misinformation, and environmental impact.
Top Ten Information Needs about AI per Gemini research. All images by Ralph Losey using Nano Banana 2, except for the graphs by Gemini Pro.

Gemini 3.1 Pro Surprises: Synthesizing the Top 10 AI Questions of 2024–2026

I was recently struck by a capability in the pro version of Gemini that I hadn’t encountered before. Quite by accident, I discovered the model’s ability to do more than just scour the web for data. It can synthesize thousands of disparate data points, from workshop reports to tangential polls, to provide a coherent answer to a complex “meta” question.

My inquiry was specific: What do people actually want to know about AI? I wasn’t interested in usage statistics, but in conceptual gaps. When Gemini (and my own “trust but verify” follow-up) found no single poll on point, the AI pivoted. It inferred a top-ten ranking by analyzing the collective “curiosity” found across the web. The result is what Gemini called, perhaps with a smile, the “Top 10 Curiosity Index,” a list of the concepts that people are most “desperate to understand.”

A diverse group of professionals engaged in a brainstorming session in a modern office. Some are using smartphones while sitting around tables with laptops, and others are writing on whiteboards. The environment features brick walls and large windows, creating a collaborative atmosphere focused on AI topics.
Weekend Law Firm Study to Satisfy Top Ten Information Needs about AI.

From Synthesis to Software: Gemini’s Surprising Coding Capabilities

Beyond the data synthesis, Gemini 3.1 Pro surprised me by generating several hundred lines of custom code—largely unprompted—to facilitate sharing these findings. While I was aware of the Pro version’s coding reputation, I was unprepared for this level of sophistication. The AI didn’t just present the information; it built the visual infrastructure to host it, producing complex HTML and JavaScript in a matter of seconds.

The centerpiece of this technical feat is an interactive graph that allows readers to engage with the data directly. Gemini didn’t stop at the code; it acted as a technical consultant, guiding me through the WordPress installation and handling the inevitable troubleshooting with ease. The result is a level of user interactivity on my blog that I previously thought would require a dedicated developer.

Click on the ten-bar graph to see analysis of each question.

[Interactive graphic: “AI Information Demand 2024–2026,” a clickable ranking of the Top 10 Curiosity Index by relative popularity of AI queries, with a deep-dive analysis panel for each question and a thematic breakdown of demand.]

Static Form Presentation of the Top 10 Curiosity Index

A ranking of what the public is actively seeking to understand about AI, synthesized by Gemini 3.1 Pro from global search trends, forum discussions, and media inquiries. The words in this section are by Gemini.

1. How AI Thinks – 25%

How does AI actually ‘think’ or make decisions?
The ‘Black Box’ Enigma: Users are unsettled by outputs they cannot trace back to a logical process. They want analogies that bridge the gap between human reasoning and neural networks.

2. Job Security – 18%

Will AI take my job, and how do I prepare?
Economic Anxiety: The public is demanding highly practical, industry-specific explanations. People want to know the exact timeline of automation for their specific roles.

3. Data Privacy – 15%

How is my personal data being used to train AI?
The Privacy Paradox: Users want explained, in plain terms, whether their emails, private photos, or voice recordings are being scraped to train models.

4. Consciousness – 12%

Can AI develop consciousness or true emotions?
The Sentience Question: A large segment of the public is seeking a philosophical and technical breakdown of consciousness, questioning the line between simulated and real empathy.

5. Terminology – 9%

What is the difference between LLMs, Machine Learning, and GenAI?
Clearing the Jargon Fog: People want a foundational glossary that explains these terms without using more jargon.

6. Misinformation – 7%

How can we prevent AI from spreading misinformation?
Algorithmic Truth: The public wants to know how developers are addressing AI ‘hallucinations’ and what tools exist to verify synthetic text.

7. Copyright & IP – 5%

Who owns the copyright to AI-generated content?
IP Crisis: Creators and users are demanding legal clarity. If an AI trains on my art, am I owed royalties?

8. Environmental Impact – 4%

How much energy does AI consume?
Carbon Footprint: A growing niche is asking for the hidden environmental costs of cloud-based AI to be explained.

9. Prompting Skills – 3%

How do I effectively prompt an AI?
The ‘How-To’: Users want to understand the linguistic rules the AI prefers to generate better outcomes.

10. The Singularity – 2%

What happens when AI gets smarter than humans?
Superintelligence: The public wants to know if experts have a plan for containment if models surpass human cognitive abilities.

My Take: Bridging the Gap Between Vision and Inquiry

What struck me most about this data was the surprisingly low ranking of The Singularity and AGI. While my focus often rests on the profound, long-term implications of superintelligence, the index reveals a public currently focused on the immediate and negative. It is a classic case of the “here and now” overshadowing the “what’s next.”

A surreal illustration of a humanoid figure with tree-like branches and a galaxy swirling above, representing a fusion of nature and technology. The figure's chest is open, emitting light and data, while one hand holds a glowing pyramid with a question mark and leaves.
Image of a Singularity interpretation. See Can AI Really Save the Future?

Similarly, seeing Prompting Skills in second-to-last place, at only 3% of information needs, is disappointing. To me, this remains the critical lever for success with AI. This data doesn’t change my mission, but it certainly highlights the conceptual hurdles.

The others on the list and rankings were pretty much what I expected. They correspond to the types of questions I usually get when lecturing on AI.

Thematic Breakdown of the Information Demands

When we aggregate the specific questions, distinct macro-themes emerge. The following is Gemini’s categorization of the top 10 queries into four main domains. This is designed to show where the center of gravity lies in public consciousness.

What Dominates the Conversation?

1. Technical Mechanics:
Demystifying the ‘magic.’ People want the underlying architecture explained.

2. Socio-Economic:
Fear and planning regarding real-world consequences on careers and laws.

3. Ethics & Trust:
Concerns regarding data harvesting and the spread of unchecked bias.

4. Existential:
Philosophical inquiries regarding consciousness and humanity’s place.

A circular diagram divided into four segments representing different categories: Technical Mechanics (green), Socio-Economic (red), Ethics & Trust (blue), and Existential (gold).

Analysis of the Four Information Need Themes

The data confirms that people primarily want to understand the “how” of AI. This isn’t surprising, given that the major AI labs have been intentionally opaque. However, the landscape is shifting rapidly; as I noted in Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand (Feb. 2025), the competition is finally forcing a level of transparency. Yet, even when pressed, top scientists admit they do not fully understand the internal mechanics of these models. This sentiment was echoed in Dario Amodei Warns of the Danger of Black Box AI that No One Understands (May 2025).

A digital illustration representing artificial intelligence, with a central brain surrounded by icons symbolizing various concepts like security, collaboration, education, creativity, law, sustainability, and data analysis.
How Does AI Work? Better learn the basics.

The second “hot zone” is socio-economic. The anxiety here is well-founded. The financial and environmental costs are staggering—the power consumption of modern AI is almost incomprehensible when you consider that the human brain operates on a mere 20 watts. As reported in AI Is Eating Data Center Power Demand—and It’s Only Getting Worse (May 2025, Wired), the strain on our infrastructure is only getting worse.

An infographic illustrating the impact of AI on various sectors, with icons representing finance, law, healthcare, technology, and automation around a central globe.
Many Socio-Economic Concerns Are Well Founded.

Regarding jobs, history shows a pattern: initial displacement followed by a surge in new roles. While many remain skeptical, I lean toward the optimism of voices like Wharton Professor Ethan Mollick, who predicts a new era of human-centric roles, including “Sin-Eaters” tasked with managing AI errors. Demonstration by analysis of an article predicting new jobs created by AI (July 2025). We must also acknowledge a historical first: this is the first revolutionary technology made freely available to the masses from day one, not just an elite few. This accessibility should make retraining easier, but whether the displaced will successfully pivot remains to be seen.

An imaginative scene depicting creativity and technology, featuring people engaged in various activities such as art, music, education, and agriculture. Key elements include a woman holding a key, a child being guided, individuals painting and working on laptops, and a drone hovering over a colorful field.
New Types of Meaningful Work Emerge.

Finally, “Ethics and Trust” holds a strong third place. In the legal world, “Trust but Verify” has become the mantra. Whether it’s identifying the Seven Cardinal Dangers or learning to Cross-Examine Your AI to cure hallucinations, these are the questions that dominate my lectures. With AI companies largely self-regulating, the burden of verification remains firmly on the user.

A digital illustration depicting various concepts related to artificial intelligence, data security, and analysis. Central to the image is a globe surrounded by icons, including robotic hands, diverse people, data analytics, and a lockbox symbolizing security, with arrows connecting these elements.
There is much more to AI Ethics and Trust than Verification. Lawyers are needed here.

As for the “Existential” category—the lowest ranked—the fear of AI consciousness is certainly fun to talk about, but I find it largely unfounded. See From Ships to Silicon: Personhood and Evidence in the Age of AI. The real existential threat is not a sentient machine, but human users and over-delegation to AI, including critical “kill decisions.” As Jensen Huang (NVIDIA) aptly put it, we must keep a human in the loop to prevent AI from self-evolving “out in the wild” without oversight. Jensen Huang’s Life and Company – NVIDIA (Dec. 2023).

A digital illustration showing a globe with interconnected graphics related to artificial intelligence (AI), including a brain, a heart on a scale, various professionals, and technological elements.
Identifying and preventing real existential risks.

Conclusion

The “Curiosity Index” provides a rare look into the collective mind of a society in transition. It shows us that while the experts are looking at the horizon, the public is still trying to find its footing on the ground. My goal remains unchanged: to lead you through the “jargon fog” and past the conceptual hurdles of the present so that you are prepared for the “what’s next.” Whether you are a lawyer verifying an AI-generated brief or a professional worried about your role, remember that the most powerful tool in this new era isn’t the AI itself, it’s your ability to ask the right questions and maintain your place as the “human in the loop.”

A hand placing a transparent pyramid with a question mark on a maze-like structure, set against a scenic background of rolling hills and a golden sky.
Keeping humans in the loop. That means you!

Ralph Losey Copyright 2026 — All Rights Reserved


Something Big Is Happening — But Not What You Think

February 23, 2026

Ralph Losey, February 23, 2026

A Response to Matt Shumer’s Viral Essay on AI Acceleration

A high-speed train in motion on railway tracks during sunset, creating a dynamic sense of speed with blurred background.
Acceleration without control is dangerous. Acceleration with judgment is transformative.

I. Something big is happening. On that much Matt Shumer and I agree.

The essay Something Big Is Happening was published on Matt Shumer’s personal blog on February 9, 2026. After he shared it widely on X, it drew more than 80 million views within days, rapidly becoming a focal point in public debates about AI and the future of work. Few essays about artificial intelligence have traveled that far, that fast.

Shumer’s central claim is straightforward: AI capability is accelerating so rapidly that large-scale displacement of white-collar work is imminent, perhaps within one to five years. He argues that recursive improvement loops are already underway, that benchmark curves are steepening, and that most people are underestimating what is about to happen.

It is a powerful narrative. It is also incomplete, and that matters more than its popularity suggests. So take a breath.

Before I explain why, a brief word of context. I have practiced law for over 45 years and have worked hands-on with AI in litigation for more than 14. I was involved in the first case approving predictive coding for e-discovery in federal court. Since 2023, I have written extensively about generative AI, hybrid human-machine workflows, and the emerging governance challenges of AI and quantum convergence. I am not skeptical of AI — I use it daily, teach it, and advocate its responsible adoption.

Acceleration is real. But acceleration demands adults – a calm, measured approach. That is why I take Shumer seriously, even as I disagree with his conclusions.

II. What Shumer Gets Right (and What He Exaggerates)

Let us begin where we agree. AI models have improved rapidly. Coding autonomy has advanced in ways that would have seemed implausible just a few years ago. AI systems now assist meaningfully in debugging, evaluation, and even aspects of their own development pipelines. Benchmarks measuring the duration of tasks that models can complete without human intervention have indeed increased.

There is rapid acceleration, but it is not a smooth, universal climb. It is jagged.

A. The Bar Exam Myth: Top 10% or Bottom 15%?

Shumer states: “By 2023, [AI] could pass the bar exam.” This has become a foundational myth in the AI-acceleration narrative. However, a rigorous study by Eric Martinez put the vendor’s claim to the test. Re-evaluating GPT-4’s bar exam performance. Artif Intell Law (2024) (presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s Uniform Bar Exam percentile are overinflated). Martinez found that when you limit the sample to those who actually passed the bar (qualified attorneys), the model’s percentile drops off a cliff. On the essay and performance test portions (MEE + MPT), GPT-4 scored in the ~15th percentile. In other words, bottom 15% among those who passed.
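To see why the comparison group matters so much, here is a minimal sketch in Python, using invented score pools rather than the Martinez study’s actual data. The same raw score can sit near the top of a pool that includes everyone who sat for the exam, yet near the bottom of a pool restricted to those who passed.

# Illustrative only: hypothetical score pools, not data from the Martinez study.
def percentile(score, pool):
    """Percent of the pool scoring at or below the given score."""
    return 100 * sum(s <= score for s in pool) / len(pool)

all_takers = [95, 100, 105, 110, 115, 120, 125, 130, 135, 140]   # everyone who sat
passers    = [120, 125, 130, 135, 140, 142, 145, 148, 150, 155]  # only those who passed

model_score = 122  # hypothetical AI essay score
print(percentile(model_score, all_takers))  # 60.0 -- looks impressive
print(percentile(model_score, passers))     # 10.0 -- near the bottom of the licensed pool

Same model, same score; only the denominator changed.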

B. AI Hallucinations Are Not Ancient History

Shumer claims that the “this makes stuff up” phase of AI is “ancient history” and that current models are unrecognizable from six months ago. My daily use and objective tests tell a different story. Yes, it is getting better, but we are not there yet, especially for most legal users.

Hallucination remains the number one concern for the Bench and Bar. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations (December 2025). Generative AI still has a persistent tendency to fabricate facts and law, leading to serious court sanctions. See e.g. Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). Also see French legal scholar Damien Charlotin’s catalogue of almost one thousand similar decisions worldwide in his AI Hallucination Cases.

Shumer’s claims that modern AIs no longer hallucinate and outperform most attorneys reflect optimism more than sustained exposure to legal work. After researching tens of thousands of legal issues over the course of my career, I can tell you that verification is not optional — it is the job.
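Because verification is the job, it helps to build it into the workflow. The sketch below is one illustration of the idea, not my actual process or any particular product: it pulls citation-like strings out of an AI draft with a rough, standard-library regular expression so a human can check each one against a trusted source before anything is filed.

import re

# Rough illustration for U.S. reporter citations; not a production citation parser.
CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][A-Za-z0-9.]+\s+\d+\b")

def citation_checklist(draft_text):
    """Return the unique citation-like strings found in an AI draft."""
    return sorted(set(CITATION_PATTERN.findall(draft_text)))

draft = "This rule is settled. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024)."
for cite in citation_checklist(draft):
    print(f"[ ] verify by hand: {cite}")  # every flagged item gets a human check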

C. The “Jagged Frontier” of AI Progress

Shumer envisions a wall of fast, inevitable advance. Research and the personal experience of many experts suggest otherwise. The progress is jagged and uneven. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see the research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my humble papers, From Centaurs To Cyborgs and Navigating the AI Frontier.

The New Stanford–Carnegie Study (November 2025) confirmed what Harvard researchers call the “Jagged Technological Frontier”. This research found that AI excels at specific programmable tasks but fails at messy, human-centric reality. In fact, the Stanford-Carnegie study showed that fully autonomous AI agents were significantly less reliable than hybrid human-AI teams, which outperformed solo agents by 68.7%.

D. “Team of Human Associates” or “Untested Sycophantic AI Experts”?

Shumer recounts a managing partner in a law firm who feels AI is like “having a team of associates available instantly.” I agree that every professional should be integrating AI into their daily workflow. But they must do so skeptically. Plus, it is nowhere near the same as having trained human associates. AIs are cheaper, sure, until they screw up and you are the one left to clean up the mess.

In my 45 years of legal practice I have had the privilege of working with many excellent associates. They significantly exceed today’s AIs in many respects, so I must respectfully disagree with the partner Shumer quotes. There are many things that AI will never be able to do that all good professionals now do without thinking. The Human Edge: How AI Can Assist But Never Replace. I prefer humans with AIs – the hybrid approach – over AIs alone, even though, unlike humans, AI associates are always pleasant and tend to agree with everything you say. Lessons for Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day?’ (Jan. 2026).

My testing of AI since 2023 has focused on the legal reasoning ability of AI, as opposed to general reasoning. For a full explanation of the difference, see Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. I have also spent hundreds of hours in hands-on independent testing of AI legal reasoning abilities. See e.g., Bar Battle of the Bots, parts one, two, three and four. These articles reported multiple tests of OpenAI and Google models in 2025, including tests using actual Bar exam questions, which they again failed. I have not seen substantial improvements in AI since then.

It is, in my experience, a poor trade to use an AI alone instead of an associate-AI team, and without extensive supervision, an invitation to sanctions and malpractice.

III. Benchmark Curves Are Not Civilization

Shumer relies heavily on task-duration benchmarks and exponential trend lines. The implication is clear: if models can complete longer and longer tasks autonomously, then large-scale displacement is imminent.

The problem is that benchmark extrapolation is not societal destiny. In law, evidence does not decide the case. People do.

Most current autonomy benchmarks are domain-constrained. They focus heavily on software engineering and other structured digital tasks. Coding is not law. It is not medicine. It is not fiduciary duty. It is not governance.

Even when capability expands inside a benchmark, that does not mean institutions will move at the same speed. Courts, regulators, insurers, boards of directors, and compliance departments slow, shape, and channel technology. That is not inertia; it is risk management.

And assistance in model development is not the same as autonomous recursive self-governance. Humans remain deeply embedded in training, validation, and deployment. “AI helping build AI” makes for a compelling headline. It does not mean an intelligence explosion has detached from human control. AI extends cognition, but it does not replace stewardship.

That is the part Shumer’s curve does not capture: acceleration of capability is real, but it increases the need for adult supervision. It does not eliminate the human role. It intensifies it. Just as it always has.

IV. Why Fear Travels Faster Than Wisdom

The viral success of Shumer’s essay is not accidental. It was designed to activate powerful psychological mechanisms.

It invokes the COVID analogy, reminding readers how quickly life changed in early 2020. It frames the reader personally: “you’re next.” It emphasizes exponential growth, which humans are notoriously poor at intuitively processing. It adopts insider authority: “I live in this world; I see what you don’t.”

Fear spreads faster than nuance because we evolved that way. A possible threat demands immediate attention. Social media algorithms amplify high-emotion content. Urgency increases engagement velocity. None of this necessarily makes Shumer insincere, but it does explain why his article went viral. Acceleration narratives travel at super-fast computer speeds. Wisdom still travels at human speed.

V. Incentives Shape Narratives

It is also important to understand context. Shumer is a very young builder who lives in the code. His perspective is shaped by the possibility of the technology. My perspective, and the perspective of governance, is shaped by the consequences of the technology. Startup culture rewards speed; legal culture rewards survivability. These are different risk environments.

Recognizing that difference is not an attack. It is transparency. His incentives don’t invalidate his argument, but they do shape his narrative.

VI. A Structural Irony

Here is another irony worth reflecting on. We are now in an era where AI systems assist in drafting almost all persuasive content. Many viral essays, legal briefs, and opinion pieces share a similar highly optimized narrative arc—a cadence and structure that Large Language Models excel at producing.

If an AI is optimizing for “popularity” – to become the next great flash meme – it will naturally drift toward alarmism, because alarmism travels faster than nuance. It is entirely plausible that AI systems are increasingly shaping the very rhetoric used to warn us about AI. That is not necessarily a deception, but it is a reminder: persuasion optimization is not the same as civilizational wisdom.

VII. The Category Mistake: Doing the Task Is Not Being the Lawyer

Here is the deeper mistake in many inevitability arguments. They confuse task performance with personhood.

Yes, AI completes tasks. Sometimes very well. It predicts the next word, the next clause, the next block of code. At scale and at speed. But practicing law is not just completing text.

Human reasoning is not happening in a vacuum. It happens inside a body that can lose a license. Inside a reputation built over decades. Inside an ethical framework enforced by courts and bar regulators. Inside institutions that impose consequences.

AI does not stand in a courtroom or sign pleadings. AI does not carry malpractice insurance.

Law makes this distinction painfully clear. AI can draft a brief in seconds. I use it for that to start a review and verify process. But drafting is not signing. When a lawyer signs a motion, that signature attaches a human name, a bar number, a reputation, and a career to every word on the page.

If the brief is reckless, the AI does not get sanctioned. If the citation is fabricated, the AI does not face discipline. If the argument crosses an ethical line, the AI does not stand before a grievance committee. A probabilistic system cannot be disbarred.

Automation can transform tasks. It cannot assume moral agency. That distinction matters. And it will continue to matter, no matter how fast the models improve.

A close-up of a person's finger hovering over a laptop keyboard while signing a document electronically. The screen displays a signature field with the name 'John Smith' and a 'Confirm Signature' button.
Drafting is not signing. Accountability remains human.

VIII. Quantum Convergence Raises the Stakes

The need for adult supervision of accelerating technology becomes even more critical as we look at what is coming next. We are entering a new period where AI intersects with quantum computing. If AI is a race car, Quantum is the nitrous oxide. You do not put a novice driver behind that wheel.

Quantum-scale compute raises national security questions, cryptographic vulnerabilities, and governance complexity. More powerful systems require more sophisticated oversight frameworks. Power without governance is destabilizing; power with governance is transformative. The question is not whether capability grows—it is whether wisdom keeps pace.

The greatest short-term danger is not AI superintelligence overthrowing society, whether enhanced by quantum or not. It is over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify.

IX. What Responsible Adoption Looks Like

Use AI seriously. Experiment daily. Adopt paid tools where appropriate. Automate repetitive tasks. I agree with Shumer on this.

But at the same time: Maintain human review. Preserve accountability. Document workflows. Understand limits. Teach younger professionals hybrid reasoning with AI, not dependency. A minimal sketch of what workflow documentation could look like follows below.
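Here is one hypothetical illustration of the “document workflows” idea: an append-only audit trail recording what the model produced and which human reviewed it. The field names and file layout are my own invention for this sketch, not a standard or any firm’s actual system.

import datetime
import hashlib
import json

# Hypothetical audit-trail record: every AI output gets a logged human sign-off.
def log_review(log_path, prompt, output, reviewer, approved):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewer": reviewer,
        "approved": approved,
    }
    with open(log_path, "a") as f:         # append-only: nothing is overwritten
        f.write(json.dumps(entry) + "\n")  # one JSON record per line

log_review("ai_audit.jsonl", "Draft a motion to dismiss ...", "(draft text)", "A. Associate", True)

The point is not the code; it is that accountability leaves a record a court, client, or regulator can inspect.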

The future belongs to those who combine human judgment with machine capability. Not to those who surrender to inevitability narratives.

We have made this error before. We mistake acceleration for autonomy. We mistake tools for replacements. And each time, we rediscover that human responsibility does not disappear when machines improve. It intensifies.

X. Something Big Is Happening

Shumer is right that “something big is happening.” AI capability is advancing. Workflows are changing. New economic pressures are emerging. But history teaches us that technological acceleration does not eliminate the need for human beings. It heightens it.

This is where law and governance have to re-enter the conversation. Society should not allow its economic and moral direction to be set by the most amplified voices in tech, especially when those voices operate within incentive structures that reward urgency. We need engineers, not promoters. We need experience, not exuberance. We need wisdom, not just information.

Above all, we need adults in the room. Acceleration does not remove the human role. It demands judgment, accountability, and institutional memory.

A group of four professionals engaged in a discussion around a conference table, with laptops open and documents spread out, in a modern office setting.
Capability accelerates. Responsibility must keep pace.

Something big is happening. What happens next depends on whether we meet it with fear or with calm skepticism.

Ralph Losey Copyright 2026 — All Rights Reserved

