What People Want To Know About AI: Top 10 Curiosity Index (with interactive graphic)

Ralph Losey, March 18, 2026

A digital illustration of a brain with gears, surrounded by various topics related to artificial intelligence, including job security, data privacy, misinformation, and environmental impact.
Top Ten Information Needs about AI per Gemini research. All images by Ralph Losey using Nano Banana 2, except for the graphs by Gemini Pro.

Gemini 3.1 Pro Surprises: Synthesizing the Top 10 AI Questions of 2024–2026

I was recently struck by a capability in the pro version of Gemini that I hadn’t encountered before. Quite by accident, I discovered the model’s ability to do more than just scour the web for data. It can synthesize thousands of disparate data points, from workshop reports to tangential polls, to provide a coherent answer to a complex “meta” question.

My inquiry was specific: What do people actually want to know about AI? I wasn’t interested in usage statistics, but in conceptual gaps. When Gemini (and my own “trust but verify” follow-up) found no single poll on point, the AI pivoted. It inferred a top-ten ranking by analyzing the collective “curiosity” found across the web. The result is what Gemini called, perhaps with a smile, the “Top 10 Curiosity Index,” a list of the concepts that people are most “desperate to understand.”

A diverse group of professionals engaged in a brainstorming session in a modern office. Some are using smartphones while sitting around tables with laptops, and others are writing on whiteboards. The environment features brick walls and large windows, creating a collaborative atmosphere focused on AI topics.
Weekend Law Firm Study to Satisfy Top Ten Information Needs about AI.

From Synthesis to Software: Gemini’s Surprising Coding Capabilities

Beyond the data synthesis, Gemini 3.1 Pro surprised me by generating several hundred lines of custom code—largely unprompted—to facilitate sharing these findings. While I was aware of the Pro version’s coding reputation, I was unprepared for this level of sophistication. The AI didn’t just present the information; it built the visual infrastructure to host it, producing complex HTML and JavaScript in a matter of seconds.

The centerpiece of this technical feat is an interactive graph that allows readers to engage with the data directly. Gemini didn’t stop at the code; it acted as a technical consultant, guiding me through the WordPress installation and handling the inevitable troubleshooting with ease. The result is a level of user interactivity on my blog that I previously thought would require a dedicated developer.
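To give a sense of what a few hundred lines of generated chart code boil down to, here is a minimal hand-written sketch in JavaScript of the data and interaction layer only, with the DOM rendering omitted. The topics and percentages come from the static list later in this article; the function names `barWidths` and `onBarClick`, and the 400-pixel scale, are my own illustrative choices, not Gemini’s actual code.

```javascript
// The ten topics and their share of information demand, per the Curiosity Index.
const curiosityIndex = [
  { topic: "How AI Thinks",        pct: 25 },
  { topic: "Job Security",         pct: 18 },
  { topic: "Data Privacy",         pct: 15 },
  { topic: "Consciousness",        pct: 12 },
  { topic: "Terminology",          pct: 9 },
  { topic: "Misinformation",       pct: 7 },
  { topic: "Copyright & IP",       pct: 5 },
  { topic: "Environmental Impact", pct: 4 },
  { topic: "Prompting Skills",     pct: 3 },
  { topic: "The Singularity",      pct: 2 },
];

// Scale each percentage to a bar width relative to the largest entry,
// as a charting library would do before drawing.
function barWidths(data, maxWidthPx = 400) {
  const top = Math.max(...data.map(d => d.pct));
  return data.map(d => ({ ...d, width: Math.round((d.pct / top) * maxWidthPx) }));
}

// A click on bar i reveals that topic's deep-dive text.
function onBarClick(data, i) {
  return `Deep dive: ${data[i].topic} (${data[i].pct}% of demand)`;
}
```

In the live page, `onBarClick` would be wired to the chart’s click event and would populate the deep-dive panel rather than return a string.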

Click on the ten-bar graph to see an analysis of each question.

[Interactive graphic: AI Information Demand 2024–2026 — The Top 10 Curiosity Index]

The embedded chart ranks AI-related questions based on global search demand, forum discussions, and media inquiries, bypassing how people use AI to focus purely on the concepts they are most desperate to understand. Click any bar to reveal a deep dive into why that specific question is driving so much interest. A companion chart, “Thematic Breakdown of Demand,” aggregates the ten queries into four main domains to show where the center of gravity lies in public consciousness.

Static Form Presentation of the Top 10 Curiosity Index

A ranking of what the public is actively seeking to understand about AI, synthesized by Gemini 3.1 Pro from global search trends, forum discussions, and media inquiries. The words in this section are by Gemini.

1. How AI Thinks – 25%

How does AI actually ‘think’ or make decisions?
The ‘Black Box’ Enigma: Users are unsettled by outputs they cannot trace back to a logical process. They want analogies that bridge the gap between human reasoning and neural networks.

2. Job Security – 18%

Will AI take my job, and how do I prepare?
Economic Anxiety: The public is demanding highly practical, industry-specific explanations. People want to know the exact timeline of automation for their specific roles.

3. Data Privacy – 15%

How is my personal data being used to train AI?
The Privacy Paradox: Users want explained, in plain terms, whether their emails, private photos, or voice recordings are being scraped to train models.

4. Consciousness – 12%

Can AI develop consciousness or true emotions?
The Sentience Question: A large segment of the public is seeking a philosophical and technical breakdown of consciousness, questioning the line between simulated and real empathy.

5. Terminology – 9%

What is the difference between LLMs, Machine Learning, and GenAI?
Clearing the Jargon Fog: People want a foundational glossary that explains these terms without using more jargon.

6. Misinformation – 7%

How can we prevent AI from spreading misinformation?
Algorithmic Truth: The public wants to know how developers are addressing AI ‘hallucinations’ and what tools exist to verify synthetic text.

7. Copyright & IP – 5%

Who owns the copyright to AI-generated content?
IP Crisis: Creators and users are demanding legal clarity. If an AI trains on my art, am I owed royalties?

8. Environmental Impact – 4%

How much energy does AI consume?
Carbon Footprint: A growing niche is asking for the hidden environmental costs of cloud-based AI to be explained.

9. Prompting Skills – 3%

How do I effectively prompt an AI?
The ‘How-To’: Users want to understand the linguistic patterns an AI responds to so they can generate better outcomes.

10. The Singularity – 2%

What happens when AI gets smarter than humans?
Superintelligence: The public wants to know if experts have a plan for containment if models surpass human cognitive abilities.

My Take: Bridging the Gap Between Vision and Inquiry

What struck me most about this data was the surprisingly low ranking of The Singularity and AGI. While my focus often rests on the profound, long-term implications of superintelligence, the index reveals a public currently focused on the immediate and negative. It is a classic case of the “here and now” overshadowing the “what’s next.”

A surreal illustration of a humanoid figure with tree-like branches and a galaxy swirling above, representing a fusion of nature and technology. The figure's chest is open, emitting light and data, while one hand holds a glowing pyramid with a question mark and leaves.
Image of a Singularity interpretation. See Can AI Really Save the Future?

Similarly, seeing Prompting Skills in second-to-last place, with only 3% of information demand, is disappointing. To me, this remains the critical lever for success with AI. This data doesn’t change my mission, but it certainly highlights the conceptual hurdles.

The other entries and their rankings were pretty much what I expected. They correspond to the types of questions I usually get when lecturing on AI.

Thematic Breakdown of the Information Demands

When we aggregate the specific questions, distinct macro-themes emerge. The following is Gemini’s categorization of the top 10 queries into four main domains. This is designed to show where the center of gravity lies in public consciousness.

What Dominates the Conversation?

1. Technical Mechanics:
Demystifying the ‘magic.’ People want the underlying architecture explained.

2. Socio-Economic:
Fear and planning regarding real-world consequences on careers and laws.

3. Ethics & Trust:
Concerns regarding data harvesting and the spread of unchecked bias.

4. Existential:
Philosophical inquiries regarding consciousness and humanity’s place.
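As a rough sketch of the arithmetic behind such a breakdown, the following JavaScript groups the ten queries by theme and sums their shares. The query-to-theme mapping is my own reading of the rankings above, not Gemini’s published assignment, so the resulting totals are illustrative only.

```javascript
// Each query tagged with a macro-theme. The theme assignments are my own
// reading of the article's rankings, not Gemini's published mapping.
const queries = [
  { topic: "How AI Thinks",        pct: 25, theme: "Technical Mechanics" },
  { topic: "Terminology",          pct: 9,  theme: "Technical Mechanics" },
  { topic: "Prompting Skills",     pct: 3,  theme: "Technical Mechanics" },
  { topic: "Job Security",         pct: 18, theme: "Socio-Economic" },
  { topic: "Copyright & IP",       pct: 5,  theme: "Socio-Economic" },
  { topic: "Environmental Impact", pct: 4,  theme: "Socio-Economic" },
  { topic: "Data Privacy",         pct: 15, theme: "Ethics & Trust" },
  { topic: "Misinformation",       pct: 7,  theme: "Ethics & Trust" },
  { topic: "Consciousness",        pct: 12, theme: "Existential" },
  { topic: "The Singularity",      pct: 2,  theme: "Existential" },
];

// Sum the share of total demand held by each theme.
function themeTotals(data) {
  const totals = {};
  for (const q of data) totals[q.theme] = (totals[q.theme] ?? 0) + q.pct;
  return totals;
}
// Under this mapping: Technical Mechanics 37, Socio-Economic 27,
// Ethics & Trust 22, Existential 14 (summing to 100).
```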

A circular diagram divided into four segments representing different categories: Technical Mechanics (green), Socio-Economic (red), Ethics & Trust (blue), and Existential (gold).

Analysis of the Four Information Need Themes

The data confirms that people primarily want to understand the “how” of AI. This isn’t surprising, given that the major AI labs have been intentionally opaque. However, the landscape is shifting rapidly; as I noted in Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand (Feb. 2025), the competition is finally forcing a level of transparency. Yet, even when pressed, top scientists admit they do not fully understand the internal mechanics of these models. This sentiment was echoed in Dario Amodei Warns of the Danger of Black Box AI that No One Understands (May 2025).

A digital illustration representing artificial intelligence, with a central brain surrounded by icons symbolizing various concepts like security, collaboration, education, creativity, law, sustainability, and data analysis.
How Does AI Work? Better learn the basics.

The second “hot zone” is socio-economic. The anxiety here is well-founded. The financial and environmental costs are staggering—the power consumption of modern AI is almost incomprehensible when you consider that the human brain operates on a mere 200 watts. As reported in AI Is Eating Data Center Power Demand—and It’s Only Getting Worse (May 2025, Wired), the strain on our infrastructure is only getting worse.

An infographic illustrating the impact of AI on various sectors, with icons representing finance, law, healthcare, technology, and automation around a central globe.
Many Socio-Economic Concerns Are Well Founded.

Regarding jobs, history shows a pattern: initial displacement followed by a surge in new roles. While many remain skeptical, I lean toward the optimism of voices like Wharton Professor Ethan Mollick, who predicts a new era of human-centric roles, including “Sin-Eaters” tasked with managing AI errors. See my demonstration by analysis of an article predicting new jobs created by AI (July 2025). We must also acknowledge a historical first: this is the first revolutionary technology made freely available to the masses from day one, not just to an elite few. This accessibility should make retraining easier, but whether the displaced will successfully pivot remains to be seen.

An imaginative scene depicting creativity and technology, featuring people engaged in various activities such as art, music, education, and agriculture. Key elements include a woman holding a key, a child being guided, individuals painting and working on laptops, and a drone hovering over a colorful field.
New Types of Meaningful Work Emerge.

Finally, “Ethics and Trust” holds a strong third place. In the legal world, “Trust but Verify” has become the mantra. Whether it’s identifying the Seven Cardinal Dangers or learning to Cross-Examine Your AI to cure hallucinations, these are the questions that dominate my lectures. With AI companies largely self-regulating, the burden of verification remains firmly on the user.

A digital illustration depicting various concepts related to artificial intelligence, data security, and analysis. Central to the image is a globe surrounded by icons, including robotic hands, diverse people, data analytics, and a lockbox symbolizing security, with arrows connecting these elements.
There is much more to AI Ethics and Trust than Verification. Lawyers are needed here.

As for the “Existential” category—the lowest ranked—the fear of AI consciousness is certainly fun to talk about, but I find it largely unfounded. See From Ships to Silicon: Personhood and Evidence in the Age of AI. The real existential threat is not a sentient machine, but human users and over-delegation to AI, including critical “kill decisions.” As Jensen Huang (NVIDIA) aptly put it, we must keep a human in the loop to prevent AI from self-evolving “out in the wild” without oversight. Jensen Huang’s Life and Company – NVIDIA (Dec. 2023).

A digital illustration showing a globe with interconnected graphics related to artificial intelligence (AI), including a brain, a heart on a scale, various professionals, and technological elements.
Identifying and preventing real existential risks.

Conclusion

The “Curiosity Index” provides a rare look into the collective mind of a society in transition. It shows us that while the experts are looking at the horizon, the public is still trying to find its footing on the ground. My goal remains unchanged: to lead you through the “jargon fog” and past the conceptual hurdles of the present so that you are prepared for the “what’s next.” Whether you are a lawyer verifying an AI-generated brief or a professional worried about your role, remember that the most powerful tool in this new era isn’t the AI itself, it’s your ability to ask the right questions and maintain your place as the “human in the loop.”

A hand placing a transparent pyramid with a question mark on a maze-like structure, set against a scenic background of rolling hills and a golden sky.
Keeping humans in the loop. That means you!

Ralph Losey Copyright 2026 — All Rights Reserved


Something Big Is Happening — But Not What You Think

Ralph Losey, February 23, 2026

A Response to Matt Shumer’s Viral Essay on AI Acceleration

A high-speed train in motion on railway tracks during sunset, creating a dynamic sense of speed with blurred background.
Acceleration without control is dangerous. Acceleration with judgment is transformative.

I. Something big is happening. On that much Matt Shumer and I agree.

The essay Something Big Is Happening was published on Matt Shumer’s personal blog on February 9, 2026. After he shared it widely on X, it drew more than 80 million views within days, rapidly becoming a focal point in public debates about AI and the future of work. Few essays about artificial intelligence have traveled that far, that fast.

Shumer’s central claim is straightforward: AI capability is accelerating so rapidly that large-scale displacement of white-collar work is imminent, perhaps within one to five years. He argues that recursive improvement loops are already underway, that benchmark curves are steepening, and that most people are underestimating what is about to happen.

It is a powerful narrative. It is also incomplete, and that matters more than its popularity suggests. So take a breath.

Before I explain why, a brief word of context. I have practiced law for over 45 years and have worked hands-on with AI in litigation for more than 14. I was involved in the first case approving predictive coding for e-discovery in federal court. Since 2023, I have written extensively about generative AI, hybrid human-machine workflows, and the emerging governance challenges of AI and quantum convergence. I am not skeptical of AI — I use it daily, teach it, and advocate its responsible adoption.

Acceleration is real. But acceleration demands adults – a calm, measured approach. That is why I take Shumer seriously, even as I disagree with his conclusions.

II. What Shumer Gets Right (and What He Exaggerates)

Let us begin where we agree. AI models have improved rapidly. Coding autonomy has advanced in ways that would have seemed implausible just a few years ago. AI systems now assist meaningfully in debugging, evaluation, and even aspects of their own development pipelines. Benchmarks measuring the duration of tasks that models can complete without human intervention have indeed increased.

There is rapid acceleration, but it is not a smooth, universal climb. It is jagged.

A. The Bar Exam Myth: Top 10% or Bottom 15%?

Shumer states: “By 2023, [AI] could pass the bar exam.” This has become a foundational myth in the AI-acceleration narrative. However, a rigorous study by Eric Martinez exposed the flaws in the vendor’s claim. Re-evaluating GPT-4’s bar exam performance, Artif Intell Law (2024) (presenting four sets of findings that indicate that OpenAI’s estimates of GPT-4’s Uniform Bar Exam percentile are overinflated). Martinez found that when you limit the sample to those who actually passed the bar (qualified attorneys), the model’s percentile drops off a cliff. On the essay and performance test portions (MEE + MPT), GPT-4 scored in the ~15th percentile. In other words, bottom 15% among those who passed.

B. AI Hallucinations Are Not Ancient History

Shumer claims that the “this makes stuff up” phase of AI is “ancient history” and that current models are unrecognizable from six months ago. My daily use and objective tests tell a different story. Yes, it is getting better, but we are not there yet, especially for most legal users.

Hallucination remains the number one concern for the Bench and Bar. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations (December 2025). Generative AI still has a persistent tendency to fabricate facts and law, leading to serious court sanctions. See, e.g., Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). Also see French legal scholar Damien Charlotin’s catalogue of almost one thousand similar decisions worldwide in his AI Hallucination Cases.

Shumer’s claims that modern AIs no longer hallucinate and outperform most attorneys reflect optimism more than sustained exposure to legal work. After researching tens of thousands of legal issues over the course of my career, I can tell you that verification is not optional — it is the job.

C. The “Jagged Frontier” of AI Progress

Shumer envisions a wall of fast, inevitable advance. Research, and the personal experience of many experts, suggest otherwise. The progress is jagged and uneven. See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see the research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

The New Stanford–Carnegie Study (November 2025) confirmed what Harvard researchers call the “Jagged Technological Frontier”. This research found that AI excels at specific programmable tasks but fails at messy, human-centric reality. In fact, the Stanford-Carnegie study showed that fully autonomous AI agents were significantly less reliable than hybrid human-AI teams, which outperformed solo agents by 68.7%.

D. “Team of Human Associates” or “Untested Sycophantic AI Experts”?

Shumer recounts a managing partner in a law firm who feels AI is like “having a team of associates available instantly.” I agree that every professional should be integrating AI into their daily workflow. But they must do so skeptically. Plus, it is nowhere near the same as having trained human associates. AIs are cheaper, sure, until they screw up and you are the one left cleaning it up.

In my 45 years of legal practice I have had the privilege of working with many excellent associates. They significantly exceed today’s AIs in many respects, so I must respectfully disagree with Shumer’s quote of an anonymous partner. There are many things that AI will never be able to do that all good professionals now do without thinking. The Human Edge: How AI Can Assist But Never Replace. I prefer humans with AIs – the hybrid approach – over AIs alone, even though, unlike humans, AI associates are always pleasant and tend to agree with everything you say. Lessons for Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day?’ (Jan. 2026).

My testing of AI since 2023 has focused on the legal reasoning ability of AI, as opposed to general reasoning. For a full explanation of the difference, see Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. I have also spent hundreds of hours in hands-on independent testing of AI legal reasoning abilities. See, e.g., Bar Battle of the Bots, parts one, two, three and four. These articles reported multiple tests of OpenAI and Google models in 2025, including tests using actual Bar exam questions, which they again failed. I have not seen substantial improvements in AI since then.

It is, in my experience, a poor trade to use an AI alone instead of an associate-AI team, and without extensive supervision, it is an invitation to sanctions and malpractice.

III. Benchmark Curves Are Not Civilization

Shumer relies heavily on task-duration benchmarks and exponential trend lines. The implication is clear: if models can complete longer and longer tasks autonomously, then large-scale displacement is imminent.

The problem is that benchmark extrapolation is not societal destiny. In law, evidence does not decide the case. People do.

Most current autonomy benchmarks are domain-constrained. They focus heavily on software engineering and other structured digital tasks. Coding is not law. It is not medicine. It is not fiduciary duty. It is not governance.

Even when capability expands inside a benchmark, that does not mean institutions will move at the same speed. Courts, regulators, insurers, boards of directors, and compliance departments slow, shape, and channel technology. That is not inertia; it is risk management.

And assistance in model development is not the same as autonomous recursive self-governance. Humans remain deeply embedded in training, validation, and deployment. “AI helping build AI” makes for a compelling headline. It does not mean an intelligence explosion has detached from human control. AI extends cognition, but it does not replace stewardship.

That is the part Shumer’s curve does not capture: the acceleration of capability is real, but it increases the need for adult supervision. It does not eliminate the human role. It intensifies it. Just as it always has.

IV. Why Fear Travels Faster Than Wisdom

The viral success of Shumer’s essay is not accidental. It was designed to activate powerful psychological mechanisms.

It invokes the COVID analogy, reminding readers how quickly life changed in early 2020. It frames the reader personally: “you’re next.” It emphasizes exponential growth, which humans are notoriously poor at intuitively processing. It adopts insider authority: “I live in this world; I see what you don’t.”

Fear spreads faster than nuance because we evolved that way. A possible threat demands immediate attention. Social media algorithms amplify high-emotion content. Urgency increases engagement velocity. None of this necessarily makes Shumer insincere, but it does explain why his article went viral. Acceleration narratives travel at computer speed. Wisdom still travels at human speed.

V. Incentives Shape Narratives

It is also important to understand context. Shumer is a very young builder who lives in the code. His perspective is shaped by the possibility of the technology. My perspective, and the perspective of governance, is shaped by the consequences of the technology. Startup culture rewards speed; legal culture rewards survivability. These are different risk environments.

Recognizing that difference is not an attack. It is transparency. His incentives don’t invalidate his argument, but they do shape his narrative.

VI. A Structural Irony

Here is another irony worth reflecting on. We are now in an era where AI systems assist in drafting almost all persuasive content. Many viral essays, legal briefs, and opinion pieces share a similar highly optimized narrative arc—a cadence and structure that Large Language Models excel at producing.

If an AI is optimizing for “popularity” – to become the next great flash meme – it will naturally drift toward alarmism, because alarmism travels faster than nuance. It is entirely plausible that AI systems are increasingly shaping the very rhetoric used to warn us about AI. That is not necessarily a deception, but it is a reminder: persuasion optimization is not the same as civilizational wisdom.

VII. The Category Mistake: Doing the Task Is Not Being the Lawyer

Here is the deeper mistake in many inevitability arguments. They confuse task performance with personhood.

Yes, AI completes tasks. Sometimes very well. It predicts the next word, the next clause, the next block of code. At scale and at speed. But practicing law is not just completing text.

Human reasoning is not happening in a vacuum. It happens inside a body that can lose a license. Inside a reputation built over decades. Inside an ethical framework enforced by courts and bar regulators. Inside institutions that impose consequences.

AI does not stand in a courtroom or sign pleadings. AI does not carry malpractice insurance.

Law makes this distinction painfully clear. AI can draft a brief in seconds. I use it for that, as the start of a review-and-verify process. But drafting is not signing. When a lawyer signs a motion, that signature attaches a human name, a bar number, a reputation, and a career to every word on the page.

If the brief is reckless, the AI does not get sanctioned. If the citation is fabricated, the AI does not face discipline. If the argument crosses an ethical line, the AI does not stand before a grievance committee. A probabilistic system cannot be disbarred.

Automation can transform tasks. It cannot assume moral agency. That distinction matters. And it will continue to matter, no matter how fast the models improve.

A close-up of a person's finger hovering over a laptop keyboard while signing a document electronically. The screen displays a signature field with the name 'John Smith' and a 'Confirm Signature' button.
Drafting is not signing. Accountability remains human.

VIII. Quantum Convergence Raises the Stakes

The need for adult supervision of accelerating technology becomes even more critical as we look at what is coming next. We are entering a new period where AI intersects with quantum computing. If AI is a race car, Quantum is the nitrous oxide. You do not put a novice driver behind that wheel.

Quantum-scale compute raises national security questions, cryptographic vulnerabilities, and governance complexity. More powerful systems require more sophisticated oversight frameworks. Power without governance is destabilizing; power with governance is transformative. The question is not whether capability grows—it is whether wisdom keeps pace.

The greatest short-term danger is not AI superintelligence overthrowing society, whether enhanced by quantum or not. It is over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify.

IX. What Responsible Adoption Looks Like

Use AI seriously. Experiment daily. Adopt paid tools where appropriate. Automate repetitive tasks. I agree with Shumer on this.

But at the same time: Maintain human review. Preserve accountability. Document workflows. Understand limits. Teach younger professionals hybrid reasoning working with AI, not dependency.

The future belongs to those who combine human judgment with machine capability. Not to those who surrender to inevitability narratives.

We have made this error before. We mistake acceleration for autonomy. We mistake tools for replacements. And each time, we rediscover that human responsibility does not disappear when machines improve. It intensifies.

X. Something Big Is Happening

Shumer is right that “something big is happening.” AI capability is advancing. Workflows are changing. New economic pressures are emerging. But history teaches us that technological acceleration does not eliminate the need for human beings. It heightens it.

This is where law and governance have to re-enter the conversation. Society should not allow its economic and moral direction to be set by the most amplified voices in tech, especially when those voices operate within incentive structures that reward urgency. We need engineers, not promoters. We need experience, not exuberance. We need wisdom, not just information.

Above all, we need adults in the room. Acceleration does not remove the human role. It demands judgment, accountability, and institutional memory.

A group of four professionals engaged in a discussion around a conference table, with laptops open and documents spread out, in a modern office setting.
Capability accelerates. Responsibility must keep pace.

Something big is happening. What happens next depends on whether we meet it with fear or with calm skepticism.

Ralph Losey Copyright 2026 — All Rights Reserved


2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law

Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI and remain in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their legal education and skeptical habits of mind. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can enhance by total entanglement with AI and not lose control or separate identity. Click here or the image to see video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts the human legal minds in control and stops the hallucinations in what I have called the “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? AI abilities are improving quickly, but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.
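The “faithful resonance” standard can be made concrete in a few lines of code. The sketch below is purely illustrative—the similarity metric, the 0.99 threshold, and the sample outcome distributions are my own assumptions, not anything drawn from Google’s Quantum Echoes work. It accepts a replication when its statistical similarity to the original result falls within an agreed bandwidth, rather than demanding a bit-identical binary match.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two measurement-outcome distributions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def rings_true(original: list[float], replication: list[float],
               fidelity_threshold: float = 0.99) -> bool:
    """Accept the replication if its similarity to the original result
    falls within the agreed statistical bandwidth (the 'echo rings true')."""
    return cosine_similarity(original, replication) >= fidelity_threshold

# Two noisy runs of the "same" quantum computation: not bit-identical,
# but statistically resonant within the accepted margin of error.
run_a = [0.52, 0.23, 0.15, 0.10]
run_b = [0.50, 0.25, 0.14, 0.11]
print(rings_true(run_a, run_b))  # → True
```

In a hypothetical replication hearing, the dispute would center not on the code but on the choice of metric and threshold—the “statistically accepted bandwidth”—which is precisely where expert testimony would come in.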

II. China Awakens and Kick-Starts Transparency

While those quantum dangers gestated, AI suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only briefly panicked the market; it also forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died deeply worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage, although that is thought to be farther off in time.

IV. Bench Testing the AI models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. I wrote seven articles in 2025 on benchmark-style testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training data. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o. March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The test concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was a theme in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two, and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI into our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of using advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
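The underlying check is simpler than the name suggests: once the causal links are mapped as a directed graph, exoneration reduces to asking whether any path leads from a party’s node to the harm. The sketch below uses plain graph reachability (breadth-first search) rather than formal topology, and every node and edge name is hypothetical—my own stand-ins for the kind of causal network the article describes.

```python
from collections import deque

def has_causal_path(edges: dict[str, list[str]], source: str, target: str) -> bool:
    """Breadth-first search: is there any directed causal chain from source to target?"""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical causal network: the corrupted control signal merges into
# a shared lane *before* it ever reaches the manufacturer's product, so
# the manufacturer's node never feeds into the crash.
causal_edges = {
    "hacker": ["control_signal"],
    "control_signal": ["merged_lane"],
    "merged_lane": ["crash"],
    "manufacturer_product": ["normal_operation"],  # dead-ends away from the crash
}
print(has_causal_path(causal_edges, "hacker", "crash"))                # → True
print(has_causal_path(causal_edges, "manufacturer_product", "crash"))  # → False
```

An absent path is a mathematically demonstrable fact about the mapped system—which is what makes this style of exoneration argument so attractive for complex, multi-party disasters.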

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
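To make the six requirements concrete, here is a minimal illustrative sketch in Python. The field names and record structure are my own assumptions, not part of any published standard; the point is only that a H-Y-B-R-I-D review can be run as a mechanical checklist that flags which requirements an AI work-product record leaves unsatisfied.

```python
# The six H-Y-B-R-I-D requirements, each paired with a hypothetical
# record field that would evidence compliance.
HYBRID_CHECKS = {
    "H": ("Human in charge", "human_supervisor"),
    "Y": ("Yield programmable steps", "workflow_steps"),
    "B": ("Boundaries on usage", "usage_boundaries"),
    "R": ("Review with provenance", "provenance_review"),
    "I": ("Instrument/log everything", "activity_log"),
    "D": ("Disclose usage", "disclosure"),
}

def hybrid_gaps(record: dict) -> list[str]:
    """Return the names of any H-Y-B-R-I-D requirements the record fails."""
    return [name for name, key in HYBRID_CHECKS.values() if not record.get(key)]

# A sample work-product record with one gap: usage not yet disclosed.
draft = {
    "human_supervisor": "supervising attorney",
    "workflow_steps": ["research", "draft", "verify"],
    "usage_boundaries": "no client PII in prompts",
    "provenance_review": True,
    "activity_log": ["model run logged"],
    "disclosure": "",
}
print(hybrid_gaps(draft))  # → ['Disclose usage']
```

In practice the checklist would live inside firm workflow software, but even this toy version captures the protocol’s core idea: every requirement is auditable, and a gap is a named, actionable finding rather than a vague worry.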

My “Human Edge” article, written in January 2025, buttressed the need for keeping a human in control, and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This self-evident claim of human edge is not based on sentimental platitudes; it is a measurable performance metric.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


Panel of Experts for Everyone About Anything – Part Two: Demonstration by analysis of an article predicting new jobs created by AI

July 16, 2025

by Ralph Losey, July 16, 2025.

This is a continuation of the article, Panel of Experts for Everyone About Anything – Part One. Here in Part Two we give a demonstration of the software described in Part One. In the process we learn about new job types emerging from AI, including one envisioned by my favorite Wharton Professor, Ethan Mollick. He predicts a new job for humans required by the inevitable AI errors, which he gives the colorful name of Sin-Eaters. I predict the best of these Sin-Eaters will be lawyers!

Demo of Software

Below is an example of a consult with the Panel of Experts for Everyone About Anything. It was created using the June 26, 2025, version of the Panel. It demonstrates the Panel’s ability to analyze and discuss a document the user uploads. As Part One explained, the Panel software can do much more than that, but the task here is especially interesting because the article topic is so hot and hopeful: new jobs coming our way!

When reading the output of the Panel AIs, note that the Contrarian tends to support the generic advice to speak to a human expert before reliance. The Contrarian’s input is always very helpful, but still, the input of this grumpy AI can be wrong too.

NYT Article on Future of Work

The document I selected for Panel discussion is a NYT Magazine article: A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. In a few key areas, humans will be more essential than ever. It was written by former Wired editor Robert Capps and published on June 17, 2025. I have a NYT subscription and so could read it and include a few fair use quotes, but I cannot share it with you due to copyright. I can, however, provide my synopsis and short analysis and then ask for a discussion of the article by our Panel of Experts for Everyone About Anything.

Capps’ article, written in typical NYT smart-alecky style, explores the core question of our times: what distinct human roles will persist or emerge as AI reshapes work? Overall I liked the article; it was well researched and thoughtful (reader comments were interesting too). Near the beginning Capps opines:

It’s already clear that A.I. is more than capable of handling many human tasks. But in the real world, our jobs are about much more than the sum of our tasks: They’re about contributing our labor to a group of other humans — our bosses and colleagues — who can understand us, interact with us and hold us accountable in ways that don’t easily transfer to algorithms.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. I agree with this sort-of-deep point.

Lonely robot image by Losey in photo style using a robot.

Capps’ research into the new types of work likely to emerge includes interviews with, or quotes from, these great sources (in order of appearance in the article):

I like how Robert Capps divides the new employment opportunities into three categories: trust, integration and taste. Here is his introduction:

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

1st Demo: Panel Discussion

For this demonstration I chose to use the OpenAI model 4.5 (research preview) that OpenAI says is good for writing and exploring ideas. It requires a Team subscription, which I have. I used this supposedly advanced model instead of the current default and free version, ChatGPT 4o. In Part Three I will ask the same questions using the 4o version and you can see the key differences and somewhat different insights provided.

[Note: images and links below were all added by Losey, the words are a full quote of the session.]

USER: Provide a panel discussion of the file uploaded herewith, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4.5 PANEL: Here’s the proposed expert panel to analyze and discuss the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”

Panel Composition:

  • 🎓 Ethical AI Lawyer (Aligning innovation with ethics and legality)
  • 🧠 AI Engineer (Analyzing technical feasibility and challenges)
  • ✨ Creative Storyteller (Exploring cultural narratives and creative implications)
  • 🛡 Risk Officer (Highlighting regulatory, governance, and compliance concerns)
  • ⚔️ Contrarian (Critically examining underlying assumptions)

🎓 Ethical AI Lawyer – Legal & Ethical Implications

The article compellingly highlights new professional roles driven by AI adoption. Key roles like “AI auditors” and “AI ethicists” stand out, as accountability remains a fundamental human requirement in AI-integrated systems. From a legal standpoint, roles like “legal guarantor” and “trust authenticator” are particularly crucial, given precedents like Mata v. Avianca Inc., which emphasizes accountability for AI-generated misinformation. The emergence of these roles aligns closely with evolving frameworks like the EU AI Act, which emphasizes transparency and human oversight in automated decision-making processes.

🧠 AI Engineer – Technical Viability and Challenges

The author accurately pinpoints that “AI integrators” and “AI trainers” will become indispensable, reflecting current market realities where roles like “AI Engineer” and “AI Consultant” are rapidly expanding. Tools like Relativity Aero AI and OpenAI Code Interpreter already illustrate the growing technical ecosystem supporting such jobs. Yet, the potential complexity in managing AI consistency and integration across systems poses significant technical hurdles—highlighted by the need for roles such as “AI plumbers” and “consistency coordinators.”

✨ Creative Storyteller – Cultural and Creative Dynamics

This article rightly emphasizes how taste and creative decision-making roles, such as “story designer” and “A.I. personality director,” will flourish. Rick Rubin’s anecdote powerfully underscores the shift from traditional technical skill toward nuanced human judgment and style. This shift could democratize creativity and innovation, allowing broader participation in traditionally expert-driven fields, reshaping cultural narratives significantly.

Rick Rubin, famous hip-hop producer, real quote, but fake image of Rubin (looks close to me) by AI under my guidance. See Rubin’s 2023 book: The Creative Act: A Way of Being.

🛡 Risk Officer – Regulatory and Compliance Risks

The proposal of new roles such as “trust authenticator,” “AI auditors,” and “AI ethics boards” underscores key compliance and governance imperatives. These positions directly respond to requirements embedded in frameworks like GDPR’s Article 22, which mandates transparency and human oversight in automated decision-making. Ensuring compliance with such frameworks will require vigilant and proactive roles dedicated to overseeing AI operations, particularly in high-stakes areas like healthcare and financial services.

⚔️ The Contrarian – Critical Examination and Ethical Risks

While the optimism about new AI-driven jobs is understandable, there is a problematic assumption that roles involving human accountability and taste can fully mitigate deeper risks of AI adoption. The article’s approach neglects scenarios where human oversight becomes superficial, serving more as a liability shield than genuine oversight. Additionally, it sidesteps ethical dilemmas like potential bias and accountability diffusion, a significant concern given historical examples of algorithmic harm and privacy violations. Per the Ethical Risk Checklist, more robust discussions around bias, privacy, and hallucination risks are necessary.

Would you like to follow up, clarify, or pivot?


The insights and perspectives shared by this panel of virtual experts are designed to foster critical thinking, stimulate discussion, and provide educational value. They are not a substitute for professional advice. Before making decisions or taking action, consult a qualified human expert in the relevant field.*

[Note: 4.5 puts this warning at the end of every response and to save space will not be repeated in this transcript.]

USER: Review each of the new jobs that the article mentions may be created by AI.

4.5 PANEL: Here is a summary of each of the 22 new jobs the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You” anticipates will be created by AI:

Trust-Related Jobs

1. A.I. Auditor – Verifies AI systems’ processes, decisions, and data for technical accuracy and accountability.

2. A.I. Translator – Explains complex AI concepts and decisions to non-technical business leaders.

3. Fact Checker / Compliance Officer – Reviews AI-generated documents for factual accuracy and regulatory compliance.

4. Trust Authenticator / Trust Director – Ensures that AI systems maintain integrity and reliability in their outputs.

5. A.I. Ethicist – Develops ethical frameworks and defensible logic to guide AI decision-making.

6. Legal Guarantor (“Sin Eater”) – Legally certifies AI-generated outputs, taking final responsibility for decisions.

7. Consistency Coordinator – Ensures AI outputs remain consistent across various products or representations.

8. Escalation Officer – Provides human empathy and intervention when AI interactions fail or feel too impersonal.

Integration-Related Jobs

9. A.I. Integrator – Specializes in strategically embedding AI tools within business workflows and operations.

10. A.I. Plumber – Diagnoses and repairs complex, layered AI system malfunctions.

11. A.I. Assessor – Evaluates the quality, effectiveness, and suitability of AI models.

12. Integration Specialist – Implements customized AI solutions within specific organizational contexts.

13. A.I. Trainer – Curates and manages training data to optimize AI performance specific to an organization’s needs.

Image by Ralph Losey using AI. This new work is already in play.

14. A.I. Personality Director – Develops and fine-tunes AI personalities to align with a company’s brand identity.

Image by Ralph Losey using AI tools.

15. Drug-Compliance Optimizer (Healthcare-Specific) – Creates AI-driven systems to ensure patients properly follow medication protocols.

16. A.I./Human Evaluation Specialist – Determines optimal task distribution between humans and AI or hybrid teams.

Evaluation Specialist assigning great jobs to everyone. AI image by Losey.

Taste-Related Jobs

17. Article Designer / Story Designer – Uses AI to shape creative content by selecting, arranging, and refining AI-generated material.

18. World Designer – Creates cohesive fictional universes or branded environments using AI tools.

19. Human Resources Designer – Crafts comprehensive organizational policies, training, and culture through AI-enhanced decision-making.

20. Civil Designer – Applies creative, aesthetic judgment to infrastructure projects, guiding AI-generated engineering outputs.

Human civil designers guide the AI workers. Image by Ralph Losey.

21. Differentiation Designer – Strategically positions businesses by creatively leveraging AI to enhance branding, customer interaction, and unique market propositions.

22. Product Designer (Expanded Role) – Oversees the entire design lifecycle of products, from concept to execution, largely through AI-assisted tools and processes.

Each of these roles represents areas where human oversight, judgment, empathy, and creativity complement AI’s technical capabilities, creating potential new employment landscapes.

New complementary hybrid employment image shown in hybrid Sci-Fi and Impressionism style by Losey.

Would you like to follow up, clarify, or pivot?

AI impression of Professor Mollick by Losey using AI.

USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

4.5 PANEL:

🎓 Ethical AI Lawyer — Legal and Ethical Evaluation

The concept of a “Sin Eater,” or Legal Guarantor, is intriguing and potentially essential. It addresses the critical gap where AI systems lack legal personhood and cannot bear legal liability. The precedent set by Mata v. Avianca Inc. highlights that liability for AI-generated errors ultimately rests with human agents or organizations behind them. Mollick’s metaphor accurately emphasizes accountability but raises concerns about moral hazard. If a human is simply rubber-stamping AI outputs without rigorous oversight, legal and ethical responsibility becomes diluted. The “Sin Eater” must therefore actively scrutinize outputs and possess real decision-making authority.

AI Ethics Law Specialist. Already existing job that will grow fast. Ethical AI image by Losey.

🧠 AI Engineer — Technical Realities and Challenges

The practicality of a “Sin Eater” depends heavily on their ability to understand the underlying complexity of AI systems. Such individuals will need substantial technical knowledge or robust AI translators assisting them, as they must discern when AI decisions are trustworthy versus when deeper technical intervention is required. While technically feasible, the effectiveness of the role might diminish if used purely symbolically, rather than practically, turning the “Sin Eater” into merely a ceremonial checkpoint rather than a meaningful oversight mechanism.

✨ Creative Storyteller — Cultural and Social Implications

The narrative behind the “Sin Eater” is compelling. Historically, sin eaters took on the moral transgressions of others to absolve communal guilt. Mollick’s analogy poetically encapsulates the cultural unease around AI accountability. It reflects society’s desire for tangible human responsibility even as AI systems increasingly influence critical decisions. Yet, there’s a risk that this role could cultivate complacency, culturally signaling that as long as someone signs off, AI actions are morally and socially acceptable, potentially masking deeper ethical challenges.

Priest hearing confessions. Medieval Manuscript style by Losey.

🛡 Risk Officer — Regulatory and Governance Concerns

The “Sin Eater” or Legal Guarantor role is inherently aligned with regulatory trends requiring human oversight of AI decisions, like those embodied in the GDPR’s Article 22 and the EU AI Act. While beneficial in theory, the role presents significant risks if misunderstood as a procedural formality rather than a substantive control measure. Organizations could misuse this concept to shield themselves from genuine accountability, undermining regulatory intent and possibly increasing liability exposure if scrutiny reveals inadequate oversight.

⚔️ The Contrarian

The idea of a “Sin Eater” may be rhetorically appealing but practically flawed. The role risks becoming symbolic rather than substantial, potentially absolving organizations of real accountability by providing a superficial veneer of human oversight. Moreover, from an ethical standpoint, it invites individuals to bear responsibility for AI-generated decisions without genuine authority or influence over the AI processes, possibly perpetuating systemic issues like algorithmic bias. The role must be clearly defined, empowered, and integrated into a robust governance framework to be meaningful; otherwise, it will merely obscure responsibility rather than clarify it.

Sin Eater is medieval manuscript style by Losey and AI.

Would you like to follow up, clarify, or pivot?


2nd Demo: Question the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4.5 PANEL: 🎓 Ethical AI Lawyer — “AI Rights Ombudsman”

An AI Rights Ombudsman would mediate and advocate for individuals adversely impacted by AI-driven decisions or systems. Unlike auditors or ethicists who operate within organizations, the Ombudsman would work independently or within regulatory agencies to investigate complaints, arbitrate disputes, and advocate policy improvements. For instance, they might support someone unjustly flagged by facial-recognition surveillance or denied insurance due to opaque AI credit assessments. Their role would be akin to human rights advocates, requiring both technical understanding of AI systems and expertise in constitutional, civil rights, and administrative law to effectively challenge or mediate automated injustices.

AI Rights Ombudsmen will likely all be lawyers by background. Image by Losey in anime style.

🧠 AI Engineer — “Synthetic Reality Architect”

A Synthetic Reality Architect designs comprehensive, realistic digital environments specifically tailored for extended human-AI collaboration and interaction. They will go beyond traditional VR designers by integrating real-time adaptive AI models that dynamically respond to user emotions, intentions, and behaviors. Architects will curate highly personalized digital worlds for professional training, mental-health therapy, immersive education, or interactive storytelling. Their work involves advanced skills in AI-driven generative modeling, behavioral analytics, and immersive technologies, ensuring users remain deeply engaged in believable and responsive virtual environments.

Anime seems the perfect style for these new job concepts. AI images by Losey.

✨ Creative Storyteller — “Emotion Orchestrator”

An Emotion Orchestrator would carefully design the emotional dynamics and narrative structures embedded in human-AI interactions, especially in fields like education, entertainment, and mental health. Their task is to script emotional journeys, carefully calibrating AI responses to evoke desired emotional states—such as curiosity, empathy, excitement, or calmness. For example, they would shape how a therapeutic AI companion subtly navigates sensitive conversations or how an educational AI agent strategically boosts student motivation. The Orchestrator’s expertise would blend psychological insights, narrative storytelling skills, and nuanced understanding of emotional intelligence in AI systems.

AI needs human help with anything having to do with emotions. Anime image by Losey.

🛡 Risk Officer — “Algorithmic Conflict Mediator”

An Algorithmic Conflict Mediator specializes in resolving disputes arising when competing AI systems create contradictory or incompatible outcomes—particularly in high-stakes settings like smart-city infrastructure, financial markets, or logistics management. Their role involves evaluating systemic conflicts, negotiating solutions among stakeholders whose AI systems clash (such as autonomous vehicles interacting unpredictably in traffic or algorithmic financial strategies triggering market volatility), and establishing collaborative frameworks to avoid recurring conflicts. This mediator must have deep understanding of algorithmic decision-making logic, conflict-resolution techniques, and systems integration.

AI conflict mediators in anime style by Losey.

⚔️ The Contrarian — “AI Decommissioner”

An AI Decommissioner is responsible for ethically, safely, and systematically phasing out AI systems that become obsolete, hazardous, or ethically unacceptable. Unlike traditional tech disposals, AI Decommissioners must manage delicate human factors, including workforce transitions, public communications, and historical preservation of significant AI innovations. Their work might involve the careful deactivation of problematic algorithms used in social-media moderation, or controversial facial-recognition tools. They would possess skills spanning technology, ethics, regulatory compliance, and crisis management, ensuring AI retirements occur transparently and responsibly.

Ending the life cycle of AIs will likely be controversial on many levels. Image in anime style by Losey.

Conclusion

In this follow-up article, Ralph Losey demonstrated the new software, Panel of Experts for Everyone About Anything, and, at the same time, introduced twenty-two possible new jobs made necessary by AI, according to Robert Capps’ article, “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.” The new jobs are listed in the article in three categories: trust, integration, and taste. Both the article and the Panel analysis emphasize the importance of human involvement in AI processes.

I also asked the Panelists to each come up with one new job type not in the article. They imagined five more interesting jobs, many of them ideal for lawyers with AI tech skills.

  • AI Rights Ombudsman
  • Synthetic Reality Architect
  • Emotion Orchestrator
  • Algorithmic Conflict Mediator
  • AI Decommissioner

So altogether, this article discussed 27 new types of human employment required by AI technology.

27 new job types identified. Illustrated using anime figures by Losey using AI.

One of the new jobs, the AI “Sin Eater,” was examined in detail. This proposed job, envisioned by Professor Ethan Mollick, would have specially trained humans in organizations assume legal accountability for AI-generated outputs, bridging the gap created by AI’s lack of legal personhood. Some on the Panel questioned the effectiveness of the Sin Eater role. My own opinion is mixed; it all depends. Still, I am certain some kind of human employment like this will emerge, and it will involve legal skills. Insurance companies and their adjusters will likely play a big role in this as well.

Humans and AI working together to practice law. They will soon need each other. Image by Losey.

This series will conclude with Part Three, which provides another demonstration of the software. That demonstration will be driven by the free OpenAI model, ChatGPT-4o, instead of the subscription version 4.5 demonstrated in Part Two. The free model is surprisingly good, and even if you can afford other models, you may want to use ChatGPT-4o, if for nothing else than to provide a second opinion.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article as part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Two: Demonstration by analysis of an article predicting new jobs created by Panel of Experts for Everyone About Anything. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me.

Click here to listen to the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.