Will AI Take My Job? OpenAI’s New Policy, Rising Cybersecurity Risks, and What Comes Next

April 17, 2026

by Ralph Losey

Introduction: The Urgency of the Question

Will AI take my job?

Image by Ralph Losey using AI tools.

That question is no longer speculative. It is now front-page relevant, driven not only by rapid advances in AI, but by two recent events that reveal how quickly things are changing. On April 6, 2026, OpenAI released its Industrial Policy for the Intelligence Age, openly warning that the transition to superintelligence is already underway. Just days earlier, a human error at Anthropic briefly exposed the source code of one of the world’s most advanced AI systems. It was quickly copied and distributed before the mistake was corrected. It is reportedly now in the hands of criminal hackers and enemy states worldwide. Together, these developments make one thing clear: the future of work is arriving faster, and far less predictably, than most expected.

In my recent article, What People Want To Know About AI: Top 10 Curiosity Index, Gemini AIs and I analyzed global search patterns and online discussions to identify the public’s most urgent concerns. The number one question, “How does AI work?”, was addressed in my follow-up article, Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions, where we explained the technology across five levels, from a child’s guessing game to matrix algebra.

But the second question is different.

Will AI take my job—and what should I do about it?

This accounted for roughly 18% of all inquiries. And unlike the first question, it is not driven by curiosity. It is driven by anxiety, something I hear and feel in conversations about AI with all kinds of people.

All images in this article are by Ralph Losey using Gemini AI tools.

This article focuses on that anxiety: economic security and the future of work. It also confronts the issue people increasingly want answered but rarely get: the timeline. When might AI reach a level capable of performing most cognitive work better than us? Because if that point is near, and recent signals suggest it is, then the implications are profound. Most knowledge-based jobs would be affected, and the resulting disruption to the economy and social order could be significant.

The Policy Response: OpenAI’s Industrial Blueprint


The urgency of this economic question is not limited to the public. It is also front and center for the corporations building the technology. On April 6, 2026, OpenAI released Industrial Policy for the Intelligence Age (“Policy Statement”), and it is likely that other leading AI companies will soon follow. This document moves beyond engineering into economic and social policy. It begins with a blunt premise: the transition to superintelligence is already underway and will reshape how organizations operate, how knowledge is created, and how people find meaning and opportunity.

The Policy Statement does not minimize the disruption ahead, or the speed at which it may arrive. It acknowledges that AI will disrupt jobs and reshape entire industries at a scale and pace unlike any prior technological shift. At the same time, OpenAI’s leadership emphasizes that the outcome is not predetermined. Whether this transformation leads to shared prosperity or to concentrated wealth and widespread displacement will depend on decisions made now, by governments, corporations, institutions, and individuals.

I encourage you to read the Policy Statement in full. It addresses far more than job security. My focus here is narrower: the economic implications. On pages 3 and 4, the Policy Statement explains:

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education. 

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.  …

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation. 

OpenAI released a companion video the same day as the Policy Statement, titled Sam Altman on Building the Future of AI (“Video”). At 26:08, the discussion turns directly to jobs. Joshua Achiam, OpenAI’s Chief Futurist, addresses the issue candidly:

On getting workers involved in AI, I actually, I kind of want to back up and just acknowledge an elephant in the room, which is that a lot of workers are concerned about AI; they’re worried about what AI means for them. They are not immediately excited at the prospect of figuring out, all right, how are we going to use AI in our workplace? They’re thinking, oh my gosh, is the AI going to replace me?

The public is no longer satisfied with abstract reassurances. People want timelines. They want industry-specific forecasts. They want to know whether their job will still exist in five years. Both the Policy Statement and the Video point in the same direction: highly capable AI systems are coming quite soon, much faster than most expected. 

Better get it right, Sam.

More Training Now for Job Security Tomorrow?

For many years my usual answer to the jobs question has been more training now. That answer may not cut it today for a majority of people, especially if AI advances too fast, too far. For instance, in Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (Oct. 2024) I opined:

AI will create entirely new jobs. For instance, for lawyers, new jobs pertaining to AI regulations are emerging. AI will also change existing jobs for the better. It is already replacing the most boring parts of our work, leaving us to focus on the more rewarding and human aspects. Moreover, it is true that no worker will be replaced by an AI, they will be replaced by a human that knows how to use AI.

Now I am not so sure, and neither is Sam Altman. The prospect of superintelligence is no longer a distant future. It is a planning horizon.

To address the question of human employment in a world of increasingly powerful AI, an issue well beyond my unaided ability to resolve, I turn to a Panel of AI Experts. For this exercise, I use OpenAI-based models that I have fine-tuned for analysis across multiple disciplines. They are not superintelligent, but they are highly capable and broadly informed. They created a five-persona expert panel to address these questions. The only persona I required was the “devil’s advocate,” because I have found that type of AI is indispensable in brainstorming exercises like this. I did not specify any other character, not even the first character chosen, the “Centaur” Professional, although I must admit he sounds just like me.
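For readers curious how such a panel consult can be set up, the prompt-assembly step can be sketched in a few lines. This is a rough illustration only: the persona wording, the helper function name, and the prompt text are my own assumptions, not the actual fine-tuned configuration used for the panel.

```python
# Illustrative sketch of assembling a multi-persona "panel of experts" prompt.
# Persona descriptions and prompt wording are hypothetical, drawn from the
# five voices discussed in this article.

PERSONAS = [
    "Centaur Professional (the hybrid human-AI advocate)",
    "Sin-Eater (AI risk and accountability officer)",
    "Startup-in-a-Box Entrepreneur (the solo builder)",
    "Human Edge Advocate (guardian of empathy and presence)",
    "Devil's Advocate (the required contrarian)",
]

def build_panel_prompt(question: str, personas=PERSONAS) -> str:
    """Assemble one prompt asking each persona to respond to the question in turn."""
    roster = "\n".join(f"- {p}" for p in personas)
    return (
        "You are a panel of five distinct experts:\n"
        f"{roster}\n\n"
        f"Question for the panel: {question}\n"
        "Have each persona respond in turn. The Devil's Advocate must "
        "challenge at least one claim made by the other panelists."
    )

prompt = build_panel_prompt("Will AI take my job, and what should I do about it?")
```

The resulting string would then be sent to a chat model as a system or user message; the required Devil’s Advocate is baked into the instructions so that every panel run includes a contrarian check.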

The Human in the Loop should remain in charge and verify AI work.

Voice 1: The “Centaur” Professional (The Hybrid Advocate)

Persona: The pragmatic professional who has fully integrated AI, but remains firmly in control. For background see my From Centaurs To Cyborgs: Our evolving relationship with generative AI (April 2024). Except for the citations that follow, all of the language from here to the Conclusion was written by the AIs, not me.

The Perspective: Let’s begin with a reality check. You’re more likely to lose your job to someone using AI than to AI itself. That single sentence cuts through most of the noise.

The fear of immediate, total automation misunderstands how work actually happens. We do not operate on smooth technological curves; we operate on what researchers call a “jagged frontier.” AI excels at certain tasks and fails at others, often unpredictably. This is why hybrid human-AI teams—Centaurs—consistently outperform both humans alone and AI alone. Recent research suggests improvements approaching 70% in certain knowledge-work domains. [See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see the research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my Navigating the AI Frontier (Oct. 2024).]

In law, AI can draft a brief in seconds. But it cannot sign it. It does not carry malpractice insurance. It does not stand before a judge. It cannot be sanctioned—or disbarred.

In medicine, AI may catch patterns a doctor misses. But patients do not sue algorithms—they sue physicians.

Sam Altman himself has described using AI to analyze medical data more effectively than his own doctor. Yet no serious observer concludes from this that doctors are obsolete. The conclusion is simpler:

Doctors who use AI will replace doctors who do not. The same applies across professions.

The future belongs to the Centaur—the professional who augments judgment with machine intelligence, but never abdicates responsibility.

Your job is not disappearing. The drudgery is.

As I explained in The Great AI Transition: From Tool to Teammate (June 2024), “the real shift is from doing the work to supervising it, humans move up the chain of responsibility, not out of the system.” [The AI hallucinated this article and cite, which obviously was supposed to refer to one of my articles. I am embarrassed to say that the title and quote sounded so plausible that I had to look it up to be sure. I then called the AI on this and it admitted to the hallucination and apologized.]

The “Centaur” Professional (The Hybrid Advocate)

Voice 2: The “Sin-Eater” (AI Risk & Accountability Officer)

Persona: The human firewall—absorbing legal and ethical responsibility for AI outputs.

The Perspective: The Centaur is right—but incomplete. Because every gain in capability creates a parallel demand for accountability.

Wharton’s Ethan Mollick coined the term “Sin-Eater” to describe a new role: the human who vouches for AI-generated work and bears the consequences when it fails. That role is not theoretical—it is inevitable.

As AI systems scale from minutes to months-long projects, the need for verification, auditing, and compliance will explode. OpenAI’s own policy proposals emphasize the need for an “AI trust stack”—auditing regimes, validation systems, and human oversight at every layer.

And then there is cybersecurity. Our current software ecosystem is already vulnerable. AI will amplify both offense and defense—but offense often scales faster. Sam Altman has warned openly: AI will become extraordinarily good at identifying software vulnerabilities. That means bad actors will too.

This creates a massive new labor demand. Not for passive users—but for active defenders. We will need an army of human-AI teams to audit, test, and secure critical systems. This is not optional. It is civilizational maintenance.

The “Sin-Eater” (AI Risk & Accountability Officer)

Voice 3: The “Startup-in-a-Box” Entrepreneur

Persona: The solo builder with the leverage of a 100-person company.

The Perspective: Why is the conversation so focused on saving existing jobs? We are on the verge of the largest expansion of individual capability in human history.

Sam Altman has spoken repeatedly about a future where one person can build what once required an entire company. AI agents will handle coding, marketing, accounting, logistics—everything that currently creates friction.

The barriers to entry are collapsing.

Today, a brilliant nurse or mechanic might never start a business—not because of lack of skill, but because of administrative overhead. Tomorrow, that overhead disappears.

This is the rise of the micro-entrepreneurial economy.

Access to powerful AI tools—what some call a “Right to AI”—may become as foundational as access to electricity. With it, millions can create, compete, and innovate independently.

Yes, large bureaucracies may shrink. But they will be replaced by networks of highly capable individuals.

The question is not just “Will I lose my job?” It is also: “What could I build if friction disappeared?”

The “Startup-in-a-Box” Entrepreneur

Voice 4: The “Human Edge” Advocate

Persona: The guardian of empathy, presence, and human connection.

The Perspective: The technologists talk about efficiency. But humans do not live on efficiency alone.

There is a category of value AI cannot replicate: being human with another human.

Josh Achiam of OpenAI described visiting a fully automated café. It worked perfectly. It was fast. It was efficient. It was also—his word—unfulfilling. He missed the human interaction.

This is the “Human Edge.”

As AI absorbs administrative and analytical burdens, the value of human-centered work will rise dramatically. Teaching, nursing, therapy, caregiving—these are not residual categories. They are foundational. Today, these professions are underpaid and overburdened. That is a policy failure—not a market inevitability.

If AI delivers the productivity gains promised, then society must redirect that surplus toward elevating human-centered roles. The OpenAI policy proposals explicitly call for this—training pipelines, wage support, and institutional investment.

The future of work is not less human. It is more human than ever.

The “Human Edge” Advocate

Voice 5: The Contrarian “Devil’s Advocate”

Persona: Skeptic. Realist. Immune to hype.

The Perspective: Let’s stop romanticizing. Disruption is coming—and it will be uneven, painful, and fast. Telling displaced workers to “become entrepreneurs” or “pivot to caregiving” is not a plan. It is a slogan. We need structural reform.

If AI concentrates wealth in capital rather than labor, then the tax system must adapt. That means shifting the burden toward capital gains, corporate profits, and possibly automated labor.

We must also decouple basic security from employment. Healthcare, retirement, and income stability cannot depend entirely on whether your job survives automation.

And yes—let’s talk about infrastructure. We have a decaying physical world and a fragile digital one. Meanwhile, AI companies are generating enormous wealth. That wealth must be reinvested.

A modern New Deal—focused on infrastructure, cybersecurity, and energy—is not just desirable. It is necessary.

This is not anti-capitalist. It is pro-stability.

The Contrarian “Devil’s Advocate”

Conclusion: Responsibility at the Edge of Superintelligence

This panel reveals a truth that resists simplification: the future of work in the age of AI is difficult to predict. At this point it could go either way.

Personally, I am now more inclined to agree with the curmudgeon Contrarian than the mini-me Hybrid Advocate. That is a change for me. It reflects a growing concern that the risks may be advancing faster than the benefits. The real question is whether we, and our institutions, can adapt quickly enough.

The practical advice is straightforward. Begin serious AI training now. At the same time, explore work where the human edge still matters. You may find not only greater security, but greater satisfaction.

Above all, hold the new centers of power, economic and technological, to their obligations. Stand for both human rights and progress. We should be able to do both. In today’s world, we have no choice. It is too dangerous to stand still.

Superintelligence may drive the engine of the future. But I continue to insist that humanity must remain firmly and responsibly at the wheel.

The five-voice expert panel. Image by Ralph Losey using AI tools.

Ralph Losey Copyright 2026. All Rights Reserved.


Panel of Experts for Everyone About Anything – Part Two: Demonstration by analysis of an article predicting new jobs created by AI

July 16, 2025

by Ralph Losey

This is a continuation of the article, Panel of Experts for Everyone About Anything – Part One. Here in Part Two we give a demonstration of the software described in Part One. In the process we learn about new job types emerging from AI, including one envisioned by my favorite Wharton Professor, Ethan Mollick. He predicts a new human job made necessary by inevitable AI errors, to which he gives the colorful name of Sin-Eaters. I predict the best of these Sin-Eaters will be lawyers!

Demo of Software

Below is an example of a consult with the Panel of Experts for Everyone About Anything. It was created using the June 26, 2025, version of the Panel. It demonstrates the Panel’s ability to analyze and discuss a document the user uploads. As Part One explained, the Panel software can do a lot more than that, but this task is especially interesting because the article’s topic is so hot and hopeful: the new jobs coming our way!

When reading the output of the Panel AIs, note that the Contrarian always tends to support the generic advice to consult a human expert before relying on the results. The Contrarian’s input is always very helpful, but still, the input of this grumpy AI can be wrong too.

NYT Article on Future of Work

The document I selected for Panel discussion is a NYT Magazine article: A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. In a few key areas, humans will be more essential than ever. It was written by former Wired editor Robert Capps and published on June 17, 2025. I have a NYT subscription and so could read it and include a few fair-use quotes, but I cannot share it with you due to copyright. I can also provide my synopsis and short analysis, and then ask for a discussion of the article by our Panel of Experts for Everyone About Anything.

Capps’ article, written in the NYT’s typical smart-alecky style, explores the core question of our times: what distinct human roles will persist or emerge as AI reshapes work? Overall I liked the article; it was well researched and thoughtful (the reader comments were interesting too). Near the beginning, Capps opines:

It’s already clear that A.I. is more than capable of handling many human tasks. But in the real world, our jobs are about much more than the sum of our tasks: They’re about contributing our labor to a group of other humans — our bosses and colleagues — who can understand us, interact with us and hold us accountable in ways that don’t easily transfer to algorithms.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. I agree with this sort-of-deep point.

Lonely robot image by Losey, in photo style, using AI tools.

Capps’ research into the new types of work likely to emerge includes interviews with, or quotes from, these excellent sources (in order of appearance in the article):

I like how Robert Capps divides the new employment opportunities into three categories: trust, integration and taste. Here is his introduction:

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

1st Demo: Panel Discussion

For this demonstration I chose to use the OpenAI model 4.5 (research preview) that OpenAI says is good for writing and exploring ideas. It requires a Team subscription, which I have. I used this supposedly advanced model instead of the current default and free version, ChatGPT 4o. In Part Three I will ask the same questions using the 4o version and you can see the key differences and somewhat different insights provided.

[Note: images and links below were all added by Losey, the words are a full quote of the session.]

USER: Provide a panel discussion of the file uploaded herewith, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4.5 PANEL: Here’s the proposed expert panel to analyze and discuss the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”

Panel Composition:

  • 🎓 Ethical AI Lawyer (Aligning innovation with ethics and legality)
  • 🧠 AI Engineer (Analyzing technical feasibility and challenges)
  • ✨ Creative Storyteller (Exploring cultural narratives and creative implications)
  • 🛡 Risk Officer (Highlighting regulatory, governance, and compliance concerns)
  • ⚔️ Contrarian (Critically examining underlying assumptions)

🎓 Ethical AI Lawyer – Legal & Ethical Implications

The article compellingly highlights new professional roles driven by AI adoption. Key roles like “AI auditors” and “AI ethicists” stand out, as accountability remains a fundamental human requirement in AI-integrated systems. From a legal standpoint, roles like “legal guarantor” and “trust authenticator” are particularly crucial, given precedents like Mata v. Avianca Inc., which emphasizes accountability for AI-generated misinformation. The emergence of these roles aligns closely with evolving frameworks like the EU AI Act, which emphasizes transparency and human oversight in automated decision-making processes.

🧠 AI Engineer – Technical Viability and Challenges

The author accurately pinpoints that “AI integrators” and “AI trainers” will become indispensable, reflecting current market realities where roles like “AI Engineer” and “AI Consultant” are rapidly expanding. Tools like Relativity Aero AI and OpenAI Code Interpreter already illustrate the growing technical ecosystem supporting such jobs. Yet, the potential complexity in managing AI consistency and integration across systems poses significant technical hurdles—highlighted by the need for roles such as “AI plumbers” and “consistency coordinators.”

✨ Creative Storyteller – Cultural and Creative Dynamics

This article rightly emphasizes how taste and creative decision-making roles, such as “story designer” and “A.I. personality director,” will flourish. Rick Rubin’s anecdote powerfully underscores the shift from traditional technical skill toward nuanced human judgment and style. This shift could democratize creativity and innovation, allowing broader participation in traditionally expert-driven fields, reshaping cultural narratives significantly.

Rick Rubin, famous hip-hop producer, real quote, but fake image of Rubin (looks close to me) by AI under my guidance. See Rubin’s 2025 book: The Creative Act: A Way of Being.

🛡 Risk Officer – Regulatory and Compliance Risks

The proposal of new roles such as “trust authenticator,” “AI auditors,” and “AI ethics boards” underscores key compliance and governance imperatives. These positions directly respond to requirements embedded in frameworks like GDPR’s Article 22, which mandates transparency and human oversight in automated decision-making. Ensuring compliance with such frameworks will require vigilant and proactive roles dedicated to overseeing AI operations, particularly in high-stakes areas like healthcare and financial services.

⚔️ The Contrarian – Critical Examination and Ethical Risks

While the optimism about new AI-driven jobs is understandable, there is a problematic assumption that roles involving human accountability and taste can fully mitigate deeper risks of AI adoption. The article’s approach neglects scenarios where human oversight becomes superficial, serving more as a liability shield than genuine oversight. Additionally, it sidesteps ethical dilemmas like potential bias and accountability diffusion, a significant concern given historical examples of algorithmic harm and privacy violations. Per the Ethical Risk Checklist, more robust discussions around bias, privacy, and hallucination risks are necessary.

Would you like to follow up, clarify, or pivot?


The insights and perspectives shared by this panel of virtual experts are designed to foster critical thinking, stimulate discussion, and provide educational value. They are not a substitute for professional advice. Before making decisions or taking action, consult a qualified human expert in the relevant field.

[Note: 4.5 puts this warning at the end of every response and to save space will not be repeated in this transcript.]

USER: Review each of the new jobs that the article mentions may be created by AI.

4.5 PANEL: Here is a summary of each of the 22 new jobs the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You” anticipates will be created by AI:

Trust-Related Jobs

1. A.I. Auditor – Verifies AI systems’ processes, decisions, and data for technical accuracy and accountability.

2. A.I. Translator – Explains complex AI concepts and decisions to non-technical business leaders.

3. Fact Checker / Compliance Officer – Reviews AI-generated documents for factual accuracy and regulatory compliance.

4. Trust Authenticator / Trust Director – Ensures that AI systems maintain integrity and reliability in their outputs.

5. A.I. Ethicist – Develops ethical frameworks and defensible logic to guide AI decision-making.

6. Legal Guarantor (“Sin Eater”) – Legally certifies AI-generated outputs, taking final responsibility for decisions.

7. Consistency Coordinator – Ensures AI outputs remain consistent across various products or representations.

8. Escalation Officer – Provides human empathy and intervention when AI interactions fail or feel too impersonal.

Integration-Related Jobs

9. A.I. Integrator – Specializes in strategically embedding AI tools within business workflows and operations.

10. A.I. Plumber – Diagnoses and repairs complex, layered AI system malfunctions.

11. A.I. Assessor – Evaluates the quality, effectiveness, and suitability of AI models.

12. Integration Specialist – Implements customized AI solutions within specific organizational contexts.

13. A.I. Trainer – Curates and manages training data to optimize AI performance specific to an organization’s needs.

Image by Ralph Losey using AI. This new work is already in play.

14. A.I. Personality Director – Develops and fine-tunes AI personalities to align with a company’s brand identity.

Image by Ralph Losey using AI tools.

15. Drug-Compliance Optimizer (Healthcare-Specific) – Creates AI-driven systems to ensure patients properly follow medication protocols.

16. A.I./Human Evaluation Specialist – Determines optimal task distribution between humans and AI or hybrid teams.

Evaluation Specialist assigning great jobs to everyone. AI image by Losey.

Taste-Related Jobs

17. Article Designer / Story Designer – Uses AI to shape creative content by selecting, arranging, and refining AI-generated material.

18. World Designer – Creates cohesive fictional universes or branded environments using AI tools.

19. Human Resources Designer – Crafts comprehensive organizational policies, training, and culture through AI-enhanced decision-making.

20. Civil Designer – Applies creative, aesthetic judgment to infrastructure projects, guiding AI-generated engineering outputs.

Human civil designers guide the AI workers. Image by Ralph Losey.

21. Differentiation Designer – Strategically positions businesses by creatively leveraging AI to enhance branding, customer interaction, and unique market propositions.

22. Product Designer (Expanded Role) – Oversees the entire design lifecycle of products, from concept to execution, largely through AI-assisted tools and processes.

Each of these roles represents areas where human oversight, judgment, empathy, and creativity complement AI’s technical capabilities, creating potential new employment landscapes.

New complementary hybrid employment image shown in hybrid Sci-Fi and Impressionism style by Losey.

Would you like to follow up, clarify, or pivot?

AI impression of Professor Mollick by Losey using AI.

USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

4.5 PANEL:

🎓 Ethical AI Lawyer — Legal and Ethical Evaluation

The concept of a “Sin Eater,” or Legal Guarantor, is intriguing and potentially essential. It addresses the critical gap where AI systems lack legal personhood and cannot bear legal liability. The precedent set by Mata v. Avianca Inc. highlights that liability for AI-generated errors ultimately rests with human agents or organizations behind them. Mollick’s metaphor accurately emphasizes accountability but raises concerns about moral hazard. If a human is simply rubber-stamping AI outputs without rigorous oversight, legal and ethical responsibility becomes diluted. The “Sin Eater” must therefore actively scrutinize outputs and possess real decision-making authority.

AI Ethics Law Specialist. Already existing job that will grow fast. Ethical AI image by Losey.

🧠 AI Engineer — Technical Realities and Challenges

The practicality of a “Sin Eater” depends heavily on their ability to understand the underlying complexity of AI systems. Such individuals will need substantial technical knowledge or robust AI translators assisting them, as they must discern when AI decisions are trustworthy versus when deeper technical intervention is required. While technically feasible, the effectiveness of the role might diminish if used purely symbolically, rather than practically, turning the “Sin Eater” into merely a ceremonial checkpoint rather than a meaningful oversight mechanism.

✨ Creative Storyteller — Cultural and Social Implications

The narrative behind the “Sin Eater” is compelling. Historically, sin eaters took on the moral transgressions of others to absolve communal guilt. Mollick’s analogy poetically encapsulates the cultural unease around AI accountability. It reflects society’s desire for tangible human responsibility even as AI systems increasingly influence critical decisions. Yet, there’s a risk that this role could cultivate complacency, culturally signaling that as long as someone signs off, AI actions are morally and socially acceptable, potentially masking deeper ethical challenges.

Priest hearing confessions. Medieval Manuscript style by Losey.

🛡 Risk Officer — Regulatory and Governance Concerns

The “Sin Eater” or Legal Guarantor role is inherently aligned with regulatory trends requiring human oversight of AI decisions, like those embodied in the GDPR’s Article 22 and the EU AI Act. While beneficial in theory, the role presents significant risks if misunderstood as a procedural formality rather than a substantive control measure. Organizations could misuse this concept to shield themselves from genuine accountability, undermining regulatory intent and possibly increasing liability exposure if scrutiny reveals inadequate oversight.

⚔️ The Contrarian

The idea of a “Sin Eater” may be rhetorically appealing but practically flawed. The role risks becoming symbolic rather than substantial, potentially absolving organizations of real accountability by providing a superficial veneer of human oversight. Moreover, from an ethical standpoint, it invites individuals to bear responsibility for AI-generated decisions without genuine authority or influence over the AI processes, possibly perpetuating systemic issues like algorithmic bias. The role must be clearly defined, empowered, and integrated into a robust governance framework to be meaningful; otherwise, it will merely obscure responsibility rather than clarify it.

Sin Eater is medieval manuscript style by Losey and AI.

Would you like to follow up, clarify, or pivot?


2nd Demo: Question the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4.5 PANEL: 🎓 Ethical AI Lawyer — “AI Rights Ombudsman”

An AI Rights Ombudsman would mediate and advocate for individuals adversely impacted by AI-driven decisions or systems. Unlike auditors or ethicists who operate within organizations, the Ombudsman would work independently or within regulatory agencies to investigate complaints, arbitrate disputes, and advocate policy improvements. For instance, they might support someone unjustly flagged by facial-recognition surveillance or denied insurance due to opaque AI credit assessments. Their role would be akin to human rights advocates, requiring both technical understanding of AI systems and expertise in constitutional, civil rights, and administrative law to effectively challenge or mediate automated injustices.

AI Rights Ombudsmen will likely all be lawyers by background. Image by Losey in anime style.

🧠 AI Engineer — “Synthetic Reality Architect”

A Synthetic Reality Architect designs comprehensive, realistic digital environments specifically tailored for extended human-AI collaboration and interaction. They will go beyond traditional VR designers by integrating real-time adaptive AI models that dynamically respond to user emotions, intentions, and behaviors. Architects will curate highly personalized digital worlds for professional training, mental-health therapy, immersive education, or interactive storytelling. Their work involves advanced skills in AI-driven generative modeling, behavioral analytics, and immersive technologies, ensuring users remain deeply engaged in believable and responsive virtual environments.

Anime seems the perfect style for these new job concepts. AI images by Losey.

✨ Creative Storyteller — “Emotion Orchestrator”

An Emotion Orchestrator would carefully design the emotional dynamics and narrative structures embedded in human-AI interactions, especially in fields like education, entertainment, and mental health. Their task is to script emotional journeys, carefully calibrating AI responses to evoke desired emotional states—such as curiosity, empathy, excitement, or calmness. For example, they would shape how a therapeutic AI companion subtly navigates sensitive conversations or how an educational AI agent strategically boosts student motivation. The Orchestrator’s expertise would blend psychological insights, narrative storytelling skills, and nuanced understanding of emotional intelligence in AI systems.

AI needs human help with anything having to do with emotions. Anime image by Losey.

🛡 Risk Officer — “Algorithmic Conflict Mediator”

An Algorithmic Conflict Mediator specializes in resolving disputes arising when competing AI systems create contradictory or incompatible outcomes—particularly in high-stakes settings like smart-city infrastructure, financial markets, or logistics management. Their role involves evaluating systemic conflicts, negotiating solutions among stakeholders whose AI systems clash (such as autonomous vehicles interacting unpredictably in traffic or algorithmic financial strategies triggering market volatility), and establishing collaborative frameworks to avoid recurring conflicts. This mediator must have deep understanding of algorithmic decision-making logic, conflict-resolution techniques, and systems integration.

AI conflict mediators in anime style by Losey.

⚔️ The Contrarian — “AI Decommissioner”

An AI Decommissioner is responsible for ethically, safely, and systematically phasing out AI systems that become obsolete, hazardous, or ethically unacceptable. Unlike traditional tech disposals, AI Decommissioners must manage delicate human factors, including workforce transitions, public communications, and historical preservation of significant AI innovations. Their work might involve the careful deactivation of problematic algorithms used in social-media moderation, or controversial facial-recognition tools. They would possess skills spanning technology, ethics, regulatory compliance, and crisis management, ensuring AI retirements occur transparently and responsibly.

Ending the life cycle of AIs will likely be controversial on many levels. Image in anime style by Losey.

Conclusion

In this follow-up article, Ralph Losey demonstrated the new software, Panel of Experts for Everyone About Anything, and at the same time introduced twenty-two possible new jobs made necessary by AI, according to Robert Capps’ article, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. The new jobs are listed in that article in three categories: trust, integration, and taste. Both the article and the Panel analysis emphasize the importance of human involvement in AI processes.

I also asked the Panelists to each come up with one new job type not in the article. They imagined five more interesting jobs, many ideal for lawyers with AI tech skills.

  • AI Rights Ombudsman
  • Synthetic Reality Architect
  • Emotion Orchestrator
  • Algorithmic Conflict Mediator
  • AI Decommissioner

So altogether, this article discussed 27 new types of human employment required by AI technology.

27 new job types identified. Illustrated using anime figures by Losey using AI.

One of the new jobs was examined in detail: the AI “Sin Eater.” This is a proposed job envisioned by Professor Ethan Mollick in which specially trained humans in organizations assume legal accountability for AI-generated outputs, bridging the gap created by AI’s lack of legal personhood. Some on the Panel questioned the effectiveness of the Sin Eater role. My own opinion is mixed; it all depends. Still, I’m certain some kind of human employment like this will emerge, and it will involve legal skills. Insurance companies and their adjusters will likely play a big role here as well.

Humans and AI working together to practice law. They will soon need each other. Image by Losey.

This series will conclude with Part Three, which provides another demonstration of the software. That demonstration will be driven by the free OpenAI model 4o instead of the subscription model 4.5 demonstrated in Part Two. The free model is surprisingly good, and even if you can afford other models, you may want to use ChatGPT 4o, if for nothing else, to provide a second opinion.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Two: a demonstration analyzing an article that predicts new jobs created by AI. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me.

Click here to listen to the podcast.

Ralph Losey Copyright 2026 – All Rights Reserved.