Will AI Take My Job? OpenAI’s New Policy, Rising Cybersecurity Risks, and What Comes Next

Ralph Losey, April 17 2026

Introduction: The Urgency of the Question

Will AI take my job?

Image by Ralph Losey using AI tools.

That question is no longer speculative. It is now front-page relevant, driven not only by rapid advances in AI, but by two recent events that reveal how quickly things are changing. On April 6, 2026, OpenAI released its Industrial Policy for the Intelligence Age, openly warning that the transition to superintelligence is already underway. Just days earlier, a human error at Anthropic briefly exposed the source code of one of the world’s most advanced AI systems. It was quickly copied and distributed before the mistake was corrected. It is reportedly now in the hands of criminal hackers and enemy states worldwide. Together, these developments make one thing clear: the future of work is arriving faster, and far less predictably, than most expected.

In my recent article, What People Want To Know About AI: Top 10 Curiosity Index, Gemini AIs and I analyzed global search patterns and online discussions to identify the public’s most urgent concerns. The number one question, “How does AI work?”, was addressed in my follow-up article, Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions, where we explained the technology across five levels, from a child’s guessing game to matrix algebra.

But the second question is different.

Will AI take my job—and what should I do about it?

This accounted for roughly 18% of all inquiries. And unlike the first question, it is not driven by curiosity. It is driven by anxiety, something I hear and feel in conversations about AI with all kinds of people.

All images in this article are by Ralph Losey using Gemini AI tools.

This article focuses on that anxiety: economic security and the future of work. It also confronts the issue people increasingly want answered but rarely get: the timeline. When might AI reach a level capable of performing most cognitive work better than us? Because if that point is near, and recent signals suggest it is, then the implications are profound. Most knowledge-based jobs would be affected, and the resulting disruption to the economy and social order could be significant.

The Policy Response: OpenAI’s Industrial Blueprint


The urgency of this economic question is not limited to the public. It is also front and center for the corporations building the technology. On April 6, 2026, OpenAI released Industrial Policy for the Intelligence Age (“Policy Statement”), and it is likely that other leading AI companies will soon follow. This document moves beyond engineering into economic and social policy. It begins with a blunt premise: the transition to superintelligence is already underway and will reshape how organizations operate, how knowledge is created, and how people find meaning and opportunity.

The Policy Statement does not minimize the disruption ahead, or the speed at which it may arrive. It acknowledges that AI will disrupt jobs and reshape entire industries at a scale and pace unlike any prior technological shift. At the same time, OpenAI’s leadership emphasizes that the outcome is not predetermined. Whether this transformation leads to shared prosperity or to concentrated wealth and widespread displacement will depend on decisions made now, by governments, corporations, institutions, and individuals.

I encourage you to read the Policy Statement in full. It addresses far more than job security. My focus here is narrower: the economic implications. On pages 3 and 4, the Policy Statement explains:

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education. 

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.  …

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation. 

OpenAI released a companion video the same day as the Policy Statement, titled Sam Altman on Building the Future of AI (“Video”). At 26:08, the discussion turns directly to jobs. Joshua Achiam, OpenAI’s Chief Futurist, addresses the issue candidly:

On getting workers involved in AI, I actually, I kind of want to back up and just acknowledge an elephant in the room, which is that a lot of workers are concerned about AI; they’re worried about what AI means for them. They are not immediately excited at the prospect of figuring out, all right, how are we going to use AI in our workplace? They’re thinking, oh my gosh, is the AI going to replace me?

The public is no longer satisfied with abstract reassurances. People want timelines. They want industry-specific forecasts. They want to know whether their job will still exist in five years. Both the Policy Statement and the Video point in the same direction: highly capable AI systems are coming quite soon, much faster than most expected. 

Better get it right, Sam.

More Training Now for Job Security Tomorrow?

For many years my usual answer to the jobs question has been “more training now.” That answer may not cut it today for a majority of people, especially if AI advances too fast, too far. For instance, in Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (Oct. 2024) I opined:

AI will create entirely new jobs. For instance, for lawyers, new jobs pertaining to AI regulations are emerging. AI will also change existing jobs for the better. It is already replacing the most boring parts of our work, leaving us to focus on the more rewarding and human aspects. Moreover, it is true that no worker will be replaced by an AI, they will be replaced by a human that knows how to use AI.

Now I am not so sure, and neither is Sam Altman. The prospect of superintelligence is no longer a distant future. It is a planning horizon.

To address the question of human employment in a world of increasingly powerful AI, an issue well beyond my unaided ability to resolve, I turn to a Panel of AI Experts. For this exercise, I use OpenAI-based models that I have fine-tuned for analysis across multiple disciplines. They are not superintelligent, but they are highly capable and broadly informed. They created a panel of five AI-persona experts to try to answer these questions. The only persona I required was the “devil’s advocate,” because I have found that AI type indispensable to brainstorming exercises like this. I did not specify any other character, not even the first character chosen, the “Centaur” Professional, although I must admit he sounds just like me.

The Human in the Loop should remain in charge and verify AI work.

Voice 1: The “Centaur” Professional (The Hybrid Advocate)

Persona: The pragmatic professional who has fully integrated AI, but remains firmly in control. For background see my From Centaurs To Cyborgs: Our evolving relationship with generative AI (April 2024). Except for the citations that follow, all of the language from here to the Conclusion was written by the AIs, not me.

The Perspective: Let’s begin with a reality check. You’re more likely to lose your job to someone using AI than to AI itself. That single sentence cuts through most of the noise.

The fear of immediate, total automation misunderstands how work actually happens. We do not operate on smooth technological curves; we operate on what researchers call a “jagged frontier.” AI excels at certain tasks and fails at others, often unpredictably. This is why hybrid human-AI teams—Centaurs—consistently outperform both humans alone and AI alone. Recent research suggests improvements approaching 70% in certain knowledge-work domains. [See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see the research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my Navigating the AI Frontier (Oct. 2024).]

In law, AI can draft a brief in seconds. But it cannot sign it. It does not carry malpractice insurance. It does not stand before a judge. It cannot be sanctioned—or disbarred.

In medicine, AI may catch patterns a doctor misses. But patients do not sue algorithms—they sue physicians.

Sam Altman himself has described using AI to analyze medical data more effectively than his own doctor. Yet no serious observer concludes from this that doctors are obsolete. The conclusion is simpler:

Doctors who use AI will replace doctors who do not. The same applies across professions.

The future belongs to the Centaur—the professional who augments judgment with machine intelligence, but never abdicates responsibility.

Your job is not disappearing. The drudgery is.

As I explained in The Great AI Transition: From Tool to Teammate (June 2024), “the real shift is from doing the work to supervising it, humans move up the chain of responsibility, not out of the system.” [The AI hallucinated this article and cite, which obviously was supposed to refer to one of my articles. I am embarrassed to say that the title and quote sounded so plausible that I had to look it up to be sure. I then called the AI on this and it admitted to the hallucination and apologized.]

The “Centaur” Professional (The Hybrid Advocate)

Voice 2: The “Sin-Eater” (AI Risk & Accountability Officer)

Persona: The human firewall—absorbing legal and ethical responsibility for AI outputs.

The Perspective: The Centaur is right—but incomplete. Because every gain in capability creates a parallel demand for accountability.

Wharton’s Ethan Mollick coined the term “Sin-Eater” to describe a new role: the human who vouches for AI-generated work and bears the consequences when it fails. That role is not theoretical—it is inevitable.

As AI systems scale from minutes to months-long projects, the need for verification, auditing, and compliance will explode. OpenAI’s own policy proposals emphasize the need for an “AI trust stack”—auditing regimes, validation systems, and human oversight at every layer.

And then there is cybersecurity. Our current software ecosystem is already vulnerable. AI will amplify both offense and defense—but offense often scales faster. Sam Altman has warned openly: AI will become extraordinarily good at identifying software vulnerabilities. That means bad actors will too.

This creates a massive new labor demand. Not for passive users—but for active defenders. We will need an army of human-AI teams to audit, test, and secure critical systems. This is not optional. It is civilizational maintenance.

The “Sin-Eater” (AI Risk & Accountability Officer)

Voice 3: The “Startup-in-a-Box” Entrepreneur

Persona: The solo builder with the leverage of a 100-person company.

The Perspective: Why is the conversation so focused on saving existing jobs? We are on the verge of the largest expansion of individual capability in human history.

Sam Altman has spoken repeatedly about a future where one person can build what once required an entire company. AI agents will handle coding, marketing, accounting, logistics—everything that currently creates friction.

The barriers to entry are collapsing.

Today, a brilliant nurse or mechanic might never start a business—not because of lack of skill, but because of administrative overhead. Tomorrow, that overhead disappears.

This is the rise of the micro-entrepreneurial economy.

Access to powerful AI tools—what some call a “Right to AI”—may become as foundational as access to electricity. With it, millions can create, compete, and innovate independently.

Yes, large bureaucracies may shrink. But they will be replaced by networks of highly capable individuals.

The question is not just “Will I lose my job?” It is also: “What could I build if friction disappeared?”

The “Startup-in-a-Box” Entrepreneur

Voice 4: The “Human Edge” Advocate

Persona: The guardian of empathy, presence, and human connection.

The Perspective: The technologists talk about efficiency. But humans do not live on efficiency alone.

There is a category of value AI cannot replicate: being human with another human.

Josh Achiam of OpenAI described visiting a fully automated café. It worked perfectly. It was fast. It was efficient. It was also—his word—unfulfilling. He missed the human interaction.

This is the “Human Edge.”

As AI absorbs administrative and analytical burdens, the value of human-centered work will rise dramatically. Teaching, nursing, therapy, caregiving—these are not residual categories. They are foundational. Today, these professions are underpaid and overburdened. That is a policy failure—not a market inevitability.

If AI delivers the productivity gains promised, then society must redirect that surplus toward elevating human-centered roles. The OpenAI policy proposals explicitly call for this—training pipelines, wage support, and institutional investment.

The future of work is not less human. It is more human than ever.

The “Human Edge” Advocate

Voice 5: The Contrarian “Devil’s Advocate”

Persona: Skeptic. Realist. Immune to hype.

The Perspective: Let’s stop romanticizing. Disruption is coming—and it will be uneven, painful, and fast. Telling displaced workers to “become entrepreneurs” or “pivot to caregiving” is not a plan. It is a slogan. We need structural reform.

If AI concentrates wealth in capital rather than labor, then the tax system must adapt. That means shifting the burden toward capital gains, corporate profits, and possibly automated labor.

We must also decouple basic security from employment. Healthcare, retirement, and income stability cannot depend entirely on whether your job survives automation.

And yes—let’s talk about infrastructure. We have a decaying physical world and a fragile digital one. Meanwhile, AI companies are generating enormous wealth. That wealth must be reinvested.

A modern New Deal—focused on infrastructure, cybersecurity, and energy—is not just desirable. It is necessary.

This is not anti-capitalist. It is pro-stability.

The Contrarian “Devil’s Advocate”

Conclusion: Responsibility at the Edge of Superintelligence

This panel reveals a truth that resists simplification: the future of work in the age of AI is difficult to predict. At this point it could go either way.

Personally, I am now more inclined to agree with the curmudgeon Contrarian than the mini-me Hybrid Advocate. That is a change for me. It reflects a growing concern that the risks may be advancing faster than the benefits. The real question is whether we, and our institutions, can adapt quickly enough.

The practical advice is straightforward. Begin serious AI training now. At the same time, explore work where the human edge still matters. You may find not only greater security, but greater satisfaction.

Above all, hold the new centers of power, economic and technological, to their obligations. Stand for both human rights and progress. We should be able to do both. In today’s world, we have no choice. It is too dangerous to stand still.

Superintelligence may drive the engine of the future. But I continue to insist that humanity must remain firmly and responsibly at the wheel.


Ralph Losey Copyright 2026. All Rights Reserved.

