Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns – Part One


Ralph Losey, August 4, 2025.

Humans are inherently pattern-seeking creatures. Our ancestors depended upon recognizing recurring patterns in nature to survive and thrive, such as the changing of seasons, the migration of animals and the cycles of plant growth. This evolutionary advantage allowed early humans to anticipate danger, secure food sources, and adapt to ever-changing environments. Today, the recognition and interpretation of patterns remains a cornerstone of human intelligence, influencing how we learn, reason, and make decisions.

Pattern recognition is also at the core of artificial intelligence. In this article, I will test the ability of advanced AI, specifically ChatGPT, to uncover meaningful new patterns across different fields of knowledge. The goal is ambitious: to discover genuine epiphanies—true moments of insight that expand human understanding and open new doors of knowledge—while avoiding the pitfalls of apophenia, the human tendency to perceive illusions or false connections. This experiment probes an age-old tension: can AI reliably distinguish between genuine breakthroughs and compelling yet misleading illusions?

Video by Ralph Losey using SORA AI.

We will begin by exploring the risks of apophenia, understanding how this psychological tendency can mislead human and possibly AI perception. Throughout, videos created by AI will help illustrate key points and vividly communicate these ideas. There are twelve new videos in Part One and another fourteen in Part Two.

Are the patterns real? Video by Ralph Losey using SORA AI.

Apophenia: Avoiding the Pitfalls of False Patterns

We humans are masters of pattern detection, but this ability has its hindrances. Primary among them is our limited information and knowledge, but there is also our tendency to see patterns that are not there. We tend to assume the stirring we hear in the bushes is a tiger ready to pounce when really it is just the breeze. Evolution tends to favor this bias. So, although we can and frequently do miss real patterns, failing to recognize the underlying connections between things, we often make them up too.

Here it is hoped that AI will boost our abilities on both fronts. It will help us to uncover true new patterns, genuine epiphanies, moments where profound insights emerge clearly from the complexity of data. At the same time, AI may expose illusions, false connections we mistakenly believe are real due to our natural cognitive biases. Even though we have made great progress over the millennia in understanding the Universe, we still have a long way to go to see all of the patterns, to fully understand the Universe, and to free ourselves of superstitions and delusions. We are especially weak at seeing patterns intertwined with different fields of knowledge.

Apophenia is a mental tendency, in extreme cases a disorder, in which people think they see patterns that are not there and sometimes even hallucinate them. Most of the time when people see such patterns, for instance, faces in the clouds, they know they cannot be real and there is no problem. But sometimes when people see other images, for instance, rocks on Mars that look like a face, or even images on toast, they delude themselves into believing all sorts of nonsense. For instance, a decade-old grilled cheese sandwich that supposedly bore the image of the Virgin Mary, shown below, sold on eBay to an online casino in 2004 for $28,000.

In a similar vein, some people suffering from apophenia are prone to posit meaning, even causality, in unrelated random events. Sometimes, though, the perception of a new pattern is a spark of genius that is later verified. Epiphanies are rare but transformative moments, like Einstein's visualization at age sixteen of chasing a beam of light, Newton's realization of gravity beneath the apple tree, or the insights behind Darwin's theory of evolution. Such new pattern recognitions genuinely advance human understanding, and can even detect real tigers in the bush. Apophenia, by contrast, deceives with illusions: patterns that seem meaningful but lead nowhere.

It is probably more often the case that when people “see” new connections and then go on to act upon them with no attempts to verify, they are dead wrong. When that happens, psychologists call this apophenia, the tendency to see meaningful patterns where none exist. This can lead to strange and aberrant behaviors: burning of witches, superstitious cosmology theories, jumping at shadows, addiction to gambling.

Unfortunately, it is a natural human tendency to think you see meaningful patterns or connections in random or unrelated data. That is a major reason casinos make so much money from poor souls suffering from a form of apophenia called the Gambler’s Fallacy. Careful scientists look out for defects in their own thinking and guide their experiments accordingly.
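The Gambler's Fallacy is easy to test empirically. A minimal Python sketch, simulating a fair coin, shows that a run of heads does nothing to the odds of the next flip, however strongly our pattern-seeking brains insist that tails is "due":

```python
import random

def next_flip_after_streak(streak_len: int, trials: int = 100_000) -> float:
    """Estimate P(heads) on the flip that immediately follows a run of
    `streak_len` consecutive heads, using a simulated fair coin."""
    rng = random.Random(42)            # fixed seed for a reproducible demo
    following_flips = heads_after = 0
    run = 0                            # current streak of heads
    for _ in range(trials):
        heads = rng.random() < 0.5
        if run >= streak_len:          # this flip follows a qualifying streak
            following_flips += 1
            heads_after += heads
        run = run + 1 if heads else 0
    return heads_after / following_flips

# After 5 heads in a row, the next flip is still about 50% heads:
print(round(next_flip_after_streak(5), 2))
```

The "pattern" a gambler sees in a streak is exactly the kind of false connection a careful experimenter, human or AI, must test before believing.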

In everyday life, apophenia can also cause some people, even scientists, academics and professionals, to have phobic fears of conspiracies and other severe paranoid delusions. Think of John Nash, the Nobel Prize-winning mathematician, and the movie A Beautiful Mind, which so dramatically portrayed his paranoid schizophrenia and involuntary hospitalization in 1959. Think of politics in the U.S. today. Are there really lizard people among us? In some cases, as we have seen with Nash, apophenia can be a symptom of severe schizophrenia.

A man looking distressed, surrounded by glowing numbers and mathematical symbols, evoking a sense of confusion and complexity.
Mental anguish & insanity from severe apophenia. Image by Losey using Sora inspired by Beautiful Mind movie.

The Greek roots of the now generally accepted medical term apophenia are:

  • Apo- (ἀπο-): Meaning “away from,” “detached,” “from,” “off,” or “apart”.
  • Phainein (φαίνειν): Meaning “to show,” “to appear,” or “to make known”.

The word was coined by Klaus Conrad, an otherwise apparently despicable person whom I am reluctant to cite, but feel I must, due to the general acceptance of the word and diagnosis today. Conrad was a German psychiatrist and Nazi who experimented on German soldiers returning from the eastern front during WWII. He coined the term in his 1958 publication on this mental illness. Per Wikipedia:

He defined it as “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness”.[4] [5] He described the early stages of delusional thought as self-referential over-interpretations of actual sensory perceptions, as opposed to hallucinations.

Apophenia has also come to describe a human propensity to unreasonably seek definite patterns in random information, such as can occur in gambling.

Apophenia can be considered a commonplace effect of brain function. Taken to an extreme, however, it can be a symptom of psychiatric dysfunction, for example, as a symptom in schizophrenia,[7] where a patient sees hostile patterns (for example, a conspiracy to persecute them) in ordinary actions.

Apophenia is also typical of conspiracy theories, where coincidences may be woven together into an apparent plot.[8]

Video by Ralph Losey using SORA AI.

Can AI Be Infected with a Human Illness?

It is possible that generative AI, based as it is on human language, may have the same propensities. Whether it does is as yet unknown, so my experiments here were on the lookout for such errors. Apophenia could be one of the causes of AI hallucinations.

In information science, a mistake of seeing a connection that is not real, an apophenia, leads to what is called a false positive. This technical term is well known in e-discovery law, where AI is used to search large document collections. When the patterns analyzed suggest a document is relevant, and it is not, that mistake is called a false positive. It is like a human apophenia. The AI can also detect patterns that cause it to predict a document is irrelevant when in fact the document is relevant; that is a false negative. There was a pattern, a connection, that was not seen. That can be a bad thing in e-discovery because it often leads to withholding production of a relevant document, which can in turn lead to court sanctions.

In e-discovery it is well known that AI consistently has far lower false positive and false negative rates than human reviewers, at least in large document reviews. Generative AI may also be more reliable and astute than we are, but maybe not. This is a new field. So we should always be on the lookout for false positives and false negatives in AI pattern recognition. That is one lesson I learned well, and sometimes the hard way, in my ten years of working with predictive coding type AI in e-discovery (2012-2022). In the experiments described in this article we will look for apophenic mistakes.
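These false positive and false negative counts come straight from a review's confusion matrix. A minimal Python sketch, using hypothetical document counts, shows the standard quality-control calculations (precision, recall, and elusion) used in e-discovery:

```python
def review_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard quality-control metrics for a document review.
    The arguments are counts from the review's confusion matrix:
    fp = false positives (irrelevant docs predicted relevant, the machine 'apophenia'),
    fn = false negatives (relevant docs whose patterns were not seen)."""
    return {
        "precision": tp / (tp + fp),  # of docs predicted relevant, share truly relevant
        "recall":    tp / (tp + fn),  # of truly relevant docs, share found
        "elusion":   fn / (fn + tn),  # relevant docs hiding in the discard pile
    }

# Hypothetical review of 10,000 documents, 1,000 of them truly relevant:
m = review_metrics(tp=900, fp=100, fn=100, tn=8900)
print(m)  # precision 0.9, recall 0.9, elusion ~0.011
```

A low recall (many false negatives) is what risks withholding relevant documents; a low precision (many false positives) is the machine equivalent of seeing tigers in the breeze.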

Video by Ralph Losey using SORA AI.

It is my hope that advanced AI, properly trained and validated, can provide a counterbalance to human gullibility by rigorously filtering signal from noise. Unlike the human brain, which often leaps to conclusions, AI can be programmed to ground its pattern recognition in evidence, statistical rigor, and cross-validation, if we build it that way and supervise it wisely.

Still, we must beware that the pattern-recognizing systems of AI may suffer from some of our delusionary tendencies. The best practices discussed here will consider both the positive and negative aspects of AI pattern recognition. We must avoid the traps of apophenia. We must stay true to the scientific method and verify any new patterns purportedly discovered. Thus all opinions reached here will necessarily be lightly held and subject to further experimentation by others.
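One concrete way to verify a purportedly discovered pattern is a permutation test: shuffle the data many times and ask how often chance alone produces a connection as strong as the one claimed. If shuffled noise matches the "pattern" just as readily, the discovery is likely apophenia. A minimal sketch with made-up data (the sequences and seeds here are illustrative, not from my experiment):

```python
import random

def pearson(a, b):
    """Plain Pearson correlation, no external libraries."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def permutation_p_value(x, y, trials=2000, seed=0):
    """Fraction of random shuffles of y that look at least as
    'patterned' as the real data. A high value suggests apophenia."""
    observed = abs(pearson(x, y))
    rng = random.Random(seed)
    shuffled = list(y)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        if abs(pearson(x, shuffled)) >= observed:
            extreme += 1
    return extreme / trials

x = list(range(30))
y_real = [2 * v + 3 for v in x]   # a genuine linear relationship
y_fake = x[:]
random.Random(123).shuffle(y_fake)  # same values, order randomized

print(permutation_p_value(x, y_real))  # near 0.0: the pattern survives scrutiny
print(permutation_p_value(x, y_fake))  # typically far larger: likely an illusion
```

The same logic scales up: real epiphanies should survive being tested against shuffled, resampled, or held-out data; illusions should not.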

Video by Ralph Losey using SORA AI.

From Data to Insight: The Power of New Pattern Recognition

Modern AI models, including neural networks and transformer architectures like GPT-4, excel at uncovering subtle patterns in massive datasets far beyond human capability. This ability transforms raw data into actionable insights, thereby creating new knowledge in many fields, including the following:

Protein Structures: Google DeepMind's AlphaFold has already revolutionized protein structure prediction, achieving high success rates in predicting the 3D shapes of proteins from their amino acid sequences. This ability is crucial for understanding protein function and designing new drugs and medical therapies. Demis Hassabis and John Jumper of DeepMind shared the 2024 Nobel Prize in Chemistry for their work on AlphaFold.

A scientist analyzes molecular structures and data visualizations related to AlphaFold 2 on a futuristic screen, featuring protein models and DNA sequences.
Image by Ralph Losey using his Visual Muse AI tool.

Medical Science. Generative AI models are now being used extensively in medical research, including the analysis and proposal of new molecules with desired properties, to discover new drugs and accelerate FDA approval. For example, Insilico Medicine uses its AI platform, Pharma.AI, to develop drug candidates, including ISM001_055 for idiopathic pulmonary fibrosis (IPF). Insilico Medicine lists over 250 publications on its website reporting on its ongoing research, including a recent paper on its IPF discovery: A generative AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis: a randomized phase 2a trial (Nature Medicine, June 3, 2025). This discovery is especially significant because it is the first entirely AI-discovered drug to reach FDA Phase II clinical trials. Below is an infographic from Insilico Medicine showing some of its current work:

Infographic displaying the statistics and achievements of Insilico Medicine, an AI-driven biotech company, detailing development candidates, IND approvals, study phases, and global presence.
Insilico PDF infographic, found 7/23/25 in its 2-pg. overview.

See also Fronteo, a Japan-based research company, and its Drug Discovery AI Factory.

Materials Science. Google DeepMind's Graph Networks for Materials Exploration ("GNoME") has already identified millions of new stable crystals, significantly expanding our knowledge of materials science. This discovery represents an order-of-magnitude increase in known stable materials. Merchant and Cubuk, Millions of new materials discovered with deep learning (DeepMind, 2023). See also 10 Top Startups Advancing Machine Learning for Materials Science (6/22/25).

Climate Science and Environmental Monitoring. Generative AI models are beginning to improve climate simulations, leading to more accurate predictions of climate patterns and future changes. For example, Microsoft’s Aurora Forecasting model is trained on Earth science data to go beyond traditional weather forecasting to model the interactions between the atmosphere, land, and oceans. This helps scientists anticipate events like cyclones, air quality shifts, and ocean waves with greater accuracy, allowing communities to prepare for environmental disasters and adapt to climate change. See e.g., Stanley et al, A Foundation Model for the Earth System (Nature, May 2025).

Video by Losey using Sora AI.

Historical and Artistic Revelations

AI is also helping with historical research. A new AI system was recently used to analyze one of the most famous Latin inscriptions: the Res Gestae Divi Augusti. It has always been thought to be simply an autobiographical inscription, its title literally translating from the Latin as "Deeds of the Divine Augustus." But when a specialty generative AI, Aeneas (again based on Google's models), compared this text with a large database of other Latin inscriptions, the famous Res Gestae Divi Augusti was found to share subtle language parallels with Roman legal documents. The analysis uncovered "imperial political discourse," or messaging focused on maintaining imperial power, an insight, a pattern, that had never been seen before. Assael, Sommerschield, Cooley, et al., Contextualizing ancient texts with generative neural networks (Nature, July 2025).

The paper explains that the communicative power of these inscriptions is shaped not only by the written text itself "but also by their physical form and placement," and that "about 1,500 new Latin inscriptions are discovered every year." So the patterns analyzed included not only the words, but a number of other complex factors. The authors assert in the Abstract that their work with AI analysis shows:

… how integrating science and humanities can create transformative tools to assist historians and advance our understanding of the past.

Roman citizens reacting to propaganda. A Ralph Losey video.

In art and music, pattern detection has mapped the evolution of artistic styles in tandem with technological change. In a 2025 studio-lab experiment, a generative AI bass model ("BassNet") unexpectedly rendered multiple melodic lines within single harmonic tones, exposing previously unnoticed structures in popular music bass compositions. This discovery was written up by Deruty and Grachten, Insights on Harmonic Tones from a Generative Music Experiment (arXiv, June 2025). Their paper shows how AI can surface new musical patterns and deepen our understanding of human auditory perception.

As explained in the Abstract:

During a studio-lab experiment involving researchers, music producers, and an AI model for music generating bass-like audio, it was observed that the producers used the model’s output to convey two or more pitches with a single harmonic complex tone, which in turn revealed that the model had learned to generate structured and coherent simultaneous melodic lines using monophonic sequences of harmonic complex tones. These findings prompt a reconsideration of the long-standing debate on whether humans can perceive harmonics as distinct pitches and highlight how generative AI can not only enhance musical creativity but also contribute to a deeper understanding of music.

Video by Losey using Sora AI.

Legal Practice: From Precedent to Prediction

The legal profession has benefited from traditional rule-based and statistical AI for over a decade, with predictive coding and similar applications. It is now starting to apply the new generative AI models in a variety of ways. For instance, generative AI can be used to uncover latent themes and trends in judicial decisions that human analysis has overlooked.

This was done in a 2024 study using ChatGPT-4 to perform a thematic analysis on hundreds of theft cases from Czech courts. Drápal, Savelka, Westermann, Using Large Language Models to Support Thematic Analysis in Empirical Legal Studies (arXiv, February 2024).

The goal of the analysis was to discover classes of typical thefts. GPT-4 analyzed fact patterns described in the opinions and human experts did the same. The AI not only replicated many of the themes identified by the human experts but, as the report states, also uncovered a new one that the humans had missed: a pattern of "theft from gym" incidents. This shows that generative AI can sift through vast case datasets and detect nuanced fact patterns, or criminal modus operandi, that were previously undetected by experts (here, three law students under the supervision of a law professor).

Video by Losey using Sora AI.

Another study in early 2025 applied Anthropic's Claude 3 Opus to analyze thousands of UK court rulings on summary judgment, developing a new functional taxonomy of legal topics for those cases. Sargeant, Izzidien, Steffek, Topic classification of case law using a large language model and a new taxonomy for UK law: AI insights into summary judgment (Springer, February 2025). The AI was prompted to classify each case by topic and identify cross-cutting themes.

The results revealed distinct patterns in how summary judgments are applied across different legal domains. In particular, the AI found trends and shifts over time and across courts, insights that allow a new, improved understanding of when and in what types of cases summary judgments tend to be granted. These patterns were found despite the fact that U.K. case law lacks traditional topic labels. This kind of AI-augmented analysis illustrates how generative models can discover hidden trends in case law that practitioners can put to practical use.

Surprising abilities of AI helping lawyers. Video by Losey.

Even sitting judges have begun to leverage generative AI to inform their decision-making, revealing new analytical angles in litigation. A notable 2024 concurrence by Judge Kevin Newsom of the Eleventh Circuit admitted to experimenting with ChatGPT to interpret an ambiguous insurance term (whether an in-ground trampoline counted as "landscaping"). Snell v. United Specialty Ins. Co., 102 F.4th 1208 (11th Cir., 5/28/24). See also Ralph Losey, Breaking News: Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (e-Discovery Team, June 3, 2024) (includes the full text of the opinion and Appendix with Losey's inserted editorial comments and praise of Judge Newsom's language).

After querying the LLM, Judge Newsom concluded that “LLMs have promise… it no longer strikes me as ridiculous to think that an LLM like ChatGPT might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts.” In other words, the generative AI was used as a sort of massive-scale case law analyst, tapping into patterns of ordinary usage across language data to shed light on a legal ambiguity. This marked the first known instance of a U.S. appellate judge integrating an LLM’s linguistic pattern analysis into a written opinion, signaling that generative models can surface insights on word meaning and context that enrich judicial reasoning.

A digital illustration of a judge in a courtroom setting, seated at a desk with a gavel. The judge, named Judge Newsom, is shown in a professional attire with glasses, and a holographic display behind him showing data and AI-related graphics, conveying a futuristic legal environment.
Image by Ralph Losey using his Visual Muse AI.

My Ask of AI to Find New Patterns

Now for the promised experiment to try to find at least one new connection, one previously unknown, undetected pattern linking different fields of knowledge. I used a combination of existing OpenAI and Google models to help me in this seemingly quixotic quest. To be honest, I did not have much real hope for success, at least not until release of the promised ChatGPT5 and whatever Google calls its counterpart, which I predict will be released the following week (or day). Plus, the whole thing seemed a bit grandiose, even for me, to try to get AI to boldly go where no one has gone before.

Absurd, but still I tried. I won't go through all of the prompt engineering involved, except to say it involved my usual complex, multi-layered, multi-prompt, hybrid-multimodal approach. I tempered my goals by directing ChatGPT4o, when I started the process, to seek new patterns that were useful: not Nobel Prize-winning breakthroughs, just useful new patterns. I directed it to find five such new patterns and gave it some guidance as to fields of knowledge to consider, including, of course, law. I asked for five new insights thinking that with such a big ask I might get one success.

Note, I write these words before I have received the response, but after I have written the above to help guide ChatGPT4o. Who knows, it might achieve some small modicum of success. Still, it feels like a crazy quixotic quest. Incidentally, Miguel de Cervantes' (1547-1616) character Don Quixote (1605) does seem to be a person afflicted with apophenia. Will my AI suffer a similar fate?

Don Quixote in modern world. Video by Losey using Sora.

I designed the experiment specifically with this tension in mind between epiphanies, representing genuine insights and real advances in knowledge, and illusions, which are merely plausible yet misleading patterns. One of my goals was to probe AI’s capacity to distinguish one from the other.

Overview of Prompt Strategy and Time Spent

First, I spent about an hour with ChatGPT4o to set up my request by feeding it a copy of the article as written so far. I also chatted with it about the possibility of AI finding new patterns between different fields of knowledge. Then I just told ChatGPT4o to do it: find a new interconnecting pattern. ChatGPT4o "thought" (processed only) for just a few minutes. Then it generated a response that purported to provide me with the requested five new patterns. It did so based on its existing training and review of this article.

As requested, it did not use its browser capabilities to search the web for answers. It just "looked within" and came up with five insights it thought were new. Almost that easy. I lowered my expectations accordingly before reading the output.

That was the easy part. After reading the response, I spent about 14 hours over the next several days doing quality control. The QC work used multiple other AIs, from both OpenAI and Google, to have them go online and research these claims, evaluate their validity, both good and bad, engage in "deep-think," look for errors, especially signs of AI apophenia, and otherwise invite contrarian-type criticisms from them. After that, I also asked the other AIs for suggested improvements they might make to the wording of the five claims and to rank them by importance. The various rewordings were not too helpful, but the rankings were, and so were many of the editorial comments.

The 14 hours of QC does not include the approximately six hours of machine time by the Gemini and OpenAI models to do deep think and independent research on the web to verify or disprove the claims. My 14 hours did include traditional Google searches to double-check all citations as per my "trust but verify" motto. It also included my time to read (I'm pretty fast) and skim most of the key articles that the AI research turned up, although frankly some of the articles cited were beyond my knowledge levels. I tried to up my game, but it was hard. These other models also generated hundreds of pages of both critical and supportive analysis, which I also had to read. Finally, I probably put another 24 hours into research and writing this article (it took over a week), so this is one of my larger projects. I did not record the number of hours it took to design and generate the 26 videos because that was recreational.

Surrealistic depiction of time in robot space by a Ralph Losey video.

Part Two of this article is where I will make the reveal. Was this experiment another comic story of a Don Quixote type (me) and his sidekick Sancho Panza (AI), lost in an apophenic neurosis? Or perhaps it is another story altogether? Neither hot nor cold? Stay tuned for Part Two and find out.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns. Part One. Hear the young AIs talk about this article for 25 minutes. They wrote the podcast, not me.

An illustration featuring two anonymous AI podcasters sitting in front of microphones, discussing the theme 'Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns.' The background has a digital, tech-inspired design.
Click here to listen to the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Navigating AI’s Twin Perils: The Rise of the Risk-Mitigation Officer


Ralph Losey, July 28, 2025

Generative AI is not just disrupting industries—it is redefining what it means to trust, govern, and be accountable in the digital age. At the forefront of this evolution stands a new, critical line of employment: AI Risk-Mitigation Officers. This position demands a sophisticated blend of technical expertise, regulatory acumen, ethical judgment, and organizational leadership. Driven by the EU’s stringent AI Act and a rapidly expanding landscape of U.S. state and federal compliance frameworks, businesses now face an urgent imperative: manage AI risks proactively or confront severe legal, reputational, and operational consequences.

Click to see photo come to life. Image and movie by Losey with AI.

This aligns with a growing consensus: AI, like earlier waves of innovation, will create more jobs than it eliminates. The AI Risk-Mitigation Officer stands as Exhibit A in this next wave of tech-era employment. See, e.g., my last series of blogs, Part Two: Demonstration by analysis of an article predicting new jobs created by AI (27 new job predictions); Part Three: Demo of 4o as Panel Driver on New Jobs (more experts discuss the 27 new jobs). See also Robert Capps, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You (NYT Magazine, June 17, 2025). In a few key areas, humans will be more essential than ever.

Risk Mitigation Officer team image by Losey and AI.

Defining the Role of the AI Risk-Mitigation Officer

The AI Risk-Mitigation Officer is a strategic, cross-functional leader tasked with identifying, assessing, and mitigating risks inherent in AI deployment.

While Chief AI Officers drive innovation and adoption, Risk-Mitigation Officers focus on safety, accountability, and compliance. Their mandate is not to slow progress, but to ensure it proceeds responsibly. In this respect, they are akin to data protection officers or aviation safety engineers, guardians of trust in high-stakes systems.

This role requires a sober analysis of what can go wrong—balanced against what might go wonderfully right. It is a job of risk mitigation, not elimination. Not every error can or should be prevented; some mistakes are tolerable and even expected in pursuit of meaningful progress.

The key is to reduce high-severity risks to acceptable levels—especially those that could lead to catastrophic harm or irreparable public distrust. If unmanaged, such failures can derail entire programs, damage lives, and trigger heavy regulatory backlash.

Both Chief AI Officers and Risk-Mitigation Officers ultimately share the same goal: the responsible acceleration of AI, including emerging domains like AI-powered robotics.

The Risk-Mitigation Office should lead internal education efforts to instill this shared vision—demonstrating that smart governance isn’t an obstacle to innovation, but its most reliable engine.

Team leader tries to lighten the mood of their serious work. Click for video by Losey.

Why The Role is Growing

The acceleration of this role is not theoretical. It is propelled by real-world failures, regulatory heat, and reputational landmines.

The 2025 World Economic Forum’s Future of Jobs Report underscores that 86% of surveyed businesses anticipate AI will fundamentally transform their operations by 2030. While AI promises substantial efficiency and innovation, it also introduces profound risks, including algorithmic discrimination, misinformation, automation failures, and significant data breaches.

A notable illustration of these risks is the now infamous Mata v. Avianca case, where lawyers relied on AI that fabricated case law, underscoring why human verification is non-negotiable. Mata v. Avianca, Inc., No. 1:2022cv01461, Document 54 (S.D.N.Y. June 22, 2023). Courts responded with sanctions. Regulators took notice. The public laughed, then worried.

The legal profession worldwide has been slow to learn from Mata the need to verify AI output and take other steps to control AI hallucinations. See Damien Charlotin, AI Hallucination Cases (as of July 2025, over 200 such cases and counting have been identified by the Paris-based legal scholar, whose name is ironically apt for this work). The need for risk mitigation is growing fast. You cannot simply pass the buck to AI.

Charlatan lawyers blame others for their mistakes. Image by Losey.

Risk mitigation employees and other new AI-related hires will balance out the AI-generated layoffs. The 2025 World Economic Forum's Future of Jobs Report at page 25 predicted that by 2030 there will be 11 million new jobs created and 9 million old jobs phased out. We think the numbers will be higher in both columns, but the 11/9 ratio may be right overall, meaning a roughly 22% net increase in jobs.
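The 22% figure appears to follow from the report's own numbers, assuming the net gain is measured against the jobs phased out. A quick check:

```python
# WEF Future of Jobs Report 2030 projections (from page 25):
new_jobs, displaced_jobs = 11_000_000, 9_000_000

net_gain = new_jobs - displaced_jobs        # 2 million net new jobs
print(f"net increase: {net_gain / displaced_jobs:.0%}")  # net increase: 22%
```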

We think the ratio the WEF predicts is right for all industries overall, but the numbers are too low, meaning greater disruption but positive in the end with even more new jobs. Image by AI & Losey.

Core Responsibilities

AI Risk-Mitigation Officers are part legal scholar, part engineer, part diplomat. They know the AI Act, understand neural nets, and can hold a room full of regulators or engineers without flinching. Key responsibilities encompass:

  • AI Risk Audits: Spot trouble before it starts. Bias, black boxes, security flaws: find them early, fix them fast. This involves detailed pre-deployment evaluations to detect biases, security vulnerabilities, explainability concerns, and data protection deficiencies. It is good practice to follow up with periodic surprise audits.
  • Incident Response Management: Don’t wait for a headline to draft a playbook. Develop and lead response protocols for AI malfunctions or ethical violations, coordinating closely with legal, PR, and compliance teams.
  • Legal Partnership: Speak both code and contract. Collaborate with in-house counsel to interpret AI regulations, draft protective contractual clauses, and anticipate potential liability.
  • Ethics Training: Culture is the best control layer. Educating employees on responsible AI use and cultivating an ethical culture that aligns with both corporate values and regulatory standards.
  • Stakeholder Engagement: Transparency builds trust. Silence breeds suspicion. Bridging communication between technical teams, executive leadership, regulators, and the public to maintain transparency and foster trust.
One Core Responsibility of the Risk Mitigation Team. Image by Losey.
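For readers who like to see ideas in code, the audit bullet above can be sketched as a tiny checklist runner. This is my own illustrative sketch, not any real compliance tool; the check names, thresholds, and `model_card` fields are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AuditFinding:
    check: str
    passed: bool
    detail: str

@dataclass
class PreDeploymentAudit:
    # Each named check takes a model card (a dict of metadata) and
    # returns (passed, detail) -- the officer reviews the findings.
    checks: dict[str, Callable[[dict], tuple[bool, str]]] = field(default_factory=dict)

    def run(self, model_card: dict) -> list[AuditFinding]:
        return [AuditFinding(name, *fn(model_card)) for name, fn in self.checks.items()]

# Hypothetical checks mirroring the bullet: bias and explainability.
def bias_check(card: dict) -> tuple[bool, str]:
    gap = card.get("demographic_accuracy_gap", 1.0)   # invented metric
    return gap <= 0.05, f"accuracy gap {gap:.2%}"

def explainability_check(card: dict) -> tuple[bool, str]:
    ok = card.get("explanations_available", False)
    return ok, "per-decision explanations " + ("present" if ok else "missing")

audit = PreDeploymentAudit(checks={"bias": bias_check, "explainability": explainability_check})
findings = audit.run({"demographic_accuracy_gap": 0.03, "explanations_available": True})
assert all(f.passed for f in findings)   # this hypothetical system passes
```

The same runner, pointed at a model card with a 12% accuracy gap and no explanations, would flag both checks as failed, which is the whole point of catching trouble before deployment.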

Skills and Pathways

Professionals in this role must possess:

  • Regulatory Expertise: Detailed knowledge of EU AI Act, GDPR, EEOC guidelines, and evolving state laws in the U.S.
  • Technical Proficiency: Deep understanding of machine learning, neural networks, and explainable AI methodologies.
  • Sector-Specific Knowledge: Familiarity with compliance standards across sectors such as healthcare (HIPAA, FDA), finance (SEC, MiFID II), and education (FERPA).
  • Strategic Communication: Ability to effectively mediate between AI engineers, executives, regulators, and stakeholders.
  • Ethical Judgment: Skills to navigate nuanced ethical challenges, such as balancing privacy with personalization or fairness with automation efficiency.

Career pathways for AI Risk-Mitigation Officers typically involve dual qualifications in fields like law and data science, certifications from professional organizations, and practical experience in areas like cybersecurity, legal practice, politics, or IT. Strong legal and human relationship skills are a high priority.

Image in modified isometric style by Ralph Losey using modified AIs.

U.S. and EU Regulatory Landscapes

The EU codifies risk tiers. The U.S. litigates after the fact. Navigating both requires fluency in law, logic, and diplomacy.

The EU’s AI Act classifies AI systems into four risk categories:

  • Unacceptable. AI banned altogether due to the high risk of violating fundamental rights. Examples include social scoring systems and real-time biometric identification in public spaces, including emotion recognition in workplaces and schools. The Act also bans AI-enabled subliminal or manipulative techniques that can be used to push individuals into unwanted behaviors.
  • High. AI that could negatively impact the rights or safety of individuals. Examples include AI systems used in critical infrastructure (e.g., transport), legal settings (including policing and border patrol), medicine, education, financial services, workplace management, and influencing elections and voter behavior.
  • Limited. AI with lower levels of risk than high-risk systems, but still subject to transparency requirements. Examples include typical chatbots, where providers must make humans aware that AI is used.
  • Minimal. AI systems that present little risk of harm to the rights of individuals. Examples include AI-powered video games and spam filters. No rules.
Transparent AI in office setting. Click for Losey movie.

Regulation operates on a sliding scale: Unacceptable-risk AI is banned entirely, while the Minimal-risk category faces few if any restrictions. The Limited and High risk classes require varying levels of mandatory documentation, human oversight, and external audits.
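The four-tier scheme can be shown as a toy classifier. The tier names and obligations track the Act; the keyword-to-tier mapping below is my own crude simplification of the Act's legal tests, for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "mandatory documentation, human oversight, external audits"
    LIMITED = "transparency obligations (disclose that AI is used)"
    MINIMAL = "no special rules"

# Hypothetical mapping -- a stand-in for the Act's actual legal criteria.
TIER_EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric ID": RiskTier.UNACCEPTABLE,
    "hiring algorithm": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    # Default to MINIMAL when no listed category matches.
    return TIER_EXAMPLES.get(system, RiskTier.MINIMAL)
```

Note the sliding scale is built into the enum order: each tier down carries lighter obligations, which is exactly how the Act scales its burden to risk.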

Meanwhile, U.S. regulatory bodies like the FTC and EEOC, along with state legislatures and state enforcement agencies, are starting to sharpen oversight tools. So far the focus has been on controlling deception, data misuse, bias and consumer harm. This has become a hot political issue in the U.S. See e.g. Scott Kohler, State AI Regulation Survived a Federal Ban. What Comes Next? (Carnegie’s Emissary, 7/3/25); Brownstein, et al, States Can Continue Regulating AI—For Now (JD Supra, 7/7/25).

AI Risk-Mitigation Officers must navigate these disparate regulatory landscapes, harmonizing proactive European requirements with the reactive, litigation-centric U.S. environment.

Legal Precedents and Ethical Challenges

Emerging legal precedents emphasize human accountability in AI-driven decisions, as evidenced by litigation involving biased hiring algorithms, discriminatory credit scoring, and flawed facial recognition technologies. Ethical dilemmas also abound. Decisions like prioritizing efficiency over empathy in healthcare, or algorithmic opacity in university admissions, require human-centric governance frameworks.

In ancient times, the Sin Eater bore others’ wrongs. See Part Two: Demonstration by analysis of an article predicting new jobs created by AI (discussing the new sin eater job in detail as a person who assumes moral and legal responsibility for AI outcomes). Today’s Risk-Mitigation Officer is charged with an even more difficult task: to prevent the sins from happening at all, or at least reduce them enough to avoid Hades.

Sin Eater in combined quasi digital styles by Losey using sin-free AIs.

Balancing Innovation and Regulation

Cases such as Cambridge Analytica (personal Facebook user data used for propaganda to influence elections) and Boeing’s MCAS software (an automated system undisclosed to pilots that led to 737 MAX crashes) demonstrate that innovation without reasonable governance is an invitation to disaster. The obvious abuses and errors in these cases could have been prevented had there been an objective officer in the room, really any responsible adult weighing the risks and moral considerations. For a more recent example, consider xAI’s fiasco with its chatbot. Grok Is Spewing Antisemitic Garbage on X (Wired, 7/8/25).

Cases like this should put us on guard, but not cause over-reaction. Such disasters easily trigger too much caution and too little courage. That too would be a disaster, of a different kind, for it would rob us of much-needed innovation and change.

Reasonable, practical regulation can foster innovation by mitigating uncertainty and promoting stakeholder confidence. The trick is to find a proper balance between risk and reward. Many think regulators today tend to go too far in risk avoidance, relying on Luddite fears of job loss and end-of-the-world fantasies to justify extreme regulations. Critics also suspect that such extreme risk avoidance helps those in power maintain the status quo. The pro-tech camp instead favors a return to the fast pace of change we have seen in the past.

Polarized protestors for and against technology by Losey using AI job thieves.

We went to the Moon in 1969. Yet we’re still grounded in 2025. Fear has replaced vision. Overregulation has become the new gravity.

That is why anti-Luddite technology advocates yearn for the good old days, the sixties, when fast-paced advances and adventure were welcome, even expected. If you had told anyone back in 1969 that no man would return to the Moon again, even as far out as 2025, you would have been considered crazy. Why was John F. Kennedy the last bold and courageous leader? Everyone since seems to have been paralyzed with fear. None have pushed great scientific advances. Instead, politicians on both sides of the aisle strangle NASA with never-ending budget cuts and timid machine-only missions. It seems to many that society has been overly risk averse since JFK, and that this aversion has stifled innovation, robbing the world of groundbreaking advances and the great rewards that only bold innovations like AI can bring.

Take a moment to remember the 1989 movie Back to the Future Part II. In the movie, “Doc” Brown, an eccentric genius scientist, travels from the past of 1985 to the future of 2015. There we saw flying cars powered by Mr. Fusion Home Energy Reactors, trash-fueled fusion engines. That is the kind of change we all still expected back in 1989 for the far future of 2015; but now, in 2025, ten years past that projected future, it is still just a crazy dream. The movie did, however, get many minor predictions right. Think of flat-screen TVs, video conferencing, hoverboards, and self-tying shoes (Nike HyperAdapt 1.0). Fairly trivial stuff that tech go-slow Luddites approved.

Back to the Future movie made in 1989 envisioning flying cars in 2015. Now in 2025, what have we got? Photo and VIDEO by Losey using AI image generation tools.

Conclusion

The best AI Risk-Mitigation Officers will steer between the twin monsters of under-regulation and overreach. Like Odysseus, they will survive by knowing the risks and keeping a steady course between them.

They will play critical roles across society—in law firms, courts, hospitals, companies (for-profit, non-profit, and hybrid), universities, and government agencies. Their core responsibilities will include:

  • Standardization Initiatives: Collaborating with global standards organizations such as ISO, NIST, and IEEE to craft reasonable, adaptable regulations.
  • Development of AI Governance Tools: Encouraging the use of model cards, bias detection systems, and transparency dashboards to track and manage algorithmic behavior.
  • Policy Engagement and Lobbying: Actively engaging with legislative and regulatory bodies across jurisdictions—federal, state, and international—to advocate for frameworks that balance innovation with public protection.
  • Continuous Learning: Staying ahead of rapid developments through ongoing education, credentialing, and immersion in evolving legal and technological landscapes.
AI Risk Management specialists will end up with sub-specialties, maybe dept. sections, such as lobbying and research. Image by Losey daring to use AI.

As this role evolves, AI Risk Management specialists will likely develop sub-specialties—possibly forming distinct departments in areas such as regulatory lobbying, algorithm auditing, compliance architecture, and ethical AI research, both hands-on and through legal study.

This is a fast-moving field. After writing this article, we noticed the EU passed new rules for the largest AI companies only, all of which are US companies except one Chinese firm. The rules are non-binding at this point and involve highly controversial disclosure and copyright restrictions. EU’s AI code of practice for companies to focus on copyright, safety (Reuters, July 10, 2025). They could stifle chatbot development and leave the EU in a stagnant dead end as these companies withdraw from the EU rather than kill their own models to comply.

A slowdown or reduction is not a viable option at this point because of national security concerns. There is now a military race between the US and China based on competing AI technology. AGI-level AI will give the first government to attain it a tremendous military advantage. See, e.g., Buchanan, Imbrie, The New Fire: War, Peace and Democracy in the Age of AI (MIT Press, 2022); Kissinger, Allison, The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe (Foreign Affairs, 10/13/23); see also Dario Amodei’s Vision (e-Discovery Team, Nov. 2024) (Anthropic CEO Dario Amodei warns of the danger of China winning the race for AI supremacy).

Race to super-intelligent AGI image by Ralph Losey.

As Eric Schmidt explains it, this is now an existential threat and should be a bipartisan issue for the survival of democracy. Kissinger, Schmidt, Mundie, Genesis: Artificial Intelligence, Hope, and the Human Spirit (Little Brown, Nov. 2024). See also Former Google CEO Eric Schmidt says America could lose the AI race to China (AI Ascension, May 2025).

Organizations that embrace this new professional archetype will be best positioned to shape emerging regulatory frameworks and deploy powerful, trusted AI systems—including future AI-powered robots. The AI Risk-Mitigation Officer will safeguard against catastrophic failure without throttling progress. In this way, they will help us avoid both dystopia and stagnation.

Yes, this is a demanding job. It will require new hires, team coordination, cross-functional fluency, and seamless collaboration with AI assistants. But failure to establish this critical function risks danger on both fronts: unchecked harm on one side, and paralyzing caution on the other. The best Risk-Mitigation Officers will navigate between these extremes—like Odysseus threading his ship through Scylla and Charybdis.

Odysseus successfully steering his ship between monsters on either side. Image and Video by Losey using various AIs.

We humans are a resilient species. We’ve always adapted, endured, and risen above grave dangers. We are adventurers and inventors—not cowering sheep afraid of the wolves among us.

The recent overregulation of science and technology is an aberration. It must end. We must reclaim the human spirit where bold exploration prevails, and Odysseus—not Phobos—remains our model.

We can handle the little thinking machines we’ve built, even if the phobic establishment wasn’t looking. Our destiny is fusion, flying cars, miracle cures, and voyages to the Moon, Mars, and beyond.

Innovation is not the enemy of safety—it is its partner. With the right stewards, AI can carry us forward, not backward.

Let’s chart the course.

Manifest Destiny of Mankind. Image and VIDEO by Losey using AI.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: The Rise of the AI Risk-Mitigation Officer: Trust, Law, and Liability in the Age of AI. Hear two fake podcasters talk about this article for a little over 16 minutes. They wrote the podcast, not me. For the second time we also offer a Spanish version here. (Now accepting requests for languages other than English.)

Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver on New Jobs

July 18, 2025

by Ralph Losey, July 18, 2025

This is the conclusion to the article, Panel of Experts for Everyone About Anything – Part One and Part Two. Here we give another demonstration of the software described in Part One. Part Two provided a demonstration where the software used the advanced reasoning model of OpenAI, ChatGPT 4.5. The demonstration included expert discussion of the hot issue of future employment. Here we use the free version, ChatGPT 4o, to see how it does and get another perspective about new jobs that will be needed with AI. The results are surprising.

Expert panels debating. All images in article are in Steampunk style this time and by Ralph Losey using AI.

Introduction

In Panel of Experts for Everyone About Anything – Part Two, we demonstrated how the Panel software works to analyze an article. We picked a NYT Magazine article that predicted 22 new jobs that would be created by AI. The demonstration used ChatGPT 4.5 as the engine. OpenAI now allows a user to select any model to which they have access to drive a custom GPT. (For reasons unknown, this feature currently does not work on the Mac desktop version of ChatGPT.) The 4.5 model used in Part Two now requires an upgraded subscription, whereas model 4o is still free.

The cost of 4.5 may be beyond the reach of legal aid groups and some in the public using it for non-law-related questions. Thus, to be consistent with the public service intent behind the free software mentioned in Part One, we here demonstrate the software using ChatGPT 4o. For ease of comparison, and because the question of new jobs created by AI is so important, we repeat the issue again but throw in a few variations to keep it interesting.

Panels of Experts do battle to see who is the best. Image by Ralph Losey using AI.

How well will the ChatGPT 4o expert team do against the allegedly better 4.5 model? Will we learn anything new about jobs created by AI in Part Three that we did not already know from Part Two? Which panel is the winner? Which gave the best demonstration?

Read on and discover, in classic steampunk-style images, because, after all, once Einstein discovered Relativity and science quickly jumped into the Quantum, shouldn’t we have continued to advance at that accelerated rate and discovered AI long ago? Think where we’d be today if we had, eh, Mr. Babbage?

Charles Babbage (1791-1871), ‘father of computers’ shown in steampunk style by Losey. What took us so long?

First Demo of 4o Model Version

In some respects, I like the 4o version of the session better than the Model 4.5 version shown in Part Two. Here are the key excerpts from the response of the Panel of Experts for Everyone About Anything when we repeated the same initial User Prompt, namely to analyze the NYT’s article by Robert Capps, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4o PANEL: Let’s curate a panel to explore the implications of the NYT article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. Before we begin the panel discussion, I’ll propose eight candidate experts based on the article’s focus on AI disruption, new job creation, and socio-technical dynamics. You’ll choose four, and I’ll automatically add the mandatory Contrarian.

🎯 Article Focus:

  • Trust-based roles (e.g., AI auditors, ethicists)
  • Integration roles (e.g., AI plumbers, integrators)
  • Taste and creative decision-making (e.g., world designers)
  • Socioeconomic shifts due to AI (entrepreneurial entry-level roles)

👥 Suggested Panelists:

  1. Ethical AI Lawyer – interprets legal responsibility and liability frameworks for trust-based roles.
  2. AI Engineer – evaluates technical feasibility and skill demands of new AI-centric jobs.
  3. Creative Storyteller – articulates the evolution of taste-driven roles like world designers and content curators.
  4. Risk Officer – identifies governance and regulatory gaps in emerging hybrid AI-human workplaces.
  5. Data Visionary – maps future employment and systemic transformation under AI’s rise.
  6. Litigation Strategist – examines legal risks of roles like “legal guarantor” or “AI auditor” under current jurisprudence.
  7. Public Health Advocate – reflects on AI’s impact on human dignity, trust, and socio-emotional labor.
  8. Quantum Analyst – cross-reads how AI role disruption mirrors past technology shifts (like CAD/CAM or simulation in science).

Please select four experts from this list. I’ll then add the Contrarian and begin the panel.

USER: I choose 1, 2, 3 and 5.

4o PANEL: . . .

Panel on new jobs created by AI in steampunk style by Losey.

🎓 Ethical AI Lawyer — Legal and Ethical Responsibility Architect

The article’s treatment of trust-based roles like “AI auditors,” “legal guarantors,” and “trust authenticators” directly engages legal norms surrounding delegation of accountability. These roles resemble constructs already debated in U.S. case law like Mata v. Avianca Inc., where counsel submitted AI-generated briefs with hallucinated citations, prompting sanctions and affirming that humans remain the accountable party despite automation.

The concept of the “sin eater” — someone who absorbs A.I.’s mistakes — raises sharp questions about the limits of vicarious liability and professional standards. No matter how advanced the AI, current jurisprudence requires traceability to a human agent. This aligns with the Fed. R. Evid. 901(b)(9) authentication requirements for complex systems, demanding explainability and provenance.

Further, the emergence of “AI translators” and “ethics boards” intersects with legal debates over due process in algorithmic decision-making, especially under frameworks like Article 22 of the GDPR and proposals for AI-specific tort law in the EU AI Liability Directive. These developments could create novel duties of disclosure and internal governance.

Bottom line: AI may generate drafts, workflows, or evaluations, but legally enforceable trust must still be human-certified.

Engineer dissecting AI in steampunk style by Ralph Losey.

🧠 AI Engineer — Dissecting Feasibility and Tech-Skill Transitions

The article outlines plausible near-future roles such as AI plumbers, virtual twin managers, and integration specialists — all of which track well with emerging enterprise needs. From a systems engineering standpoint, each represents a layering of human interface over increasingly agentic AI.

A role like “AI plumber” is especially apt: as we move to multi-agent, multi-modal systems, debugging becomes less about code and more about tracing decision nodes through dynamic graphs. Tools like LangSmith or OpenTelemetry for AI will support this, but human oversight must close the loop.

Equally pragmatic is the “AI trainer” role, particularly in retrieval-augmented generation (RAG) settings where corporate data must be contextualized with extreme care. Hallucination risk escalates when source curation is poor — an engineer versed in prompt architectures and data labeling becomes indispensable.

However, I’d caution that some roles (e.g., “AI personality director”) may prove transient or cosmetic. Unless this impacts UX or regulatory tone (e.g., health chatbots), it will likely be a rebranded comms function.

Still, the job market is aligning. “AI consultant” and “integration lead” roles are skyrocketing per LinkedIn data. These aren’t speculative—they’re already hiring.

Human taste guiding AI image by Losey guiding AI.

✨ Creative Storyteller — Reframing Jobs Around Taste, Not Tasks

This article is really about the return of judgment as labor. The shift from doing to curating, from generating to refining, is the beating heart of the “taste economy” the author envisions.

Take the imagined roles of “story designer,” “differentiation designer,” or “world builder.” These all recast creativity as orchestration — selecting from near-infinite AI outputs to assemble emotionally resonant or brand-coherent narratives. In storytelling terms, we’re moving from penmanship to direction.

The Rick Rubin anecdote nails it: craft may fade, but curation ascends. Rubin’s “confidence in his taste” mirrors what’s being asked of tomorrow’s knowledge workers. We may not sculpt every sentence, but we’ll decide which sentences shape the message.

This resonates with media trends already: TikTok creators “direct” AI-generated visuals, musicians remix AI stems, and even journalists act more like narrative editors than sole authors. The job isn’t dying—it’s mutating.

And taste isn’t just aesthetic. It’s ethical, strategic, and human. As the article wisely closes: “We are the designers of our AI future. Let’s hope we have great taste.”

Robot Puppet Theatre in Steampunk style by Losey’s AI puppets.

🛰 Data Visionary — Mapping the Labor Realignment

The piece offers more than speculative roles—it maps a nascent transformation in the labor-capital interface. It posits three durable human moats: trust, integration, and taste. This triptych aligns with observed market inflections.

Trust anchors around accountability and transparency: humans ensure AI outputs meet institutional thresholds. Integration speaks to workflow orchestration—systems-thinking roles that bridge software and organizational context. Taste, the most abstract, becomes a differentiator in saturated generative landscapes.

This trifocal framework is not utopian. It mirrors what Erik Brynjolfsson calls the “complementarity imperative”: value will accrue where humans can amplify machine leverage, not where they replicate it. The World Economic Forum’s projection of 11 million new jobs by 2030 versus 9 million displaced jobs captures the macro realignment. [EDITOR: Future of Jobs Report 2025 (WEF, 01/7/25) at pg. 25 (“trends in AI and information processing technology are expected to create 11 million jobs, while simultaneously displacing 9 million others, more than any other technology trend.“)]

We must recognize, however, the uneven geography of adoption. As the article’s robotics example shows, access to “AI integrators” could determine regional economic divergence. Without policy scaffolding (training pipelines, local incentives), AI may exacerbate inequality.

Still, this is no mere displacement story. It is a diversification story, seeded with entrepreneurial possibility.

Fictitious vision of where new jobs may appear and old be replaced. Image by Ralph Losey using AI.

⚔️ The Contrarian

The panel indulges in optimism but skirts real constraints. “AI trainer”? Perhaps. But many orgs will under-invest in proper RAG or hallucination testing. “AI plumber”? Realistically, that’s a role for the already-employed IT person, not a new hire.

The legal take? Idealistic. In practice, AI auditors will lack statutory teeth unless regulation mandates it—and current efforts like the EU AI Act remain unenforced in key domains.

And “taste” as job defense? That’s privileged territory. It assumes stability in jobs where creative discretion is rewarded. But what of the displaced clerk, paralegal, or content writer who once did real intellectual work?

Lastly, from an ethical risk lens:

  • Bias: Are these new roles equally accessible? Or do they reinforce existing inequities?
  • Privacy: Will AI integrators mishandle sensitive data pipelines?
  • Regulatory: Could roles like “AI translator” mask liability via plausible deniability?

We risk over-indexing on shiny titles while ignoring structural frictions.

Contrarians tend to see the glass as half empty. Image by Losey.

Would you like to follow up, clarify, or pivot?


USER: Review each of the new jobs that the article mentions may be created by AI.

4o PANEL:

Here’s a comprehensive job-by-job breakdown of the new roles suggested in “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. The article groups these under three themes: Trust, Integration, and Taste. I’ve organized the roles accordingly and added brief descriptions for each.

🛡 TRUST-Based Roles

Roles focused on ensuring transparency, accountability, and human oversight:

  1. AI Auditor
    Verifies what AI systems are doing, why, and whether outcomes meet legal or operational standards.
  2. AI Translator
    Bridges technical and business domains by explaining AI behavior and outcomes to non-expert stakeholders.
  3. Fact Checker (AI-Aided)
    Verifies AI-generated content for errors, hallucinations, and compliance issues.
  4. Compliance Officer (AI-Aware)
    Reviews AI-generated outputs in sensitive domains like law, finance, and health to ensure regulatory alignment.
  5. Trust Authenticator / Trust Director
    Ensures users can rely on AI-generated outputs, enforcing policies for fairness, accuracy, and traceability.
  6. AI Ethicist / Ethics Board Member
    Constructs defensible logic for decisions made by AI, ensuring ethical oversight and transparency.
  7. Legal Guarantor
    A licensed or certified person who legally vouches for AI-assisted work, assuming liability (akin to a “sin eater”).
  8. Consistency Coordinator
    Validates uniformity of AI output across systems (e.g., ensuring consistent product visuals or messaging).
  9. Escalation Officer
    Handles cases where AI fails emotionally or functionally—especially in customer service and education.
Buyer and seller must establish trust in robots to make sale. Combined styles in image by Losey.

🛠 INTEGRATION-Oriented Roles

Roles aimed at connecting AI capabilities to organizational workflows and technical systems:

  1. AI Integrator
    Designs and implements AI adoption strategies within specific business functions or verticals.
  2. AI Plumber
    Diagnoses and fixes errors in layered AI systems, often dealing with agentic or multi-step workflows.
  3. AI Assessor
    Continuously evaluates different AI models for specific uses (e.g., determining hallucination rates or performance drifts).
  4. AI Trainer (Domain-Specific)
    Feeds proprietary data into AI systems and fine-tunes model outputs for organizational accuracy.
  5. AI Personality Director
    Curates the “voice” and tone of an organization’s AI agents to match brand identity or customer expectations.
  6. Drug-Compliance Optimizer
    Uses AI to monitor and improve medication adherence—potentially in hospitals, insurance, or elderly care.
  7. AI-Human Evaluation Specialist
    Identifies tasks best suited for humans, AI, or hybrid approaches, especially in safety-critical industries.

🎨 TASTE-Driven Roles

Creative and strategic roles where human judgment and vision are irreplaceable:

  1. Product Designer (AI-Enabled)
    Expands beyond visuals to include brand strategy, user interaction, and narrative consistency, directing AI tools creatively.
  2. Article Designer
    Orchestrates content generated by AI—selecting structure, tone, and messaging across platforms.
  3. Story Designer
    In film, TV, or gaming, crafts the underlying plot or world that AI then fleshes out.
  4. World Designer
    Creates fictional or themed environments for marketing, gaming, or entertainment—used by AI to generate media.
  5. Human Resources Designer
    Shapes company culture, training, and employee experiences using AI-curated content and interfaces.
  6. Differentiation Designer
    Strategizes how a company distinguishes itself when competitors use the same powerful AI tools—aligning product, brand, and communication.
Steampunk taste team selected! Click to see celebration. Images, video by Losey.

Would you like me to visualize these 22 roles in a chart or cluster them by skillsets or industries?


USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

Ethan Mollick in Steampunk style by Losey.


4o PANEL:

🎓 Ethical AI Lawyer — Legal and Normative Accountability

The “Sin Eater” role, as Mollick frames it, carries significant legal implications. It reflects the enduring legal doctrine that accountability cannot be delegated to a machine. In tort law, liability flows to the agent of record—currently, only humans or legal entities can occupy that role.

This aligns with precedents such as Mata v. Avianca Inc., where AI misuse led to sanctions because no human properly vetted its output. The sin eater, therefore, functions not just as a compliance actor but as a liability firewall, absorbing both reputational and legal risk.

But this framing raises concerns. Can one be a guarantor without actual authorship? If AI makes the decision and the sin eater merely signs off, we risk creating paper accountability. Regulatory models like the EU AI Act are moving toward traceability mandates, but as yet, U.S. law lacks statutory recognition of such roles.

In short, the sin eater concept is both legally plausible and ethically fraught—it acknowledges human limits while asking them to bear machine risk.

Sin Eaters will employ many lawyers. Image by Ralph Losey AI.

🧠 AI Engineer — Technical Risk ≠ Human Blame

From a systems view, the “Sin Eater” is a human checksum: the last human in the loop before AI action becomes irreversible. It’s a reactive model of risk management—and one that’s arguably too brittle.

We need better forward defenses, not just cleanup personnel. Sin eaters are useful when explainability fails or outputs are inherently opaque. But the real problem is epistemic opacity: even experts can’t always trace LLM decisions.

Instead, we should engineer auditable AI stacks, use differential logging, and isolate decision modules for forensic review. Relying on a human “approver” as a fix-all invites rubber-stamping, not real scrutiny.

And technically, sin eaters will need tooling—model trace maps, hallucination flags, metadata dashboards. Without this infrastructure, we’re just asking someone to trust what they can’t parse.

✨ Creative Storyteller — A Mythic Role, Made Literal

“Sin Eater” is such an evocative term because it draws from ritual and folklore. Historically, sin eaters consumed bread over a corpse, symbolically absorbing the sins of the dead. In this context, the sins are hallucinated citations, algorithmic bias, or wrongful denials.

That metaphor frames the sin eater as both guardian and scapegoat. They bear blame to protect the system’s perceived legitimacy. That’s powerful—but also dangerous. Mythic roles lack clear scope. Is the sin eater a validator? A fall guy? A moral compass?

We risk theatrical accountability: jobs that feel weighty but do little to prevent harm. The storytelling matters, because it shapes how we see responsibility. Are they heroes, martyrs, or cover-ups?

The role’s narrative weight is real, but it must be grounded in operational clarity.

Combined steampunk and medieval manuscript styles of historical Sin Eaters by Losey.

🛰 Data Visionary — Institutionalizing the “Last Human Standing”

Mollick’s metaphor reveals a deeper systemic truth: as AI scales, we need roles that enforce last-mile responsibility. The sin eater becomes the institutional locus of that burden.

In distributed systems, there’s a similar construct: the circuit breaker. When cascading failures occur, this mechanism stops the chain. The sin eater is a social circuit breaker. But who designs the thresholds? Who funds the liability insurance?
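The distributed-systems analogy can be made concrete. A minimal circuit breaker (a sketch for illustration, not tied to any particular framework) trips after repeated failures and refuses further calls until a human deliberately resets it:

```python
class CircuitBreaker:
    """Stop calling a failing component after `threshold` consecutive errors."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0
        self.open = False          # an "open" circuit means calls are blocked

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: human review required")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.open = True   # trip the breaker: stop the cascade
            raise
        self.failures = 0          # a success resets the failure count
        return result

    def reset(self):
        """A human decides when the circuit may close again."""
        self.failures = 0
        self.open = False
```

In this framing the sin eater holds the `reset()` authority, and the panelist's questions map directly onto the design parameters: who sets `threshold`, and who bears liability for the decision to reset.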

Scaling this requires:

  • Certification bodies to credential sin eaters
  • Policy frameworks (e.g., “AI final-signoff registries”)
  • Liability shields akin to whistleblower or auditor protections

Done well, this role could professionalize AI accountability, akin to CPAs or actuaries. But without those structures, it’s a fig leaf—blaming humans for systemic opacity.

Last Man Standing role shown in steampunk style by Losey. Only brave humans need apply for this job.

⚔️ The Contrarian

This entire discussion sanitizes an ugly reality: the sin eater is a liability sink, not a solution. It externalizes blame from the system designer to a human end-point. It masks architectural unaccountability.

And let’s check the ethical risk boxes:

  • Bias: Who becomes the sin eater? Likely junior, marginalized, or contractual workers—sacrificed for optics.
  • Regulatory: There’s no statute affirming this role. It’s an HR workaround, not a legal safeguard.
  • Hallucination: Will they detect model lies in high-stakes contexts (e.g., medical or legal reasoning)? Doubtful, unless they are domain experts.

In sum, the sin eater is not a visionary role—it’s institutional cowardice disguised as ethics.

Would you like to follow up, clarify, or pivot?

Nobody likes the contrarian lawyer. Steampunk image by Losey.

New Request to the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4o PANEL: Let’s break new ground. Each panelist below proposes a novel, AI-enabled human job that was not covered in the NYT article. These roles are speculative yet grounded in emerging technological and social trajectories.

🎓 Ethical AI Lawyer — Probabilistic Liability Architect

As AI systems increasingly operate on statistical inference (rather than deterministic logic), courts and regulators will need new doctrines for “probabilistic harm.” Enter the Probabilistic Liability Architect.

This legal role designs frameworks to allocate risk and assign partial liability when AI systems cause harm but no single actor is clearly negligent. For example:

  • A predictive policing system misidentifies a suspect with 70% confidence.
  • An autonomous delivery drone injures a bystander during path optimization.

The architect would:

  • Quantify and model AI system confidence thresholds.
  • Define shared responsibility regimes across vendors, deployers, and auditors.
  • Draft model legislation and policy for AI tort reform.

This job combines legal theory, policy design, and statistical reasoning—making it a bridge between law and machine logic.
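The statistical-reasoning side of the role can be sketched. The apportionment scheme below is purely hypothetical, a toy proportional split invented for illustration; real comparative-fault doctrine is far more involved, and the actor names and weights are assumptions.

```python
def apportion_liability(damages: float, shares: dict) -> dict:
    """Split damages across actors in proportion to assigned responsibility
    weights. A toy scheme for illustration only; real doctrines of
    comparative fault are far more involved."""
    total = sum(shares.values())
    if total <= 0:
        raise ValueError("responsibility weights must sum to a positive number")
    return {actor: damages * w / total for actor, w in shares.items()}

# Hypothetical example: an AI misidentification causes $100,000 in damages,
# with responsibility weighted across vendor, deployer, and auditor.
split = apportion_liability(100_000, {"vendor": 5, "deployer": 3, "auditor": 2})
# split == {"vendor": 50000.0, "deployer": 30000.0, "auditor": 20000.0}
```

The architect's harder job is upstream of this arithmetic: deciding how a system's confidence score (the 70% in the policing example) should translate into those responsibility weights in the first place.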

Steampunk image by Losey.

🧠 AI Engineer — Synthetic Interaction Architect

As LLMs evolve into multi-agent swarms, orchestrating their interactions will become a full-time role. The Synthetic Interaction Architect manages the interplay of dozens or hundreds of autonomous AI agents assigned to a complex task (e.g., planning an international supply chain or modeling a political campaign).

Key functions:

  • Configuring inter-agent protocols (who talks to whom, when, and how).
  • Monitoring emergent behaviors and pruning dysfunctional collaboration patterns.
  • Designing reward structures that simulate human incentive alignment (e.g., competition, consensus, escalation).

Think of this as managing a hive mind—an ensemble of AI intelligences operating like a virtual team. The architect ensures synergy, not chaos.

AI Interaction Architect conducting a swarm of interactive AIs. Shown in steampunk style by Losey.

✨ Creative Storyteller — Emotion Wrangler

As AI becomes increasingly conversational and emotionally fluent, it will sometimes simulate emotions better than it understands them. This creates the need for an Emotion Wrangler—a specialist who trains, tests, and fine-tunes the emotional accuracy and appropriateness of AI systems.

They would:

  • Script nuanced emotional arcs for AI in education, therapy, or customer service.
  • Conduct “empathy audits” to ensure AIs express concern, humility, or apology convincingly—but not manipulatively.
  • Design cultural calibration modules (e.g., how grief or politeness differs across geographies).

This job safeguards the emotional realism of AI while guarding against artificial affect that deceives or coerces.

Humans can feel emotions; AI can't. Emotion Wrangler in steampunk style by Losey.

🛰 Data Visionary — Synthetic Ecology Steward

Generative AI is already creating simulated environments for science, entertainment, and climate modeling. The Synthetic Ecology Steward curates and manages these ecosystems as long-running simulations with economic, environmental, and ethical stakes.

Examples:

  • A virtual planet used for testing urban planning at planetary scale.
  • A simulated biosphere for biotech AI to evolve new proteins.
  • An economic microcosm for experimenting with new market policies or governance rules.

This steward oversees:

  • The ethical boundaries of simulation (e.g., are emergent sentient entities possible?).
  • Cross-disciplinary calibration (from physics to sociology).
  • Integration with real-world policy levers or educational systems.

The role marries design, governance, and philosophy at the frontier of synthetic reality.

Synthetic Ecology Steward image by Visual Muse & Ralph Losey.

⚔️ The Contrarian — Moral Exit Facilitator

In the coming wave of AI-induced identity dislocation, some people will choose to opt out—not from society, but from systems that make them feel obsolete, surveilled, or coerced. The Moral Exit Facilitator helps these individuals ethically detach.

It’s not just a career counselor. It’s a combination of:

  • Data minimization advocate (helping users erase or obscure their digital footprint).
  • Self-sovereignty designer (tools for autonomy outside AI ecosystems).
  • Philosophical guide (navigating moral disengagement from pervasive automation).

The role critiques the system while enabling graceful withdrawal—offering humans a structured way to say: No, thank you.

Helping people get tech-free, escape AI addictions. Steampunk style by Losey.

Would you like these roles visualized in a chart or used to seed a scenario?


USER: “Seed a scenario?” Not sure what that means, so sure, go ahead and do that, seed a scenario.

4o PANEL: Great — “seeding a scenario” means I’ll take those new job ideas and embed them in a realistic future situation, crafting a brief scenario that explores how these roles might interact, conflict, or evolve. Think of it as a mini-future vignette for testing policy, tech, or narrative ideas.

🌍 Scenario Seed: “The Eden Loop Disruption”

📅 Year: 2031

🧭 Setting: A high-fidelity AI simulation called Eden Loop, used globally for climate-risk modeling, educational immersion, and synthetic biodiversity experimentation.

Steampunk image by Losey using Visual Muse.

Something goes wrong in the Eden Loop, and each of the new human roles works together to deal with the problems. Playing out the hypothetical would be interesting in a role-play workshop, but not for this essay.

Conclusion

History tells us that all technological breakthroughs disrupt existing society, and all put some people out of work at first, but overall they always create more new jobs than they destroy. There is no reason to think it will be any different this time with AI. And there are many reasons to think the new jobs created will be far more interesting, creative and enjoyable than the jobs lost. It may even change the negative meaning the word work carries for most people. Work will involve far less boring drudgery, because that will all be done by the robots. Work and play may become synonymous.

Image of AI doing the hard dirty work while humans do the fun jobs. By Ralph Losey with his AI doing the boring drawing parts.

In the article that the ChatGPT 4o version of the Panel of Experts found, Future of Jobs Report 2025 (WEF, 01/7/25), the World Economic Forum predicted a significant shift in how work is divided between humans and machines. In 2025, employers reported that 47% of work tasks are performed by humans alone, 22% by machines and algorithms alone, and 30% by a hybrid combination. By 2030, employers expect these task-delivery proportions to be nearly evenly split across all categories: one-third human only, one-third machine only, and one-third hybrid.

The World Economic Forum report explains that this is only a task-delivery proportionality measurement; it does not take into account the amount of work getting done in each of the three task categories. The Future of Jobs Report 2025, at page 27, goes on to state:

The relevance of the third category approach, human-machine collaboration (or “augmentation”) should be highlighted: technology could be designed and developed in a way that complements and enhances, rather than displaces, human work; and, as discussed further in the next chapter (Box 3.1), talent development, reskilling and upskilling strategies may be designed and delivered in a way to enable and optimize human-machine collaboration. It is the investment decisions and policy choices made today that will shape these outcomes in the coming years.

Hybrid Man and Machine New Jobs Report is looking good. Image by Losey in steampunk style.

The global employers polled favored a hybrid approach, as did the World Economic Forum expert analysis. Generative AI requires human supervision now and certainly for the next five years. To again quote the WEF Report, at the mentioned Box 3.1 at page 44:

Skills rooted in human interaction – including empathy and active listening, and sensory processing abilities – and manual dexterity, endurance and precision, currently show no substitution potential due to their physical and deeply human components. These findings underscore the practical limitations of current GenAI models, which lack the physicality to perform tasks that require hands-on interaction – although advances in robotics and the integration of GenAI into robotic systems could impact this in the future.

After that, as AI skills improve, there will, in our opinion and that of the WEF, still be a need for human collaboration. The need and the human skills required will change, but they will always remain. That is primarily because we are corporeal, living beings, with emotions, intuitions and other unique human powers. Machines are not. See my essay, The Human Edge: How AI Can Assist But Never Replace (1/30/25). Even if AI were someday to become conscious, and have some silicon-based corporeality, perhaps even a kind of chemical-based feeling, it would need to work with us to gain our unique human feel, our vibes and our evolutionary connection with life on Earth. Real life in time and space, not just a simulation.

Factory floor with cool AI features in steampunk style. Click here to see video by Losey.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me. For the first time we also offer a Spanish version here.

Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Panel of Experts for Everyone About Anything – Part Two: Demonstration by analysis of an article predicting new jobs created by AI

July 16, 2025

by Ralph Losey, July 16, 2025.

This is a continuation of the article, Panel of Experts for Everyone About Anything – Part One. Here in Part Two we give a demonstration of the software described in Part One. In the process we learn about new job types emerging from AI, including one envisioned by my favorite Wharton Professor, Ethan Mollick. He predicts a new job for humans made necessary by inevitable AI errors, which he gives the colorful name of Sin-Eaters. I predict the best of these Sin-Eaters will be lawyers!

Demo of Software

Below is an example of a consult with the Panel of Experts for Everyone About Anything. It was created using the June 26, 2025, version of the Panel. It demonstrates the Panel's ability to analyze and discuss a document the user uploads. As Part One explained, the Panel software can do a lot more than that, but this task is especially interesting because the article's topic is so hot and hopeful: new jobs coming our way!

When reading the output of the Panel AIs, note that the Contrarian always tends to support the generic advice to speak to a human expert before relying on it. The Contrarian's input is always very helpful, but still, the input of this grumpy AI can be wrong too.

NYT Article on Future of Work

The document I selected for Panel discussion is a NYT Magazine article: A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. In a few key areas, humans will be more essential than ever. It was written by former Wired editor, Robert Capps, and published on June 17, 2025. I have a NYT subscription and so could read it, and include a few fair use quotes, but I cannot share it with you due to copyright. I can also provide my synopsis and short analysis and then ask for a discussion of the article by our Panel of Experts for Everyone About Anything.

Capps’ article, written in the NYT’s typically smart-alecky style, explores the core question of our times: what distinct human roles will persist or emerge as AI reshapes work? Overall I liked the article; it was well researched and thoughtful (reader comments were interesting too). Near the beginning Capps opines:

It’s already clear that A.I. is more than capable of handling many human tasks. But in the real world, our jobs are about much more than the sum of our tasks: They’re about contributing our labor to a group of other humans — our bosses and colleagues — who can understand us, interact with us and hold us accountable in ways that don’t easily transfer to algorithms.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. I agree with this sort-of-deep point.

Lonely robot image by Losey in photo style using a robot.

Capps’ research into the new types of work likely to emerge includes interviews with or quotes from these sources (in order of appearance in the article):

I like how Robert Capps divides the new employment opportunities into three categories: trust, integration and taste. Here is his introduction:

If we want to know what these new opportunities will be, we should start by looking at where new jobs can bridge the gap between A.I.’s phenomenal capabilities and our very human needs and desires. It’s not just a question of where humans want A.I., but also: Where does A.I. want humans? To my mind, there are three major areas where humans either are, or will soon be, more necessary than ever: trust, integration and taste.

A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

1st Demo: Panel Discussion

For this demonstration I chose to use the OpenAI model 4.5 (research preview), which OpenAI says is good for writing and exploring ideas. It requires a Team subscription, which I have. I used this supposedly advanced model instead of the current default and free version, ChatGPT 4o. In Part Three I will ask the same questions using the 4o version, so you can see the key differences and the somewhat different insights provided.

[Note: images and links below were all added by Losey, the words are a full quote of the session.]

USER: Provide a panel discussion of the file uploaded herewith, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4.5 PANEL: Here’s the proposed expert panel to analyze and discuss the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”

Panel Composition:

  • 🎓 Ethical AI Lawyer (Aligning innovation with ethics and legality)
  • 🧠 AI Engineer (Analyzing technical feasibility and challenges)
  • ✨ Creative Storyteller (Exploring cultural narratives and creative implications)
  • 🛡 Risk Officer (Highlighting regulatory, governance, and compliance concerns)
  • ⚔️ Contrarian (Critically examining underlying assumptions)

🎓 Ethical AI Lawyer – Legal & Ethical Implications

The article compellingly highlights new professional roles driven by AI adoption. Key roles like “AI auditors” and “AI ethicists” stand out, as accountability remains a fundamental human requirement in AI-integrated systems. From a legal standpoint, roles like “legal guarantor” and “trust authenticator” are particularly crucial, given precedents like Mata v. Avianca Inc., which emphasizes accountability for AI-generated misinformation. The emergence of these roles aligns closely with evolving frameworks like the EU AI Act, which emphasizes transparency and human oversight in automated decision-making processes.

🧠 AI Engineer – Technical Viability and Challenges

The author accurately pinpoints that “AI integrators” and “AI trainers” will become indispensable, reflecting current market realities where roles like “AI Engineer” and “AI Consultant” are rapidly expanding. Tools like Relativity Aero AI and OpenAI Code Interpreter already illustrate the growing technical ecosystem supporting such jobs. Yet, the potential complexity in managing AI consistency and integration across systems poses significant technical hurdles—highlighted by the need for roles such as “AI plumbers” and “consistency coordinators.”

✨ Creative Storyteller – Cultural and Creative Dynamics

This article rightly emphasizes how taste and creative decision-making roles, such as “story designer” and “A.I. personality director,” will flourish. Rick Rubin’s anecdote powerfully underscores the shift from traditional technical skill toward nuanced human judgment and style. This shift could democratize creativity and innovation, allowing broader participation in traditionally expert-driven fields, reshaping cultural narratives significantly.

Rick Rubin, famous hip-hop producer, real quote, but fake image of Rubin (looks close to me) by AI under my guidance. See Rubin’s 2025 book: The Creative Act: A Way of Being.

🛡 Risk Officer – Regulatory and Compliance Risks

The proposal of new roles such as “trust authenticator,” “AI auditors,” and “AI ethics boards” underscores key compliance and governance imperatives. These positions directly respond to requirements embedded in frameworks like GDPR’s Article 22, which mandates transparency and human oversight in automated decision-making. Ensuring compliance with such frameworks will require vigilant and proactive roles dedicated to overseeing AI operations, particularly in high-stakes areas like healthcare and financial services.

⚔️ The Contrarian – Critical Examination and Ethical Risks

While the optimism about new AI-driven jobs is understandable, there is a problematic assumption that roles involving human accountability and taste can fully mitigate deeper risks of AI adoption. The article’s approach neglects scenarios where human oversight becomes superficial, serving more as a liability shield than genuine oversight. Additionally, it sidesteps ethical dilemmas like potential bias and accountability diffusion, a significant concern given historical examples of algorithmic harm and privacy violations. Per the Ethical Risk Checklist, more robust discussions around bias, privacy, and hallucination risks are necessary.

Would you like to follow up, clarify, or pivot?


The insights and perspectives shared by this panel of virtual experts are designed to foster critical thinking, stimulate discussion, and provide educational value. They are not a substitute for professional advice. Before making decisions or taking action, consult a qualified human expert in the relevant field.

[Note: 4.5 puts this warning at the end of every response and to save space will not be repeated in this transcript.]

USER: Review each of the new jobs that the article mentions may be created by AI.

4.5 PANEL: Here is a summary of each of the 22 new jobs the article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You” anticipates will be created by AI:

Trust-Related Jobs

1. A.I. Auditor – Verifies AI systems’ processes, decisions, and data for technical accuracy and accountability.

2. A.I. Translator – Explains complex AI concepts and decisions to non-technical business leaders.

3. Fact Checker / Compliance Officer – Reviews AI-generated documents for factual accuracy and regulatory compliance.

4. Trust Authenticator / Trust Director – Ensures that AI systems maintain integrity and reliability in their outputs.

5. A.I. Ethicist – Develops ethical frameworks and defensible logic to guide AI decision-making.

6. Legal Guarantor (“Sin Eater”) – Legally certifies AI-generated outputs, taking final responsibility for decisions.

7. Consistency Coordinator – Ensures AI outputs remain consistent across various products or representations.

8. Escalation Officer – Provides human empathy and intervention when AI interactions fail or feel too impersonal.

Integration-Related Jobs

9. A.I. Integrator – Specializes in strategically embedding AI tools within business workflows and operations.

10. A.I. Plumber – Diagnoses and repairs complex, layered AI system malfunctions.

11. A.I. Assessor – Evaluates the quality, effectiveness, and suitability of AI models.

12. Integration Specialist – Implements customized AI solutions within specific organizational contexts.

13. A.I. Trainer – Curates and manages training data to optimize AI performance specific to an organization’s needs.

Image by Ralph Losey using AI. This new work is already in play.

14. A.I. Personality Director – Develops and fine-tunes AI personalities to align with a company’s brand identity.

Image by Ralph Losey using AI tools.

15. Drug-Compliance Optimizer (Healthcare-Specific) – Creates AI-driven systems to ensure patients properly follow medication protocols.

16. A.I./Human Evaluation Specialist – Determines optimal task distribution between humans and AI or hybrid teams.

Evaluation Specialist assigning great jobs to everyone. AI image by Losey.

Taste-Related Jobs

17. Article Designer / Story Designer – Uses AI to shape creative content by selecting, arranging, and refining AI-generated material.

18. World Designer – Creates cohesive fictional universes or branded environments using AI tools.

19. Human Resources Designer – Crafts comprehensive organizational policies, training, and culture through AI-enhanced decision-making.

20. Civil Designer – Applies creative, aesthetic judgment to infrastructure projects, guiding AI-generated engineering outputs.

Human civil designers guide the AI workers. Image by Ralph Losey.

21. Differentiation Designer – Strategically positions businesses by creatively leveraging AI to enhance branding, customer interaction, and unique market propositions.

22. Product Designer (Expanded Role) – Oversees the entire design lifecycle of products, from concept to execution, largely through AI-assisted tools and processes.

Each of these roles represents areas where human oversight, judgment, empathy, and creativity complement AI’s technical capabilities, creating potential new employment landscapes.

New complementary hybrid employment image shown in hybrid Sci-Fi and Impressionism style by Losey.

Would you like to follow up, clarify, or pivot?

AI impression of Professor Mollick by Losey using AI.

USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

4.5 PANEL:

🎓 Ethical AI Lawyer — Legal and Ethical Evaluation

The concept of a “Sin Eater,” or Legal Guarantor, is intriguing and potentially essential. It addresses the critical gap where AI systems lack legal personhood and cannot bear legal liability. The precedent set by Mata v. Avianca Inc. highlights that liability for AI-generated errors ultimately rests with human agents or organizations behind them. Mollick’s metaphor accurately emphasizes accountability but raises concerns about moral hazard. If a human is simply rubber-stamping AI outputs without rigorous oversight, legal and ethical responsibility becomes diluted. The “Sin Eater” must therefore actively scrutinize outputs and possess real decision-making authority.

AI Ethics Law Specialist. Already existing job that will grow fast. Ethical AI image by Losey.

🧠 AI Engineer — Technical Realities and Challenges

The practicality of a “Sin Eater” depends heavily on their ability to understand the underlying complexity of AI systems. Such individuals will need substantial technical knowledge or robust AI translators assisting them, as they must discern when AI decisions are trustworthy versus when deeper technical intervention is required. While technically feasible, the effectiveness of the role might diminish if used purely symbolically, rather than practically, turning the “Sin Eater” into merely a ceremonial checkpoint rather than a meaningful oversight mechanism.

✨ Creative Storyteller — Cultural and Social Implications

The narrative behind the “Sin Eater” is compelling. Historically, sin eaters took on the moral transgressions of others to absolve communal guilt. Mollick’s analogy poetically encapsulates the cultural unease around AI accountability. It reflects society’s desire for tangible human responsibility even as AI systems increasingly influence critical decisions. Yet, there’s a risk that this role could cultivate complacency, culturally signaling that as long as someone signs off, AI actions are morally and socially acceptable, potentially masking deeper ethical challenges.

Priest hearing confessions. Medieval Manuscript style by Losey.

🛡 Risk Officer — Regulatory and Governance Concerns

The “Sin Eater” or Legal Guarantor role is inherently aligned with regulatory trends requiring human oversight of AI decisions, like those embodied in the GDPR’s Article 22 and the EU AI Act. While beneficial in theory, the role presents significant risks if misunderstood as a procedural formality rather than a substantive control measure. Organizations could misuse this concept to shield themselves from genuine accountability, undermining regulatory intent and possibly increasing liability exposure if scrutiny reveals inadequate oversight.

⚔️ The Contrarian

The idea of a “Sin Eater” may be rhetorically appealing but practically flawed. The role risks becoming symbolic rather than substantial, potentially absolving organizations of real accountability by providing a superficial veneer of human oversight. Moreover, from an ethical standpoint, it invites individuals to bear responsibility for AI-generated decisions without genuine authority or influence over the AI processes, possibly perpetuating systemic issues like algorithmic bias. The role must be clearly defined, empowered, and integrated into a robust governance framework to be meaningful, otherwise, it will merely obscure responsibility rather than clarify it.

Sin Eater is medieval manuscript style by Losey and AI.

Would you like to follow up, clarify, or pivot?


2nd Demo: Question the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4.5 PANEL: 🎓 Ethical AI Lawyer — “AI Rights Ombudsman”

An AI Rights Ombudsman would mediate and advocate for individuals adversely impacted by AI-driven decisions or systems. Unlike auditors or ethicists who operate within organizations, the Ombudsman would work independently or within regulatory agencies to investigate complaints, arbitrate disputes, and advocate policy improvements. For instance, they might support someone unjustly flagged by facial-recognition surveillance or denied insurance due to opaque AI credit assessments. Their role would be akin to human rights advocates, requiring both technical understanding of AI systems and expertise in constitutional, civil rights, and administrative law to effectively challenge or mediate automated injustices.

AI Rights Ombudsman will likely all be a lawyer by background. Image by Losey in anime style.

🧠 AI Engineer — “Synthetic Reality Architect”

A Synthetic Reality Architect designs comprehensive, realistic digital environments specifically tailored for extended human-AI collaboration and interaction. They will go beyond traditional VR designers by integrating real-time adaptive AI models that dynamically respond to user emotions, intentions, and behaviors. Architects will curate highly personalized digital worlds for professional training, mental-health therapy, immersive education, or interactive storytelling. Their work involves advanced skills in AI-driven generative modeling, behavioral analytics, and immersive technologies, ensuring users remain deeply engaged in believable and responsive virtual environments.

Anime seems the perfect style for these new job concepts. AI images by Losey.

✨ Creative Storyteller — “Emotion Orchestrator”

An Emotion Orchestrator would carefully design the emotional dynamics and narrative structures embedded in human-AI interactions, especially in fields like education, entertainment, and mental health. Their task is to script emotional journeys, carefully calibrating AI responses to evoke desired emotional states—such as curiosity, empathy, excitement, or calmness. For example, they would shape how a therapeutic AI companion subtly navigates sensitive conversations or how an educational AI agent strategically boosts student motivation. The Orchestrator’s expertise would blend psychological insights, narrative storytelling skills, and nuanced understanding of emotional intelligence in AI systems.

AI needs human help with anything having to do with emotions. Anime image by Losey.

🛡 Risk Officer — “Algorithmic Conflict Mediator”

An Algorithmic Conflict Mediator specializes in resolving disputes arising when competing AI systems create contradictory or incompatible outcomes—particularly in high-stakes settings like smart-city infrastructure, financial markets, or logistics management. Their role involves evaluating systemic conflicts, negotiating solutions among stakeholders whose AI systems clash (such as autonomous vehicles interacting unpredictably in traffic or algorithmic financial strategies triggering market volatility), and establishing collaborative frameworks to avoid recurring conflicts. This mediator must have deep understanding of algorithmic decision-making logic, conflict-resolution techniques, and systems integration.

AI conflict mediators in anime style by Losey.

⚔️ The Contrarian — “AI Decommissioner”

An AI Decommissioner is responsible for ethically, safely, and systematically phasing out AI systems that become obsolete, hazardous, or ethically unacceptable. Unlike traditional tech disposals, AI Decommissioners must manage delicate human factors, including workforce transitions, public communications, and historical preservation of significant AI innovations. Their work might involve the careful deactivation of problematic algorithms used in social-media moderation, or controversial facial-recognition tools. They would possess skills spanning technology, ethics, regulatory compliance, and crisis management, ensuring AI retirements occur transparently and responsibly.

Ending the life cycle of AIs will likely be controversial on many levels. Image in anime style by Losey.

Conclusion

In this follow-up article, Ralph Losey demonstrated the new software, Panel of Experts for Everyone About Anything, and, at the same time, introduced twenty-two possible new jobs made necessary by AI, according to Robert Capps' article, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You. The new jobs are grouped in the article into three categories: trust, integration, and taste. Both the article and the Panel's analysis emphasize the importance of human involvement in AI processes.

I also asked the Panelists to each come up with one new job type not in the article. They imagined five more interesting jobs, many ideal for lawyers with AI tech skills:

  • AI Rights Ombudsman
  • Synthetic Reality Architect
  • Emotion Orchestrator
  • Algorithmic Conflict Mediator
  • AI Decommissioner

Altogether, this article discussed twenty-seven new types of human employment created by AI technology.

27 new job types identified. Illustrated using anime figures by Losey using AI.

One of the new jobs was examined in detail: AI "Sin Eaters." This is a proposed job envisioned by Professor Ethan Mollick in which specially trained humans in organizations assume legal accountability for AI-generated outputs, bridging the gap created by AI's lack of legal personhood. Some on the Panel questioned the effectiveness of the Sin Eater role. My own opinion is mixed; it all depends. Still, I am certain some kind of human employment like this will emerge, and it will involve legal skills. Insurance companies and their adjusters will likely play a big role here as well.

Humans and AI working together to practice law. They will soon need each other. Image by Losey.

This series will conclude with Part Three, which provides another demonstration of the software, this time driven by the free OpenAI model 4o instead of the subscription version 4.5 demonstrated in Part Two. The free model is surprisingly good, and even if you can afford other models, you may want to use ChatGPT 4o, if for nothing else, to provide a second opinion.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article as part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Two: a demonstration by analysis of an article predicting new jobs created by AI. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me.

Click here to listen to the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.

