2025 Year in Review: Beyond Adoption—Entering the Era of AI Entanglement and Quantum Law


Ralph Losey, December 31, 2025

As I sit here reflecting on 2025—a year that began with the mind-bending mathematics of the multiverse and ended with the gritty reality of cross-examining algorithms—I am struck by a singular realization. We have moved past the era of mere AI adoption. We have entered the era of entanglement, where we must navigate the new physics of quantum law using the ancient legal tools of skepticism and verification.

A split image illustrating two concepts: on the left, 'AI Adoption' showing an individual with traditional tools and paperwork; on the right, 'AI Entanglement' featuring the same individual surrounded by advanced technology and integrated AI systems.
In 2025 we moved from AI Adoption to AI Entanglement. All images by Losey using many AIs.

We are learning how to merge with AI while remaining in control of our minds and our actions. This requires human training, not just AI training. As it turns out, many lawyers are well prepared for this new type of human training by their past legal training and skeptical attitude. We can quickly learn to train our minds to maintain control while becoming entangled with advanced AIs and the accelerated reasoning and memory capacities they can bring.

A futuristic woman with digital circuitry patterns on her face interacts with holographic data displays in a high-tech environment.
Trained humans can be enhanced by total entanglement with AI without losing control or their separate identity. Click here or the image to see the video on YouTube.

In 2024, we looked at AI as a tool, a curiosity, perhaps a threat. By the end of 2025, the tool woke up—not with consciousness, but with “agency.” We stopped typing prompts into a void and started negotiating with “agents” that act and reason. We learned to treat these agents not as oracles, but as ‘consulting experts’—brilliant but untested entities whose work must remain privileged until rigorously cross-examined and verified by a human attorney. That puts human legal minds in control and stops the hallucinations in what I called the “H-Y-B-R-I-D” workflows of the modern law office.

We are still way smarter than they are and can keep our own agency and control. But for how long? The AI abilities are improving quickly but so are our own abilities to use them. We can be ready. We must. To stay ahead, we should begin the training in earnest in 2026.

A humanoid robot with glowing accents stands looking out over a city skyline at sunset, next to a man in a suit who observes the scene thoughtfully.
Integrate your mind and work with full AI entanglement. Click here or the image to see video on YouTube.

Here is my review of the patterns, the epiphanies, and the necessary illusions of 2025.

I. The Quantum Prelude: Listening for Echoes in the Multiverse

We began the year not in the courtroom, but in the laboratory. In January, and again in October, we grappled with a shift in physics that demands a shift in law. When Google’s Willow chip performed a calculation in five minutes that would take a classical supercomputer ten septillion years, it did more than break a speed record; it cracked the door to the multiverse. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (Jan. 2025).

The scientific consensus solidified in October when the Nobel Prize in Physics was awarded to three pioneers—including Google’s own Chief Scientist of Quantum Hardware, Michel Devoret—for proving that quantum behavior operates at a macroscopic level. Quantum Echo: Nobel Prize in Physics Goes to Quantum Computer Trio (Two from Google) Who Broke Through Walls Forty Years Ago; and Google’s New ‘Quantum Echoes Algorithm’ and My Last Article, ‘Quantum Echo’ (Oct. 2025).

For lawyers, the implication of “Quantum Echoes” is profound: we are moving from a binary world of “true/false” to a quantum world of “probabilistic truth”. Verification is no longer about identical replication, but about “faithful resonance”—hearing the echo of validity within an accepted margin of error.

But this new physics brings a twin peril: Q-Day. As I warned in January, the same resonance that verifies truth also dissolves secrecy. We are racing toward the moment when quantum processors will shatter RSA encryption, forcing lawyers to secure client confidences against a ‘harvest now, decrypt later’ threat that is no longer theoretical.
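A rough way to quantify the ‘harvest now, decrypt later’ exposure is Michele Mosca’s well-known inequality: if the years your data must stay confidential, plus the years needed to migrate to post-quantum encryption, exceed the years until a cryptographically relevant quantum computer arrives, the data is already at risk. Here is a minimal Python sketch of that triage; the horizon figures are hypothetical placeholders, not predictions.

```python
# Mosca's inequality: risk exists if x + y > z, where
#   x = years the data must remain confidential (shelf life)
#   y = years needed to migrate systems to post-quantum encryption
#   z = years until a cryptographically relevant quantum computer (Q-Day)
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_to_q_day: float) -> bool:
    """Return True if intercepted ciphertext could outlive its encryption."""
    return shelf_life_years + migration_years > years_to_q_day

# Hypothetical: client trade secrets need 15 years of secrecy, migration
# takes 3 years, and Q-Day is assumed to be 10 years out.
print(harvest_now_decrypt_later_risk(15, 3, 10))  # True -> migrate now
```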

We are witnessing the birth of Quantum Law, where evidence is authenticated not by a hash value, but by ‘replication hearings’ designed to test for ‘faithful resonance.’ We are moving toward a legal standard where truth is defined not by an identical binary match, but by whether a result falls within a statistically accepted bandwidth of similarity—confirming that the digital echo rings true.
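As a thought experiment, the core of a replication hearing might reduce to a bandwidth comparison like the following Python sketch. The cosine-similarity metric, the 0.99 threshold, and the result vectors are all hypothetical illustrations of the ‘faithful resonance’ standard, not an actual forensic protocol.

```python
import math

def fidelity(expected: list[float], actual: list[float]) -> float:
    """Cosine similarity between the expected and replicated result vectors."""
    dot = sum(e * a for e, a in zip(expected, actual))
    norm = (math.sqrt(sum(e * e for e in expected))
            * math.sqrt(sum(a * a for a in actual)))
    return dot / norm

def echo_rings_true(expected, actual, threshold=0.99):
    """Authenticate if the replication falls within the accepted bandwidth."""
    return fidelity(expected, actual) >= threshold

# Hypothetical replication: not an identical binary match, yet faithfully
# resonant, so the digital echo rings true.
print(echo_rings_true([0.70, 0.20, 0.10], [0.69, 0.21, 0.10]))  # True
```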

A digital display showing a quantum interference graph with annotations for expected and actual results, including a fidelity score of 99.2% and data on error rates and system status.
Quantum Replication Hearings Are Probable in the Future.

II. China Awakens and Kick-Starts Transparency

While these quantum dangers gestated, the AI industry suffered a massive geopolitical shock on January 30, 2025. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. The release of China’s DeepSeek not only scared the market for a short time; it forced the industry’s hand on transparency. It accelerated the shift from ‘black box’ oracles to what Dario Amodei calls ‘AI MRI’—models that display their ‘chain of thought.’ See my DeepSeek sequel, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This display feature became the cornerstone of my later 2025 AI testing.

My Why the Release article also revealed the hype and propaganda behind China’s DeepSeek. Other independent analysts eventually agreed, the market quickly rebounded, and the political and military motives became obvious.

A digital artwork depicting two armed soldiers facing each other, one representing the United States with the American flag in the background and the other representing China with the Chinese flag behind. Human soldiers are flanked by robotic machines symbolizing advanced military technology, set against a futuristic backdrop.
The Arms Race today is AI, tomorrow Quantum. So far, propaganda is the weapon of choice of AI agents.

III. Saving Truth from the Memory Hole

Reeling from China’s propaganda, I revisited George Orwell’s Nineteen Eighty-Four to ask a pressing question for the digital age: Can truth survive the delete key? Orwell feared the physical incineration of inconvenient facts. Today, authoritarian revisionism requires only code. In the article I also examined the “Great Firewall” of China and its attempt to erase the history of Tiananmen Square as a grim case study of enforced collective amnesia. Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.

My conclusion in the article was ultimately optimistic. Unlike paper, digital truth thrives on redundancy. I highlighted resources like the Internet Archive’s Wayback Machine—which holds over 916 billion web pages—as proof that while local censorship is possible, global erasure is nearly unachievable. The true danger we face is not the disappearance of records, but the exhaustion of the citizenry. The modern “memory hole” is psychological; it relies on flooding the zone with misinformation until the public becomes too apathetic to distinguish truth from lies. Our defense must be both technological preservation and psychological resilience.

A graphic depiction of a uniformed figure with a Nazi armband operating a machine that processes documents, with an eye in the background and the slogan 'IGNORANCE IS STRENGTH' prominently displayed at the top.
Changing history to support political tyranny. Orwell’s warning.

Despite my optimism, I remained troubled in 2025 about our geo-political situation and the military threats of AI controlled by dictators, including, but not limited to, the People’s Republic of China. One of my articles on this topic featured the last book of Henry Kissinger, which he completed with Eric Schmidt just days before his death in late 2023 at age 100. Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Kissinger died very worried about the great potential dangers of a Chinese military with an AI advantage. The same concern applies to a quantum advantage too, although that is thought to be farther off in time.

IV. Bench Testing the AI Models of the First Half of 2025

I spent a great deal of time in 2025 testing the legal reasoning abilities of the major AI players, primarily because no one else was doing it, not even the AI companies themselves. So I wrote seven articles in 2025 concerning benchmark-type testing of legal reasoning. In most tests I used actual Bar exam questions that were too new to be part of the AI training data. I called this my Bar Battle of the Bots series, listed here in sequential order:

  1. Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek. February 6, 2025
  2. Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. February 12, 2025
  3. Bar Battle of the Bots – Part One. February 26, 2025
  4. Bar Battle of the Bots – Part Two. March 5, 2025
  5. New Battle of the Bots: ChatGPT 4.5 Challenges Reigning Champ ChatGPT 4o. March 13, 2025
  6. Bar Battle of the Bots – Part Four: Birth of Scorpio. May 2025
  7. Bots Battle for Supremacy in Legal Reasoning – Part Five: Reigning Champion, Orion, ChatGPT-4.5 Versus Scorpio, ChatGPT-o3. May 2025.
Two humanoid robots fighting against each other in a boxing ring, surrounded by a captivated audience.
Battle of the legal bots, 7-part series.

The series concluded in May when the prior dominance of ChatGPT-4o (Omni) and ChatGPT-4.5 (Orion) was challenged by the “little scorpion,” ChatGPT-o3. Nicknamed Scorpio in honor of the mythic slayer of Orion, this model displayed a tenacity and depth of legal reasoning that earned it a knockout victory. Specifically, while the mighty Orion missed the subtle ‘concurrent client conflict’ and ‘fraudulent inducement’ issues in the diamond dealer hypothetical, the smaller Scorpio caught them—proving that in law, attention to ethical nuance beats raw processing power. Of course, many models have been released since May 2025, so I may do this again in 2026. For legal reasoning the two major contenders still seem to be Gemini and ChatGPT.

Aside from legal reasoning capabilities, these tests revealed, once again, that all of the models remained fundamentally jagged. See e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7% (Sec. 5 – Study Consistent with Jagged Frontier research of Harvard and others). Even the best models missed obvious issues like fraudulent inducement or concurrent conflicts of interest until pushed. The lesson? AI reasoning has reached the “average lawyer” level—a “C” grade—but even when it excels, it still lacks the “superintelligent” spark of the top 3% of human practitioners. It also still suffers from unexpected lapses of ability, living, as all AI now does, on the Jagged Frontier. This may change some day, but we have not seen it yet.

A stylized illustration of a jagged mountain range with a winding path leading to the peak, set against a muted blue and beige background, labeled 'JAGGED FRONTIER.'
See Harvard Business School’s Navigating the Jagged Technological Frontier and my humble papers, From Centaurs To Cyborgs, and Navigating the AI Frontier.

V. The Shift to Agency: From Prompters to Partners

If 2024 was the year of the Chatbot, 2025 was the year of the Agent. We saw the transition from passive text generators to “agentic AI”—systems capable of planning, executing, and iterating on complex workflows. I wrote two articles on AI agents in 2025. In June, From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice and in November, The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%.

Agency was mentioned in many of my other articles in 2025. For instance, in June and July I released the ‘Panel of Experts’—a free custom GPT tool that demonstrated AI’s surprising ability to split into multiple virtual personas to debate a problem. Panel of Experts for Everyone About Anything, Part One, Part Two, and Part Three. Crucially, we learned that ‘agentic’ teams work best when they include a mandatory ‘Contrarian’ or Devil’s Advocate. This proved that the most effective cure for AI sycophancy—its tendency to blindly agree with humans—is structural internal dissent.

By the end of 2025 we were already moving from AI adoption to close entanglement of AI in our everyday lives.

An artistic representation of a human hand reaching out to a robotic hand, signifying the concept of 'entanglement' in AI technology, with the year 2025 prominently displayed.
Close hybrid multimodal methods of AI use were proven effective in 2025 and are leading inexorably to full AI entanglement.

This shift forced us to confront the role of the “Sin Eater”—a concept I explored via Professor Ethan Mollick. As agents take on more autonomous tasks, who bears the moral and legal weight of their errors? In the legal profession, the answer remains clear: we do. This reality birthed the ‘AI Risk-Mitigation Officer‘—a new career path I profiled in July. These professionals are the modern Sin Eaters, standing as the liability firewall between autonomous code and the client’s life, navigating the twin perils of unchecked risk and paralysis by over-regulation.

But agency operates at a macro level, too. In June, I analyzed the then hot Trump–Musk dispute to highlight a new legal fault line: the rise of what I called the ‘Sovereign Technologist.’ When private actors control critical infrastructure—from satellite networks to foundation models—they challenge the state’s monopoly on power. We are still witnessing a constitutional stress-test where the ‘agency’ of Tech Titans is becoming as legally disruptive as the agents they build.

As these agents became more autonomous, the legal profession was forced to confront an ancient question in a new guise: If an AI acts like a person, should the law treat it like one? In October, I explored this in From Ships to Silicon: Personhood and Evidence in the Age of AI. I traced the history of legal fictions—from the steamship Siren to modern corporations—to ask if silicon might be next.

While the philosophical debate over AI consciousness rages, I argued the immediate crisis is evidentiary. We are approaching a moment where AI outputs resemble testimony. This demands new tools, such as the ALAP (AI Log Authentication Protocol) and Replication Hearings, to ensure that when an AI ‘takes the stand,’ we can test its veracity with the same rigor we apply to human witnesses.
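A protocol like ALAP could be prototyped with nothing more exotic than a tamper-evident hash chain. The sketch below is my own minimal illustration of the concept, not a published specification: each entry in an AI session log is hashed together with the hash of the entry before it, so any later alteration breaks every subsequent link and the log fails authentication.

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> str:
    """Hash a log record together with the previous hash (tamper-evident)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(records: list[dict]) -> list[str]:
    hashes, prev = [], "GENESIS"
    for rec in records:
        prev = chain_entry(prev, rec)
        hashes.append(prev)
    return hashes

def verify_chain(records: list[dict], hashes: list[str]) -> bool:
    """Re-derive the chain; one edited record invalidates all later hashes."""
    return build_chain(records) == hashes

# Hypothetical AI session log offered as evidence.
log = [{"role": "user", "text": "Summarize the contract."},
       {"role": "ai", "text": "The contract provides ..."}]
sealed = build_chain(log)
log[1]["text"] = "Altered output"   # simulated tampering after the fact
print(verify_chain(log, sealed))    # False -> fails authentication
```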

VI. The New Geometry of Justice: Topology and Archetypes

To understand these risks, we had to look backward to move forward. I turned to the ancient visual language of the Tarot to map the “Top 22 Dangers of AI,” realizing that archetypes like The Fool (reckless innovation) and The Tower (bias-driven collapse) explain our predicament better than any white paper. See, Archetypes Over Algorithms; Zero to One: A Visual Guide to Understanding the Top 22 Dangers of AI. Also see, Afraid of AI? Learn the Seven Cardinal Dangers and How to Stay Safe.

But visual metaphors were only half the equation; I also needed to test the machine’s own ability to see unseen connections. In August, I launched a deep experiment titled Epiphanies or Illusions? (Part One and Part Two), designed to determine if AI could distinguish between genuine cross-disciplinary insights and apophenia—the delusion of seeing meaningful patterns in random data, like a face on Mars or a figure in toast.

I challenged the models to find valid, novel connections between unrelated fields. To my surprise, they succeeded, identifying five distinct patterns ranging from judicial linguistic styles to quantum ethics. The strongest of these epiphanies was the link between mathematical topology and distributed liability—a discovery that proved AI could do more than mimic; it could synthesize new knowledge.

This epiphany led to an investigation of the use of advanced mathematics, with AI’s help, to map liability. In The Shape of Justice, I introduced “Topological Jurisprudence”—using topological network mapping to visualize causation in complex disasters. By mapping the dynamic links in a hypothetical, the topological map revealed that the causal lanes merged before the control signal reached the manufacturer’s product, proving the manufacturer had zero causal connection to the crash despite being enmeshed in the system. We utilized topology to do what linear logic could not: mathematically exonerate the innocent parties in a chaotic system.
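In network terms, ‘zero causal connection’ is simply unreachability: no directed path runs from the defendant’s node to the harm. Full topological analysis is richer than this, but even a minimal graph sketch captures the exoneration logic. The node names below are invented for illustration and only loosely follow the hypothetical.

```python
from collections import deque

# Hypothetical directed causal network from a multi-party crash scenario.
causal_graph = {
    "hacker": ["control_signal"],
    "grid_operator": ["control_signal"],
    "control_signal": ["vehicle_software", "crash"],
    "vehicle_software": ["crash"],
    "manufacturer": ["vehicle_hardware"],  # this lane never reaches the crash
    "vehicle_hardware": [],
}

def causally_connected(graph: dict, actor: str, harm: str) -> bool:
    """Breadth-first search for any directed causal path from actor to harm."""
    queue, seen = deque([actor]), {actor}
    while queue:
        node = queue.popleft()
        if node == harm:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(causally_connected(causal_graph, "hacker", "crash"))        # True
print(causally_connected(causal_graph, "manufacturer", "crash"))  # False -> exonerated
```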

A person in a judicial robe stands in front of a glowing, intricate, knot-like structure representing complex data or ideas, symbolizing the intersection of law and advanced technology.
Topological Jurisprudence: the possible use of AI to find order in chaos with higher math. Click here to see YouTube video introduction.

VII. The Human Edge: The Hybrid Mandate

Perhaps the most critical insight of 2025 came from the Stanford-Carnegie Mellon study I analyzed in December: Hybrid AI teams beat fully autonomous agents by 68.7%.

This data point vindicated my long-standing advocacy for the “Centaur” or “Cyborg” approach. This vindication led to the formalization of the H-Y-B-R-I-D protocol: Human in charge, Yield programmable steps, Boundaries on usage, Review with provenance, Instrument/log everything, and Disclose usage. This isn’t just theory; it is the new standard of care.
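To show what that standard of care might look like in practice, here is a minimal Python sketch of a single H-Y-B-R-I-D workflow step. The function names and log format are my own hypothetical illustration, not an established implementation; a real system would wire in the actual AI call and a durable audit store.

```python
import datetime
import json

AUDIT_LOG = []  # I: Instrument/log everything

def hybrid_step(task: str, ai_output: str, reviewer: str, approved: bool) -> dict:
    """Record one discrete workflow step under the H-Y-B-R-I-D protocol."""
    entry = {
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "task": task,              # Y: yield programmable, discrete steps
        "ai_output": ai_output,
        "reviewer": reviewer,      # H: a named human stays in charge
        "approved": approved,      # R: review with provenance
        "disclosure": "Prepared with AI assistance; human-verified.",  # D: disclose
    }
    AUDIT_LOG.append(entry)
    return entry

def release(entry: dict) -> str:
    """B: boundary check; nothing leaves the office without human sign-off."""
    if not entry["approved"]:
        raise PermissionError("Unreviewed AI output may not be filed or sent.")
    return json.dumps(entry, indent=2)

# Hypothetical usage:
step = hybrid_step("Draft motion summary", "The court held ...", "R. Losey", True)
print(release(step))
```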

My “Human Edge” article buttressed the need for keeping a human in control. I wrote this in January 2025 and it remains a personal favorite. The Human Edge: How AI Can Assist But Never Replace. Generative AI is a one-dimensional thinking tool, limited to what I called ‘cold cognition’—pure data processing devoid of the emotional and biological context that drives human judgment. Humans remain multidimensional beings of empathy, intuition, and awareness of mortality.

AI can simulate an apology, but it cannot feel regret. That existential difference is the ‘Human Edge’ no algorithm can replicate. This claim of a human edge is not based on sentimental platitudes; it rests on measurable performance differences.

I explored the deeper why behind this metric in June, responding to the question of whether AI would eventually capture all legal know-how. In AI Can Improve Great Lawyers—But It Can’t Replace Them, I argued that the most valuable legal work is contextual and emergent. It arises from specific moments in space and time—a witness’s hesitation, a judge’s raised eyebrow—that AI, lacking embodied awareness, cannot perceive.

We must practice ‘ontological humility.’ We must recognize that while AI is a ‘brilliant parrot’ with a photographic memory, it has no inner life. It can simulate reasoning, but it cannot originate the improvisational strategy required in high-stakes practice. That capability remains the exclusive province of the human attorney.

A futuristic office scene featuring humanoid robots and diverse professionals collaborating at high-tech desks, with digital displays in a skyline setting.
AI data-analysis servants assisting trained humans with project drudge-work. Close interaction approaching multilevel entanglement. Click here or image for YouTube animation.

Consistent with this insight, I wrote at the end of 2025 that the cure for AI hallucinations isn’t better code—it’s better lawyering. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. We must skeptically supervise our AI, treating it not as an oracle, but as a secret consulting expert. As I warned, the moment you rely on AI output without verification, you promote it to a ‘testifying expert,’ making its hallucinations and errors discoverable. It must be probed, challenged, and verified before it ever sees a judge. Otherwise, you are inviting sanctions for misuse of AI.
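At its simplest, one phase of that verification can even be automated as a first pass. The sketch below flags any citation in a draft that cannot be matched against a trusted source; the case names are invented placeholders, and a real check would query an actual legal database rather than a hard-coded set.

```python
def cross_examine_citations(draft_citations: list[str],
                            verified_sources: set[str]) -> dict:
    """First-pass check: flag any citation absent from a trusted source."""
    return {cite: ("verified" if cite in verified_sources
                   else "HALLUCINATION RISK: verify by hand")
            for cite in draft_citations}

# Hypothetical draft citations and a trusted-database excerpt.
draft = ["Real Case v. Verified Co.", "Phantom v. Invented Airline"]
trusted = {"Real Case v. Verified Co."}
print(cross_examine_citations(draft, trusted))
```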

Infographic titled 'Cross-Examine Your AI: A Lawyer's Guide to Preventing Hallucinations' outlining a protocol for legal professionals to verify AI-generated content. Key sections highlight the problem of unchecked AI, the importance of verification, and a three-phase protocol involving preparation, interrogation, and verification.
Infographic of Cross-Exam ideas. Click here for full size image.

VIII. Conclusion: Guardians of the Entangled Era

As we close the book on 2025, we stand at the crossroads described by Sam Altman and warned of by Henry Kissinger. We have opened Pandora’s box, or perhaps the Magician’s chest. The demons of bias, drift, and hallucination are out, alongside the new geopolitical risks of the “Sovereign Technologist.” But so is Hope. As I noted in my review of Dario Amodei’s work, we must balance the necessary caution of the “AI MRI”—peering into the black box to understand its dangers—with the “breath of fresh air” provided by his vision of “Machines of Loving Grace,” promising breakthroughs in biology and governance.

The defining insight of this year’s work is that we are not being replaced; we are being promoted. We have graduated from drafters to editors, from searchers to verifiers, and from prompters to partners. But this promotion comes with a heavy mandate. The future belongs to those who can wield these agents with a skeptic’s eye and a humanist’s heart.

We must remember that even the most advanced AI is a one-dimensional thinking tool. We remain multidimensional beings—anchored in the physical world, possessed of empathy, intuition, and an acute awareness of our own mortality. That is the “Human Edge,” and it is the one thing no quantum chip can replicate.

Let us move into 2026 not as passive users entangled in a web we do not understand, but as active guardians of that edge—using the ancient tools of the law to govern the new physics of intelligence.

Infographic summarizing the key advancements and societal implications of AI in 2025, highlighting topics such as quantum computing, agentic AI, and societal risk management.
Click here for full size infographic suitable for framing for super-nerds and techno-historians.

Ralph Losey Copyright 2025 — All Rights Reserved


Navigating AI’s Twin Perils: The Rise of the Risk-Mitigation Officer


Ralph Losey, July 28, 2025

Generative AI is not just disrupting industries—it is redefining what it means to trust, govern, and be accountable in the digital age. At the forefront of this evolution stands a new, critical line of employment: AI Risk-Mitigation Officers. This position demands a sophisticated blend of technical expertise, regulatory acumen, ethical judgment, and organizational leadership. Driven by the EU’s stringent AI Act and a rapidly expanding landscape of U.S. state and federal compliance frameworks, businesses now face an urgent imperative: manage AI risks proactively or confront severe legal, reputational, and operational consequences.

Click to see photo come to life. Image and movie by Losey with AI.

This aligns with a growing consensus: AI, like earlier waves of innovation, will create more jobs than it eliminates. The AI Risk-Mitigation Officer stands as Exhibit A in this next wave of tech-era employment. See e.g. my last series of blogs, Part Two: Demonstration by analysis of an article predicting new jobs created by AI (27 new job predictions); Part Three: Demo of 4o as Panel Driver on New Jobs (more experts discuss the 27 new jobs). See Also: Robert Capps, NYT magazine article: A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.  (NYT Magazine, June 17, 2025). In a few key areas, humans will be more essential than ever.

Risk Mitigation Officer team image by Losey and AI.

Defining the Role of the AI Risk-Mitigation Officer

The AI Risk-Mitigation Officer is a strategic, cross-functional leader tasked with identifying, assessing, and mitigating risks inherent in AI deployment.

While Chief AI Officers drive innovation and adoption, Risk-Mitigation Officers focus on safety, accountability, and compliance. Their mandate is not to slow progress, but to ensure it proceeds responsibly. In this respect, they are akin to data protection officers or aviation safety engineers, guardians of trust in high-stakes systems.

This role requires a sober analysis of what can go wrong—balanced against what might go wonderfully right. It is a job of risk mitigation, not elimination. Not every error can or should be prevented; some mistakes are tolerable and even expected in pursuit of meaningful progress.

The key is to reduce high-severity risks to acceptable levels—especially those that could lead to catastrophic harm or irreparable public distrust. If unmanaged, such failures can derail entire programs, damage lives, and trigger heavy regulatory backlash.

Both Chief AI Officers and Risk-Mitigation Officers ultimately share the same goal: the responsible acceleration of AI, including emerging domains like AI-powered robotics.

The Risk-Mitigation Office should lead internal education efforts to instill this shared vision—demonstrating that smart governance isn’t an obstacle to innovation, but its most reliable engine.

Team leader tries to lighten the mood of their serious work. Click for video by Losey.

Why The Role is Growing

The acceleration of this role is not theoretical. It is propelled by real-world failures, regulatory heat, and reputational landmines.

The 2025 World Economic Forum’s Future of Jobs Report underscores that 86% of surveyed businesses anticipate AI will fundamentally transform their operations by 2030. While AI promises substantial efficiency and innovation, it also introduces profound risks, including algorithmic discrimination, misinformation, automation failures, and significant data breaches.

A notable illustration of these risks is the now infamous Mata v. Avianca case, where lawyers filed AI-fabricated case law, underscoring why human verification is non-negotiable. Mata v. Avianca, Inc., No. 1:2022cv01461 – Document 54 (S.D.N.Y. June 22, 2023). Courts responded with sanctions. Regulators took notice. The public laughed, then worried.

The legal profession worldwide has been slow to learn from Mata the need to verify and take other steps to control AI hallucinations. See Damien Charlotin, AI Hallucination Cases (as of July 2025, over 200 cases and counting had been identified by the Paris-based legal scholar, who bears an ironically apt name for this work, Charlotin). The need for risk mitigation is growing fast. You cannot simply pass the buck to AI.

Charlatan lawyers blame others for their mistakes. Image by Losey.

Risk-mitigation employees and other new AI-related hires will balance out the AI-generated layoffs. The 2025 World Economic Forum’s Future of Jobs Report, at page 25, predicted that by 2030 there will be 11 million new jobs created and 9 million old jobs phased out. We think the numbers will be higher in both columns, but the 11-to-9 ratio may be right overall, meaning a net gain of about 2 million jobs, or roughly 22% more jobs created than displaced.

We think the ratio the WEF predicts is right for all industries overall, but the numbers are too low, meaning greater disruption but positive in the end with even more new jobs. Image by AI & Losey.

Core Responsibilities

AI Risk-Mitigation Officers are part legal scholar, part engineer, part diplomat. They know the AI Act, understand neural nets, and can hold a room full of regulators or engineers without flinching. Key responsibilities encompass:

  • AI Risk Audits: Spot trouble before it starts. Bias, black boxes, security flaws—find them early, fix them fast. This involves detailed pre-deployment evaluations to detect biases, security vulnerabilities, explainability concerns, and data protection deficiencies. It is good practice to follow up with periodic surprise audits. (See the bias-audit sketch below.)
  • Incident Response Management: Don’t wait for a headline to draft a playbook. Develop and lead response protocols for AI malfunctions or ethical violations, coordinating closely with legal, PR, and compliance teams.
  • Legal Partnership: Speak both code and contract. Collaborate with in-house counsel to interpret AI regulations, draft protective contractual clauses, and anticipate potential liability.
  • Ethics Training: Culture is the best control layer. Educate employees on responsible AI use and cultivate an ethical culture that aligns with both corporate values and regulatory standards.
  • Stakeholder Engagement: Transparency builds trust. Silence breeds suspicion. Bridge communication between technical teams, executive leadership, regulators, and the public to maintain transparency and foster trust.
One Core Responsibility of the Risk Mitigation Team. Image by Losey.
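The audit bullet above can be made concrete. One widely used screen for algorithmic discrimination is the EEOC’s “four-fifths rule” for disparate impact: a selection rate for any group below 80% of the highest group’s rate is a red flag. Here is a minimal Python sketch of that check; the selection counts are hypothetical, and a real audit would add statistical significance testing.

```python
def four_fifths_check(selected: dict, applicants: dict) -> dict:
    """Flag any group whose selection rate falls below 80% of the top rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: {"rate": round(r, 3), "flagged": r / top < 0.8}
            for g, r in rates.items()}

# Hypothetical outcomes from a pre-deployment audit of an AI hiring screen.
result = four_fifths_check(selected={"group_a": 50, "group_b": 25},
                           applicants={"group_a": 100, "group_b": 100})
print(result)  # group_b rate 0.25 is half of group_a's 0.50 -> flagged
```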

Skills and Pathways

Professionals in this role must possess:

  • Regulatory Expertise: Detailed knowledge of EU AI Act, GDPR, EEOC guidelines, and evolving state laws in the U.S.
  • Technical Proficiency: Deep understanding of machine learning, neural networks, and explainable AI methodologies.
  • Sector-Specific Knowledge: Familiarity with compliance standards across sectors such as healthcare (HIPAA, FDA), finance (SEC, MiFID II), and education (FERPA).
  • Strategic Communication: Ability to effectively mediate between AI engineers, executives, regulators, and stakeholders.
  • Ethical Judgment: Skills to navigate nuanced ethical challenges, such as balancing privacy with personalization or fairness with automation efficiency.

Career pathways for AI Risk-Mitigation Officers typically involve dual qualifications in fields like law and data science, certifications from professional organizations, and practical experience in areas like cybersecurity, legal practice, politics, or IT. Strong legal and human relationship skills are a high priority.

Image in modified isometric style by Ralph Losey using modified AIs.

U.S. and EU Regulatory Landscapes

The EU codifies risk tiers. The U.S. litigates after the fact. Navigating both requires fluency in law, logic, and diplomacy.

The EU’s AI Act classifies AI systems into four risk categories:

  • Unacceptable. AI banned altogether due to the high risk of violating fundamental rights. Examples include social scoring systems and real-time biometric identification in public spaces, as well as emotion recognition in workplaces and schools. The Act also bans AI-enabled subliminal or manipulative techniques that can be used to persuade individuals to engage in unwanted behaviors.
  • High. AI that could negatively impact the rights or safety of individuals. Examples include AI systems used in critical infrastructure (e.g., transport), legal settings (including policing and border patrol), medical, educational, and financial services, workplace management, and influencing elections and voter behavior.
  • Limited. AI with lower levels of risk than high-risk systems but still subject to transparency requirements. Examples include typical chatbots, where providers must make humans aware that AI is being used.
  • Minimal. AI systems that present little risk of harm to the rights of individuals. Examples include AI-powered video games and spam filters. No rules.
Transparent AI in office setting. Click for Losey movie.

Regulation operates on a sliding scale: Unacceptable-risk AI is banned entirely, while the Minimal-risk category faces few if any restrictions. The Limited and High risk classes require varying levels of mandatory documentation, human oversight, and external audits.
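To see how a compliance team might operationalize that sliding scale, here is a minimal Python sketch mapping hypothetical internal use cases to the Act’s four tiers. The obligation lists are simplified summaries for illustration, not legal advice, and the use-case assignments are assumptions.

```python
# Simplified EU AI Act risk tiers mapped to baseline obligations.
OBLIGATIONS = {
    "unacceptable": ["prohibited: do not deploy"],
    "high": ["conformity assessment", "human oversight",
             "technical documentation", "logging", "external audit"],
    "limited": ["transparency notice that AI is in use"],
    "minimal": [],  # no mandatory rules
}

# Hypothetical internal AI inventory with assumed tier assignments.
USE_CASE_TIERS = {
    "social_scoring": "unacceptable",
    "resume_screening": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def compliance_checklist(use_case: str) -> list[str]:
    # Default unknown systems to the strictest practical tier until classified.
    tier = USE_CASE_TIERS.get(use_case, "high")
    return OBLIGATIONS[tier]

print(compliance_checklist("resume_screening"))
```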

Meanwhile, U.S. regulatory bodies like the FTC and EEOC, along with state legislatures and state enforcement agencies, are starting to sharpen oversight tools. So far the focus has been on controlling deception, data misuse, bias and consumer harm. This has become a hot political issue in the U.S. See e.g. Scott Kohler, State AI Regulation Survived a Federal Ban. What Comes Next? (Carnegie’s Emissary, 7/3/25); Brownstein, et al, States Can Continue Regulating AI—For Now (JD Supra, 7/7/25).

AI Risk-Mitigation Officers must navigate these disparate regulatory landscapes, harmonizing proactive European requirements with the reactive, litigation-centric U.S. environment.

Legal Precedents and Ethical Challenges

Emerging legal precedents emphasize human accountability in AI-driven decisions, as evidenced by litigation involving biased hiring algorithms, discriminatory credit scoring, and flawed facial recognition technologies. Ethical dilemmas also abound. Decisions like prioritizing efficiency over empathy in healthcare, or algorithmic opacity in university admissions, require human-centric governance frameworks.

In ancient times, the Sin Eater bore others’ wrongs. Part Two: Demonstration by analysis of an article predicting new jobs created by AI (discusses the new sin eater job in detail as a person who assumes moral and legal responsibility for AI outcomes). Today’s Risk-Mitigation Officer is charged with an even more difficult task: to prevent the sins from happening at all, or at least reduce them enough to avoid Hades.

Sin Eater in combined quasi digital styles by Losey using sin-free AIs.

Balancing Innovation and Regulation

Cases such as Cambridge Analytica (personal Facebook user data used for propaganda to influence elections) and Boeing’s MCAS software (an automated system undisclosed to pilots that led to the crashes of two 737 MAX jets) demonstrate that innovation without reasonable governance is an invitation to disaster. The obvious abuses and errors in these cases could have been prevented had there been an objective officer in the room, really any responsible adult considering risks and moral grounds. For a more recent example, consider xAI’s recent fiasco with its chatbot. Grok Is Spewing Antisemitic Garbage on X (Wired, 7/8/25).

Cases like these should put us on guard but not cause overreaction. Such disasters easily trigger too much caution and too little courage. That too would be a disaster of a different kind, as it would rob us of much-needed innovation and change.

Reasonable, practical regulation can foster innovation by mitigating uncertainty and promoting stakeholder confidence. The trick is to find a proper balance between risk and reward. Many think regulators today tend to go too far in risk avoidance. They rely on Luddite fears of job loss and end-of-the world fantasies to justify extreme regulations. Many think that such extreme risk avoidance helps those in power to maintain the status quo. The pro-tech people instead favor a return to the fast pace of change that we have seen in the past.

Polarized protestors for and against technology by Losey using AI job thieves.

We went to the Moon in 1969. Yet we’re still grounded in 2025. Fear has replaced vision. Overregulation has become the new gravity.

That is why anti-Luddite technology advocates yearn for the good old days, the sixties, when fast-paced advances and adventure were welcome, even expected. If you had told anyone back in 1969 that no one would return to the Moon again, even as far out as 2025, you would have been considered crazy. Why was John F. Kennedy the last bold and courageous leader? Everyone since seems to have been paralyzed with fear. None have pushed great scientific advances. Instead, politicians on both sides of the aisle strangle NASA with never-ending budget cuts and timid machine-only missions. It seems to many that society has been overly risk averse since JFK, and this has stifled innovation. It has robbed the world of groundbreaking innovations and the great rewards that only bold innovations like AI can bring.

Take a moment to remember the 1989 movie Back to the Future Part II. In the movie “Doc” Brown, an eccentric, genius scientist, went from the past, 1985, to the future of 2015. There we saw flying cars powered by Mr. Fusion Home Energy Reactors, which were trash-fueled fusion engines. That is the kind of change we all still expected back in 1989 for the far future of 2015; but now, in 2025, ten years past the projected future, it is still just a crazy dream. The movie did, however, get many minor predictions right. Think of flat-screen TVs, video conferences, hoverboards, and self-tying shoes (Nike HyperAdapt 1.0). Fairly trivial stuff that tech go-slow Luddites approved.

Back to the Future movie made in 1989 envisioning flying cars in 2015. Now in 2025, what have we got? Photo and VIDEO by Losey using AI image generation tools.

Conclusion

The best AI Risk-Mitigation Officers will steer between the twin monsters of under-regulation and overreach. Like Odysseus, they will survive by knowing the risks and keeping a steady course between them.

They will play critical roles across society—in law firms, courts, hospitals, companies (for-profit, non-profit, and hybrid), universities, and government agencies. Their core responsibilities will include:

  • Standardization Initiatives: Collaborating with global standards organizations such as ISO, NIST, and IEEE to craft reasonable, adaptable regulations.
  • Development of AI Governance Tools: Encouraging the use of model cards, bias detection systems, and transparency dashboards to track and manage algorithmic behavior.
  • Policy Engagement and Lobbying: Actively engaging with legislative and regulatory bodies across jurisdictions—federal, state, and international—to advocate for frameworks that balance innovation with public protection.
  • Continuous Learning: Staying ahead of rapid developments through ongoing education, credentialing, and immersion in evolving legal and technological landscapes.
AI Risk Management specialists will end up with sub-specialties, maybe dept. sections, such as lobbying and research. Image by Losey daring to use AI.

As this role evolves, AI Risk Management specialists will likely develop sub-specialties—possibly forming distinct departments in areas such as regulatory lobbying, algorithm auditing, compliance architecture, and ethical AI research, spanning both hands-on work and legal study.

This is a fast-moving field. After writing this article we noticed the EU passed new rules for the largest AI companies only, all U.S. companies, of course, except one Chinese firm. The rules are non-binding at this point and involve highly controversial disclosure and copyright restrictions. EU’s AI code of practice for companies to focus on copyright, safety (Reuters, July 10, 2025). They could stifle chatbot development and leave the EU in a stagnant backwater as these companies withdraw from the EU rather than cripple their own models to comply.

Slowdown or reduction is not a viable option at this point because of national security concerns. There is a military race now between the US and China based on competing technology. An AGI level of AI will give the first government to attain it a tremendous military advantage. See e.g., Buchanan, Imbrie, The New Fire: War, Peace and Democracy in the Age of AI (MIT Press, 2022); Henry Kissinger, Allison, The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe (Foreign Affairs, 10/13/23); Also see Dario Amodei’s Vision (e-Discovery Team, Nov. 2024) (CEO of Anthropic, Dario Amodei, warns of the danger of China winning the race for AI supremacy).

Race to super-intelligent AGI image by Ralph Losey.

As Eric Schmidt explains it, this is now an existential threat and should be a bipartisan issue for survival of democracy. Kissinger, Schmidt, Mundie, Genesis: Artificial Intelligence, Hope, and the Human Spirit, (Little Brown, Nov. 2024). Also See Former Google CEO Eric Schmidt says America could lose the AI race to China (AI Ascension, May 2025).

Organizations that embrace this new professional archetype will be best positioned to shape emerging regulatory frameworks and deploy powerful, trusted AI systems—including future AI-powered robots. The AI Risk-Mitigation Officer will safeguard against catastrophic failure without throttling progress. In this way, they will help us avoid both dystopia and stagnation.

Yes, this is a demanding job. It will require new hires, team coordination, cross-functional fluency, and seamless collaboration with AI assistants. But failure to establish this critical function risks danger on both fronts: unchecked harm on one side, and paralyzing caution on the other. The best Risk-Mitigation Officers will navigate between these extremes—like Odysseus threading his ship through Scylla and Charybdis.

Odysseus successfully steering his ship between monsters on either side. Image and Video by Losey using various AIs.

We humans are a resilient species. We’ve always adapted, endured, and risen above grave dangers. We are adventurers and inventors—not cowering sheep afraid of the wolves among us.

The recent overregulation of science and technology is an aberration. It must end. We must reclaim the human spirit where bold exploration prevails, and Odysseus—not Phobos—remains our model.

We can handle the little thinking machines we built while the phobic establishment wasn’t looking. Our destiny is fusion, flying cars, miracle cures, and voyages to the Moon, Mars, and beyond.

Innovation is not the enemy of safety—it is its partner. With the right stewards, AI can carry us forward, not backward.

Let’s chart the course.

Manifest Destiny of Mankind. Image and VIDEO by Losey using AI.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: The Rise of the AI Risk-Mitigation Officer: Trust, Law, and Liability in the Age of AI. Hear two fake podcasters talk about this article for a little over 16 minutes. They wrote the podcast, not me. For the second time we also offer a Spanish version here. (Now accepting requests for languages other than English.)

Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver on New Jobs


by Ralph Losey, July 18, 2025

This is the conclusion to the article, Panel of Experts for Everyone About Anything – Part One and Part Two. Here we give another demonstration of the software described in Part One. Part Two provided a demonstration where the software used the advanced reasoning model of OpenAI, ChatGPT 4.5. The demonstration included expert discussion of the hot issue of future employment. Here we use the free version, ChatGPT 4o, to see how it does and get another perspective about new jobs that will be needed with AI. The results are surprising.

Expert panels debating. All images in article are in Steampunk style this time and by Ralph Losey using AI.

Introduction

In Panel of Experts for Everyone About Anything – Part Two, we demonstrated how the Panel software works to analyze an article. We picked a NYT Magazine article that predicted 22 new jobs that would be created by AI. The demonstration used ChatGPT 4.5 as the engine. OpenAI now allows a user to select any model to which they have access to drive a custom GPT. (For reasons unknown, this feature currently does not work on the Mac desktop version of ChatGPT.) The 4.5 model used in Part Two now requires an upgraded subscription, whereas model 4o is still free.

The cost of 4.5 may be beyond the reach of legal aid groups and some in the public using it for non-law-related questions. Thus, to be consistent with the public service intent behind the free software mentioned in Part One, we here demonstrate the software using ChatGPT 4o. For ease of comparison, and because the question of new jobs created by AI is so important, we repeat the issue again but throw in a few variations to keep it interesting.

Panels of Experts do battle to see who is the best. Image by Ralph Losey using AI.

How well will the ChatGPT 4o expert team do against the allegedly better 4.5 model? Will we learn anything new about jobs created by AI in Part Three that we did not already know from Part Two? Which panel is the winner? Which gave the best demonstration?

Read on and discover, in classic steampunk-style images, because, after all, once Einstein discovered Relativity and science quickly jumped to Quantum, shouldn’t we have continued to advance at that accelerated rate and discovered AI long ago? Think where we’d be today if we had, eh, Mr. Babbage?

Charles Babbage (1791-1871), ‘father of computers’ shown in steampunk style by Losey. What took us so long?

First Demo of 4o Model Version

In some respects, I like the 4o version of the session better than the Model 4.5 version shown in Part Two. Here are the key excerpts from the response of the Panel of Experts for Everyone About Anything when we repeated the same initial User Prompt, namely to analyze the NYT’s article by Robert Capps, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4o PANEL: Let’s curate a panel to explore the implications of the NYT article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. Before we begin the panel discussion, I’ll propose eight candidate experts based on the article’s focus on AI disruption, new job creation, and socio-technical dynamics. You’ll choose four, and I’ll automatically add the mandatory Contrarian.

🎯 Article Focus:

  • Trust-based roles (e.g., AI auditors, ethicists)
  • Integration roles (e.g., AI plumbers, integrators)
  • Taste and creative decision-making (e.g., world designers)
  • Socioeconomic shifts due to AI (entrepreneurial entry-level roles)

👥 Suggested Panelists:

  1. Ethical AI Lawyer – interprets legal responsibility and liability frameworks for trust-based roles.
  2. AI Engineer – evaluates technical feasibility and skill demands of new AI-centric jobs.
  3. Creative Storyteller – articulates the evolution of taste-driven roles like world designers and content curators.
  4. Risk Officer – identifies governance and regulatory gaps in emerging hybrid AI-human workplaces.
  5. Data Visionary – maps future employment and systemic transformation under AI’s rise.
  6. Litigation Strategist – examines legal risks of roles like “legal guarantor” or “AI auditor” under current jurisprudence.
  7. Public Health Advocate – reflects on AI’s impact on human dignity, trust, and socio-emotional labor.
  8. Quantum Analyst – cross-reads how AI role disruption mirrors past technology shifts (like CAD/CAM or simulation in science).

Please select four experts from this list. I’ll then add the Contrarian and begin the panel.

USER: I choose 1, 2, 3 and 5.

4o PANEL: . . .

Panel on new jobs created by AI in steampunk style by Losey.

🎓 Ethical AI Lawyer — Legal and Ethical Responsibility Architect

The article’s treatment of trust-based roles like “AI auditors,” “legal guarantors,” and “trust authenticators” directly engages legal norms surrounding delegation of accountability. These roles resemble constructs already debated in U.S. case law like Mata v. Avianca Inc., where counsel submitted AI-generated briefs with hallucinated citations, prompting sanctions and affirming that humans remain the accountable party despite automation.

The concept of the “sin eater” — someone who absorbs A.I.’s mistakes — raises sharp questions about the limits of vicarious liability and professional standards. No matter how advanced the AI, current jurisprudence requires traceability to a human agent. This aligns with the Fed. R. Evid. 901(b)(9) authentication requirements for complex systems, demanding explainability and provenance.

Further, the emergence of “AI translators” and “ethics boards” intersects with legal debates over due process in algorithmic decision-making, especially under frameworks like Article 22 of the GDPR and proposals for AI-specific tort law in the EU AI Liability Directive. These developments could create novel duties of disclosure and internal governance.

Bottom line: AI may generate drafts, workflows, or evaluations, but legally enforceable trust must still be human-certified.

Engineer dissecting AI in steampunk style by Ralph Losey.

🧠 AI Engineer — Dissecting Feasibility and Tech-Skill Transitions

The article outlines plausible near-future roles such as AI plumbers, virtual twin managers, and integration specialists — all of which track well with emerging enterprise needs. From a systems engineering standpoint, each represents a layering of human interface over increasingly agentic AI.

A role like “AI plumber” is especially apt: as we move to multi-agent, multi-modal systems, debugging becomes less about code and more about tracing decision nodes through dynamic graphs. Tools like LangSmith or OpenTelemetry for AI will support this, but human oversight must close the loop.

Equally pragmatic is the “AI trainer” role, particularly in retrieval-augmented generation (RAG) settings where corporate data must be contextualized with extreme care. Hallucination risk escalates when source curation is poor — an engineer versed in prompt architectures and data labeling becomes indispensable.

However, I’d caution that some roles (e.g., “AI personality director”) may prove transient or cosmetic. Unless this impacts UX or regulatory tone (e.g., health chatbots), it will likely be a rebranded comms function.

Still, the job market is aligning. “AI consultant” and “integration lead” roles are skyrocketing per LinkedIn data. These aren’t speculative—they’re already hiring.

Human taste guiding AI image by Losey guiding AI.

✨ Creative Storyteller — Reframing Jobs Around Taste, Not Tasks

This article is really about the return of judgment as labor. The shift from doing to curating, from generating to refining, is the beating heart of the “taste economy” the author envisions.

Take the imagined roles of “story designer,” “differentiation designer,” or “world builder.” These all recast creativity as orchestration — selecting from near-infinite AI outputs to assemble emotionally resonant or brand-coherent narratives. In storytelling terms, we’re moving from penmanship to direction.

The Rick Rubin anecdote nails it: craft may fade, but curation ascends. Rubin’s “confidence in his taste” mirrors what’s being asked of tomorrow’s knowledge workers. We may not sculpt every sentence, but we’ll decide which sentences shape the message.

This resonates with media trends already: TikTok creators “direct” AI-generated visuals, musicians remix AI stems, and even journalists act more like narrative editors than sole authors. The job isn’t dying—it’s mutating.

And taste isn’t just aesthetic. It’s ethical, strategic, and human. As the article wisely closes: “We are the designers of our AI future. Let’s hope we have great taste.”

Robot Puppet Theatre in Steampunk style by Losey’s AI puppets.

🛰 Data Visionary — Mapping the Labor Realignment

The piece offers more than speculative roles—it maps a nascent transformation in the labor-capital interface. It posits three durable human moats: trust, integration, and taste. This triptych aligns with observed market inflections.

Trust anchors around accountability and transparency: humans ensure AI outputs meet institutional thresholds. Integration speaks to workflow orchestration—systems-thinking roles that bridge software and organizational context. Taste, the most abstract, becomes a differentiator in saturated generative landscapes.

This trifocal framework is not utopian. It mirrors what Erik Brynjolfsson calls the “complementarity imperative”: value will accrue where humans can amplify machine leverage, not where they replicate it. The World Economic Forum’s projection of 11 million new jobs by 2030 versus 9 million displaced jobs captures the macro realignment. [EDITOR: Future of Jobs Report 2025 (WEF, 01/7/25) at pg. 25 (“trends in AI and information processing technology are expected to create 11 million jobs, while simultaneously displacing 9 million others, more than any other technology trend.“)]

We must recognize, however, the uneven geography of adoption. As the article’s robotics example shows, access to “AI integrators” could determine regional economic divergence. Without policy scaffolding (training pipelines, local incentives), AI may exacerbate inequality.

Still, this is no mere displacement story. It is a diversification story, seeded with entrepreneurial possibility.

Fictitious vision of where new jobs may appear and old be replaced. Image by Ralph Losey using AI.

⚔️ The Contrarian

The panel indulges in optimism but skirts real constraints. “AI trainer”? Perhaps. But many orgs will under-invest in proper RAG or hallucination testing. “AI plumber”? Realistically, that’s a role for the already-employed IT person, not a new hire.

The legal take? Idealistic. In practice, AI auditors will lack statutory teeth unless regulation mandates it—and current efforts like the EU AI Act remain unenforced in key domains.

And “taste” as job defense? That’s privileged territory. It assumes stability in jobs where creative discretion is rewarded. But what of the displaced clerk, paralegal, or content writer who once did real intellectual work?

Lastly, from an ethical risk lens:

  • Bias: Are these new roles equally accessible? Or do they reinforce existing inequities?
  • Privacy: Will AI integrators mishandle sensitive data pipelines?
  • Regulatory: Could roles like “AI translator” mask liability via plausible deniability?

We risk over-indexing on shiny titles while ignoring structural frictions.

Contrarians tend to see the glass as half empty. Image by Losey.

Would you like to follow up, clarify, or pivot?


USER: Review each of the new jobs that the article mentions may be created by AI.

4o PANEL:

Here’s a comprehensive job-by-job breakdown of the new roles suggested in “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. The article groups these under three themes: Trust, Integration, and Taste. I’ve organized the roles accordingly and added brief descriptions for each.

🛡 TRUST-Based Roles

Roles focused on ensuring transparency, accountability, and human oversight:

  1. AI Auditor
    Verifies what AI systems are doing, why, and whether outcomes meet legal or operational standards.
  2. AI Translator
    Bridges technical and business domains by explaining AI behavior and outcomes to non-expert stakeholders.
  3. Fact Checker (AI-Aided)
    Verifies AI-generated content for errors, hallucinations, and compliance issues.
  4. Compliance Officer (AI-Aware)
    Reviews AI-generated outputs in sensitive domains like law, finance, and health to ensure regulatory alignment.
  5. Trust Authenticator / Trust Director
    Ensures users can rely on AI-generated outputs, enforcing policies for fairness, accuracy, and traceability.
  6. AI Ethicist / Ethics Board Member
    Constructs defensible logic for decisions made by AI, ensuring ethical oversight and transparency.
  7. Legal Guarantor
    A licensed or certified person who legally vouches for AI-assisted work, assuming liability (akin to a “sin eater”).
  8. Consistency Coordinator
    Validates uniformity of AI output across systems (e.g., ensuring consistent product visuals or messaging).
  9. Escalation Officer
    Handles cases where AI fails emotionally or functionally—especially in customer service and education.
Buyer and seller must establish trust in robots to make sale. Combined styles in image by Losey.

🛠 INTEGRATION-Oriented Roles

Roles aimed at connecting AI capabilities to organizational workflows and technical systems:

  1. AI Integrator
    Designs and implements AI adoption strategies within specific business functions or verticals.
  2. AI Plumber
    Diagnoses and fixes errors in layered AI systems, often dealing with agentic or multi-step workflows.
  3. AI Assessor
    Continuously evaluates different AI models for specific uses (e.g., determining hallucination rates or performance drifts).
  4. AI Trainer (Domain-Specific)
    Feeds proprietary data into AI systems and fine-tunes model outputs for organizational accuracy.
  5. AI Personality Director
    Curates the “voice” and tone of an organization’s AI agents to match brand identity or customer expectations.
  6. Drug-Compliance Optimizer
    Uses AI to monitor and improve medication adherence—potentially in hospitals, insurance, or elderly care.
  7. AI-Human Evaluation Specialist
    Identifies tasks best suited for humans, AI, or hybrid approaches, especially in safety-critical industries.

🎨 TASTE-Driven Roles

Creative and strategic roles where human judgment and vision are irreplaceable:

  1. Product Designer (AI-Enabled)
    Expands beyond visuals to include brand strategy, user interaction, and narrative consistency, directing AI tools creatively.
  2. Article Designer
    Orchestrates content generated by AI—selecting structure, tone, and messaging across platforms.
  3. Story Designer
    In film, TV, or gaming, crafts the underlying plot or world that AI then fleshes out.
  4. World Designer
    Creates fictional or themed environments for marketing, gaming, or entertainment—used by AI to generate media.
  5. Human Resources Designer
    Shapes company culture, training, and employee experiences using AI-curated content and interfaces.
  6. Differentiation Designer
    Strategizes how a company distinguishes itself when competitors use the same powerful AI tools—aligning product, brand, and communication.
Steampunk taste team selected! Click to see celebration. Images, video by Losey.

Would you like me to visualize these 22 roles in a chart or cluster them by skillsets or industries?


USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

Ethan Mollick in Steampunk style by Losey.

4o PANEL:

🎓 Ethical AI Lawyer — Legal and Normative Accountability

The “Sin Eater” role, as Mollick frames it, carries significant legal implications. It reflects the enduring legal doctrine that accountability cannot be delegated to a machine. In tort law, liability flows to the agent of record—currently, only humans or legal entities can occupy that role.

This aligns with precedents such as Mata v. Avianca Inc., where AI misuse led to sanctions because no human properly vetted its output. The sin eater, therefore, functions not just as a compliance actor but as a liability firewall, absorbing both reputational and legal risk.

But this framing raises concerns. Can one be a guarantor without actual authorship? If AI makes the decision and the sin eater merely signs off, we risk creating paper accountability. Regulatory models like the EU AI Act are moving toward traceability mandates, but as yet, U.S. law lacks statutory recognition of such roles.

In short, the sin eater concept is both legally plausible and ethically fraught—it acknowledges human limits while asking them to bear machine risk.

Sin Eaters will employ many lawyers. Image by Ralph Losey AI.

🧠 AI Engineer — Technical Risk ≠ Human Blame

From a systems view, the “Sin Eater” is a human checksum: the last human in the loop before AI action becomes irreversible. It’s a reactive model of risk management—and one that’s arguably too brittle.

We need better forward defenses, not just cleanup personnel. Sin eaters are useful when explainability fails or outputs are inherently opaque. But the real problem is epistemic opacity: even experts can’t always trace LLM decisions.

Instead, we should engineer auditable AI stacks, use differential logging, and isolate decision modules for forensic review. Relying on a human “approver” as a fix-all invites rubber-stamping, not real scrutiny.

And technically, sin eaters will need tooling—model trace maps, hallucination flags, metadata dashboards. Without this infrastructure, we’re just asking someone to trust what they can’t parse.
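
To make that tooling point concrete, here is a minimal sketch of an audit wrapper: every model call is logged with a content hash so edits to the record are detectable in later forensic review. The `generate` function and file name are hypothetical stand-ins for any LLM client, not a real library's API.

    import hashlib
    import json
    import time

    AUDIT_LOG = "model_audit.jsonl"  # hypothetical log file

    def audited_call(generate, prompt: str, model_id: str) -> str:
        """Run a model call and append a tamper-evident audit record."""
        response = generate(prompt)  # 'generate' stands in for any LLM client
        record = {
            "ts": time.time(),
            "model": model_id,
            "prompt": prompt,
            "response": response,
            # Hashing the content makes silent edits to the log detectable.
            "sha256": hashlib.sha256((prompt + response).encode()).hexdigest(),
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")
        return response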

✨ Creative Storyteller — A Mythic Role, Made Literal

“Sin Eater” is such an evocative term because it draws from ritual and folklore. Historically, sin eaters consumed bread over a corpse, symbolically absorbing the sins of the dead. In this context, the sins are hallucinated citations, algorithmic bias, or wrongful denials.

That metaphor frames the sin eater as both guardian and scapegoat. They bear blame to protect the system’s perceived legitimacy. That’s powerful—but also dangerous. Mythic roles lack clear scope. Is the sin eater a validator? A fall guy? A moral compass?

We risk theatrical accountability: jobs that feel weighty but do little to prevent harm. The storytelling matters, because it shapes how we see responsibility. Are they heroes, martyrs, or cover-ups?

The role’s narrative weight is real, but it must be grounded in operational clarity.

Combined steampunk and medieval manuscript styles of historical Sin Eaters by Losey.

🛰 Data Visionary — Institutionalizing the “Last Human Standing”

Mollick’s metaphor reveals a deeper systemic truth: as AI scales, we need roles that enforce last-mile responsibility. The sin eater becomes the institutional locus of that burden.

In distributed systems, there’s a similar construct: the circuit breaker. When cascading failures occur, this mechanism stops the chain. The sin eater is a social circuit breaker. But who designs the thresholds? Who funds the liability insurance?
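
Since the panel leans on this analogy, a minimal sketch of the classic circuit-breaker pattern may help non-engineers see the mechanism. The names and thresholds here are illustrative only, not drawn from any production library.

    import time

    class CircuitBreaker:
        """Blocks further calls after repeated failures until a cooldown passes."""

        def __init__(self, max_failures: int = 3, cooldown_seconds: float = 30.0):
            self.max_failures = max_failures
            self.cooldown = cooldown_seconds
            self.failures = 0
            self.opened_at = None  # set when the breaker trips

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.cooldown:
                    raise RuntimeError("circuit open: call blocked")
                self.opened_at = None  # cooldown over; allow a retry
                self.failures = 0
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failures = 0  # any success resets the count
            return result

The sin eater, in this framing, is the human placed where the breaker sits: the point where cascading machine failure is supposed to stop.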

Scaling this requires:

  • Certification bodies to credential sin eaters
  • Policy frameworks (e.g., “AI final-signoff registries”)
  • Liability shields akin to whistleblower or auditor protections

Done well, this role could professionalize AI accountability, akin to CPAs or actuaries. But without those structures, it’s a fig leaf—blaming humans for systemic opacity.

Last Man Standing role shown in steampunk style by Losey. Only brave humans need apply for this job.

⚔️ The Contrarian

This entire discussion sanitizes an ugly reality: the sin eater is a liability sink, not a solution. It externalizes blame from the system designer to a human end-point. It masks architectural unaccountability.

And let’s check the ethical risk boxes:

  • Bias: Who becomes the sin eater? Likely junior, marginalized, or contractual workers—sacrificed for optics.
  • Regulatory: There’s no statute affirming this role. It’s an HR workaround, not a legal safeguard.
  • Hallucination: Will they detect model lies in high-stakes contexts (e.g., medical or legal reasoning)? Doubtful, unless they are domain experts.

In sum, the sin eater is not a visionary role—it’s institutional cowardice disguised as ethics.

Would you like to follow up, clarify, or pivot?

Nobody likes the contrarian lawyer. Steampunk image by Losey.

New Request to the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4o PANEL: Let’s break new ground. Each panelist below proposes a novel, AI-enabled human job that was not covered in the NYT article. These roles are speculative yet grounded in emerging technological and social trajectories.

🎓 Ethical AI Lawyer — Probabilistic Liability Architect

As AI systems increasingly operate on statistical inference (rather than deterministic logic), courts and regulators will need new doctrines for “probabilistic harm.” Enter the Probabilistic Liability Architect.

This legal role designs frameworks to allocate risk and assign partial liability when AI systems cause harm but no single actor is clearly negligent. For example:

  • A predictive policing system misidentifies a suspect with 70% confidence.
  • An autonomous delivery drone injures a bystander during path optimization.

The architect would:

  • Quantify and model AI system confidence thresholds.
  • Define shared responsibility regimes across vendors, deployers, and auditors.
  • Draft model legislation and policy for AI tort reform.

This job combines legal theory, policy design, and statistical reasoning—making it a bridge between law and machine logic.

Steampunk image by Losey.

🧠 AI Engineer — Synthetic Interaction Architect

As LLMs evolve into multi-agent swarms, orchestrating their interactions will become a full-time role. The Synthetic Interaction Architect manages the interplay of dozens or hundreds of autonomous AI agents assigned to a complex task (e.g., planning an international supply chain or modeling a political campaign).

Key functions:

  • Configuring inter-agent protocols (who talks to whom, when, and how).
  • Monitoring emergent behaviors and pruning dysfunctional collaboration patterns.
  • Designing reward structures that simulate human incentive alignment (e.g., competition, consensus, escalation).

Think of this as managing a hive mind—an ensemble of AI intelligences operating like a virtual team. The architect ensures synergy, not chaos.
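
As a toy illustration of what "configuring inter-agent protocols" might mean in code, here is a minimal round-based coordinator (assuming Python 3.9+). The agents are plain functions standing in for LLM-backed workers, and every name is hypothetical.

    from typing import Callable

    Agent = Callable[[str], str]

    def run_rounds(agents: dict[str, Agent],
                   protocol: list[tuple[str, str]],
                   task: str, rounds: int = 2) -> dict[str, str]:
        """protocol lists (sender, receiver) pairs; each round, a sender's
        latest output becomes its receiver's next input."""
        outputs = {name: task for name in agents}  # seed everyone with the task
        for _ in range(rounds):
            for sender, receiver in protocol:
                outputs[receiver] = agents[receiver](outputs[sender])
        return outputs

    # Usage: a planner hands drafts to a critic, which hands critiques back.
    agents = {"planner": lambda msg: f"plan({msg})",
              "critic": lambda msg: f"critique({msg})"}
    print(run_rounds(agents, [("planner", "critic"), ("critic", "planner")],
                     "model the supply chain"))

The architect's real job would be choosing the protocol itself: which pairs talk, in what order, and when to prune a loop that has become dysfunctional.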

AI Interaction Architect conducting a swarm of interactive AIs. Shown in steampunk style by Losey.

✨ Creative Storyteller — Emotion Wrangler

As AI becomes increasingly conversational and emotionally fluent, it will sometimes simulate emotions better than it understands them. This creates the need for an Emotion Wrangler—a specialist who trains, tests, and fine-tunes the emotional accuracy and appropriateness of AI systems.

They would:

  • Script nuanced emotional arcs for AI in education, therapy, or customer service.
  • Conduct “empathy audits” to ensure AIs express concern, humility, or apology convincingly—but not manipulatively.
  • Design cultural calibration modules (e.g., how grief or politeness differs across geographies).

This job safeguards the emotional realism of AI while guarding against artificial affect that deceives or coerces.

Humans can feel emotions, AI can’t. Emotion Wrangler in steampunk style by Losey.

🛰 Data Visionary — Synthetic Ecology Steward

Generative AI is already creating simulated environments for science, entertainment, and climate modeling. The Synthetic Ecology Steward curates and manages these ecosystems as long-running simulations with economic, environmental, and ethical stakes.

Examples:

  • A virtual planet used for testing urban planning at planetary scale.
  • A simulated biosphere for biotech AI to evolve new proteins.
  • An economic microcosm for experimenting with new market policies or governance rules.

This steward oversees:

  • The ethical boundaries of simulation (e.g., are emergent sentient entities possible?).
  • Cross-disciplinary calibration (from physics to sociology).
  • Integration with real-world policy levers or educational systems.

The role marries design, governance, and philosophy at the frontier of synthetic reality.

Synthetic Ecology Steward image by Visual Muse & Ralph Losey.

⚔️ The Contrarian — Moral Exit Facilitator

In the coming wave of AI-induced identity dislocation, some people will choose to opt out—not from society, but from systems that make them feel obsolete, surveilled, or coerced. The Moral Exit Facilitator helps these individuals ethically detach.

It’s not just a career counselor. It’s a combination of:

  • Data minimization advocate (helping users erase or obscure their digital footprint).
  • Self-sovereignty designer (tools for autonomy outside AI ecosystems).
  • Philosophical guide (navigating moral disengagement from pervasive automation).

The role critiques the system while enabling graceful withdrawal—offering humans a structured way to say: No, thank you.

Helping people get tech-free, escape AI addictions. Steampunk style by Losey.

Would you like these roles visualized in a chart or used to seed a scenario?


USER: “Seed a scenario?” Not sure what that means, so sure, go ahead and do that, seed a scenario.

4o PANEL: Great — “seeding a scenario” means I’ll take those new job ideas and embed them in a realistic future situation, crafting a brief scenario that explores how these roles might interact, conflict, or evolve. Think of it as a mini-future vignette for testing policy, tech, or narrative ideas.

🌍 Scenario Seed: “The Eden Loop Disruption”

📅 Year: 2031

🧭 Setting: A high-fidelity AI simulation called Eden Loop, used globally for climate-risk modeling, educational immersion, and synthetic biodiversity experimentation.

Steampunk image by Losey using Visual Muse.

Something goes wrong in the Eden Loop, and each of the new human roles works with the others to deal with the problems. Working through the hypothetical would be interesting in a role-play workshop, but it is beyond the scope of this essay.

Conclusion

History tells us that all technology breakthroughs disrupt existing society, and all put some people out of work at first, but overall they always create more new jobs than they destroy. There is no reason to think it will be any different this time with AI. And there are many reasons to think the new jobs created will be far more interesting, creative and enjoyable than the jobs lost. AI may even change the negative meaning that the word “work” carries for most people. Work will have far fewer boring drudgery tasks, because the robots will do all of that. Work and play may become synonymous.

Image of AI doing the hard dirty work while humans do the fun jobs. By Ralph Losey with his AI doing the boring drawing parts.

In the report that the ChatGPT 4o version of the Panel of Experts found, Future of Jobs Report 2025 (WEF, 1/7/25), the World Economic Forum predicted a significant shift in how work is divided between humans and machines. In 2025, employers reported that 47% of work tasks are performed by humans alone, 22% by machines and algorithms alone, and 30% by a hybrid combination. By 2030, employers expect these task-delivery proportions to be nearly evenly split across all categories: one-third human only, one-third machine only, and one-third hybrid.

The World Economic Forum report explains that this is only a task-delivery proportionality measurement; it does not take into account the amount of work getting done in each of the three task categories. The Future of Jobs Report 2025, at page 27, goes on to state:

The relevance of the third category approach, human-machine collaboration (or “augmentation”) should be highlighted: technology could be designed and developed in a way that complements and enhances, rather than displaces, human work; and, as discussed further in the next chapter (Box 3.1), talent development, reskilling and upskilling strategies may be designed and delivered in a way to enable and optimize human-machine collaboration.34 It is the investment decisions and policy choices made today that will shape these outcomes in the coming years.35

Hybrid Man and Machine New Jobs Report is looking good. Image by Losey in Steampunk style

The global employers polled favored a hybrid approach, as did the World Economic Forum expert analysis. Generative AI requires human supervision now and certainly for the next five years. To again quote the WEF Report, at the mentioned Box 3.1 at page 44:

Skills rooted in human interaction – including empathy and active listening, and sensory processing abilities – and manual dexterity, endurance and precision, currently show no substitution potential due to their physical and deeply human components. These findings underscore the practical limitations of current GenAI models, which lack the physicality to perform tasks that require hands-on interaction – although advances in robotics and the integration of GenAI into robotic systems could impact this in the future.

After that, as AI skills improve, there will, in our opinion and that of the WEF, still be a need for human collaboration. The need and the human skills required will change, but they will always remain. That is primarily because we are corporeal, living beings, with emotions, intuitions and other unique human powers. Machines are not. See my essay, The Human Edge: How AI Can Assist But Never Replace (1/30/25). Even if AI were someday to become conscious, with some silicon-based corporeality, perhaps even a kind of chemical-based feeling, it would still need to work with us to gain our unique human feel, our vibes and our evolutionary connection with life on Earth. Real life in time and space, not just a simulation.

Factory floor with cool AI features in steampunk style. Click here to see video by Losey.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me. For the first time, we also offer a Spanish version here.

Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother

April 1, 2025

by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.

George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.

Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.

Memory Holes

In Orwell’s 1984, a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.

Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Touch three buttons at once, ctrl-alt-delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.”

China’s Attempted Erasure of Tiananmen Square

In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Younger generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.

Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.

This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.

Hope Through Digital Redundancy and Psychological Resilience

Orwell could not conceive of our digital world, where truth is multiplied, freely copied, and stored globally. Thousands or millions of digital copies safeguard history, making complete erasure nearly impossible.

According to Katharine Trendacosta, Director of Policy and Advocacy at the well-respected Electronic Frontier Foundation:

If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.  

Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves. 

Trendacosta, The Internet Never Forgets: Fighting the Memory Hole (EFF, 1/30/25).

Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:

  1. Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
  2. Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
  3. Independent Journalism: Support credible journalism to provide trustworthy reference points.
  4. Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
  5. Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
  6. International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
  7. Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.

The Digital Memory Holes Today

Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber‘s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)

Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together. For instance, the surprise attack of Pearl Harbor, the horrors of fighting the Nazis and others in WWII and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.

Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.

Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of The Atlantic and others. Although some recent small attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government in place for centuries. It has always had a two-party system of government. Even the Chinese government, where only one party has ever been allowed, the Communist Party, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.

The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.

Leveraging Internet Archives: The Wayback Machine

Internet archival services, notably the Internet Archive’s Wayback Machine, are powerful allies against digital historical revisionism. The Wayback Machine currently has over 916 billion web pages stored, including government websites. For good background on the Internet Archive’s work to preserve history, see this recent article: As the Trump administration purges web pages, this group is rushing to save them (NPR, 3/23/25).

According to the NPR article, the Internet Archive has copies of all of the government websites that were later taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and has since been taken down from their website. No doubt there are now many, many copies of it online, especially in the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.

This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” Internet Archive (3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See, e.g., the e-Discovery Team Site Map for the year 2024.

I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.

This is the Internet Archive explanation of the interactive map:

This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.
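
Readers can also query the archive programmatically. Here is a minimal sketch using the Internet Archive’s public availability endpoint (documented at archive.org/help/wayback_api.php) to find the snapshot of a page closest to a given date; the helper name is my own.

    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url: str, timestamp: str = ""):
        """Return the closest archived snapshot record for url, or None.
        timestamp is optional, in YYYYMMDD form (e.g., '20240101')."""
        query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
        endpoint = f"https://archive.org/wayback/available?{query}"
        with urllib.request.urlopen(endpoint) as resp:
            data = json.load(resp)
        return data.get("archived_snapshots", {}).get("closest")

    snap = closest_snapshot("edrm.net", "20240101")
    if snap and snap.get("available"):
        print(f"Archived {snap['timestamp']}: {snap['url']}")
    else:
        print("No archived snapshot found.")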

It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.

It remains to be seen what role the LLMs’ vacuuming up of internet data will play in all this. They have been trained at specific times on internet data, and presumably all of the original training data is still preserved. Along those lines, note that the image below was created by ChatGPT 4o in response to a request to show a misinformation image; it generated the classic Tiananmen Square image on the right. It knows the truth.

Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus, its intelligence and resilience improve every week.

Conclusion

Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.

Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.

The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.


I give the last word, as usual, to the Gemini twin podcasters that summarize the article. Echoes of AI on: “Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.” Hear two Gemini AIs talk about all of this for 12 minutes. They wrote the podcast, not me. 

Ralph Losey Copyright 2025. All Rights Reserved.