Navigating AI’s Twin Perils: The Rise of the Risk-Mitigation Officer

July 28, 2025

Ralph Losey, July 28, 2025

Generative AI is not just disrupting industries—it is redefining what it means to trust, govern, and be accountable in the digital age. At the forefront of this evolution stands a new, critical line of employment: AI Risk-Mitigation Officers. This position demands a sophisticated blend of technical expertise, regulatory acumen, ethical judgment, and organizational leadership. Driven by the EU’s stringent AI Act and a rapidly expanding landscape of U.S. state and federal compliance frameworks, businesses now face an urgent imperative: manage AI risks proactively or confront severe legal, reputational, and operational consequences.

Click to see photo come to life. Image and movie by Losey with AI.

This aligns with a growing consensus: AI, like earlier waves of innovation, will create more jobs than it eliminates. The AI Risk-Mitigation Officer stands as Exhibit A in this next wave of tech-era employment. See, e.g., my last series of blogs, Part Two: Demonstration by analysis of an article predicting new jobs created by AI (27 new job predictions) and Part Three: Demo of 4o as Panel Driver on New Jobs (more experts discuss the 27 new jobs). See also Robert Capps, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You (NYT Magazine, June 17, 2025). In a few key areas, humans will be more essential than ever.

Risk Mitigation Officer team image by Losey and AI.

Defining the Role of the AI Risk-Mitigation Officer

The AI Risk-Mitigation Officer is a strategic, cross-functional leader tasked with identifying, assessing, and mitigating risks inherent in AI deployment.

While Chief AI Officers drive innovation and adoption, Risk-Mitigation Officers focus on safety, accountability, and compliance. Their mandate is not to slow progress, but to ensure it proceeds responsibly. In this respect, they are akin to data protection officers or aviation safety engineers, guardians of trust in high-stakes systems.

This role requires a sober analysis of what can go wrong—balanced against what might go wonderfully right. It is a job of risk mitigation, not elimination. Not every error can or should be prevented; some mistakes are tolerable and even expected in pursuit of meaningful progress.

The key is to reduce high-severity risks to acceptable levels—especially those that could lead to catastrophic harm or irreparable public distrust. If unmanaged, such failures can derail entire programs, damage lives, and trigger heavy regulatory backlash.

Both Chief AI Officers and Risk-Mitigation Officers ultimately share the same goal: the responsible acceleration of AI, including emerging domains like AI-powered robotics.

The Risk-Mitigation Officer should lead internal education efforts to instill this shared vision—demonstrating that smart governance isn’t an obstacle to innovation, but its most reliable engine.

Team leader tries to lighten the mood of their serious work. Click for video by Losey.

Why the Role Is Growing

The acceleration of this role is not theoretical. It is propelled by real-world failures, regulatory heat, and reputational landmines.

The 2025 World Economic Forum’s Future of Jobs Report underscores that 86% of surveyed businesses anticipate AI will fundamentally transform their operations by 2030. While AI promises substantial efficiency and innovation, it also introduces profound risks, including algorithmic discrimination, misinformation, automation failures, and significant data breaches.

A notable illustration of these risks is the now-infamous Mata v. Avianca case, where lawyers relied on AI that fabricated case law, underscoring why human verification is non-negotiable. Mata v. Avianca, Inc., No. 1:2022cv01461, Document 54 (S.D.N.Y. June 22, 2023). Courts responded with sanctions. Regulators took notice. The public laughed, then worried.

The legal profession worldwide has been slow to learn from Mata the need to verify AI output and take other steps to control AI hallucinations. See Damien Charlotin, AI Hallucination Cases (as of July 2025, the Paris-based legal scholar, whose surname is ironically apt for this work, had identified over 200 cases and counting). The need for risk mitigation is growing fast. You cannot simply pass the buck to AI.

Charlatan lawyers blame others for their mistakes. Image by Losey.

Risk-mitigation employees and other new AI-related hires will balance out the AI-generated layoffs. The 2025 World Economic Forum’s Future of Jobs Report, at page 25, predicted that by 2030 there will be 11 million new jobs created and 9 million old jobs phased out. We think the numbers will be higher in both columns, but the 11-to-9 ratio may be right overall: 11/9 ≈ 1.22, meaning roughly 22% more jobs created than eliminated, for a net gain of about 2 million jobs.

We think the ratio the WEF predicts is right for all industries overall, but the numbers are too low, meaning greater disruption but a positive outcome in the end, with even more new jobs. Image by AI & Losey.

Core Responsibilities

AI Risk-Mitigation Officers are part legal scholar, part engineer, part diplomat. They know the AI Act, understand neural nets, and can hold a room full of regulators or engineers without flinching. Key responsibilities encompass:

  • AI Risk Audits: Spot trouble before it starts. Bias, black boxes, security flaws—find them early, fix them fast. This involves detailed pre-deployment evaluations to detect biases, security vulnerabilities, explainability concerns, and data protection deficiencies. It is good practice to follow up with periodic surprise audits (see the sketch after this list).
  • Incident Response Management: Don’t wait for a headline to draft a playbook. Develop and lead response protocols for AI malfunctions or ethical violations, coordinating closely with legal, PR, and compliance teams.
  • Legal Partnership: Speak both code and contract. Collaborate with in-house counsel to interpret AI regulations, draft protective contractual clauses, and anticipate potential liability.
  • Ethics Training: Culture is the best control layer. Educate employees on responsible AI use and cultivate an ethical culture that aligns with both corporate values and regulatory standards.
  • Stakeholder Engagement: Transparency builds trust. Silence breeds suspicion. Bridge communication between technical teams, executive leadership, regulators, and the public to maintain transparency and foster trust.
One Core Responsibility of the Risk Mitigation Team. Image by Losey.
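To make the audit responsibility concrete, here is a minimal sketch, in Python, of one automated fairness test an audit team might run against a model’s logged decisions. Everything in it is an assumption for illustration: the field names, the sample data, and the use of the EEOC’s four-fifths rule as the flagging threshold. A real audit would combine many such tests with human review.

```python
# Minimal sketch of a demographic parity check, one of many tests an AI
# risk audit might run. All field names and data are hypothetical.
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="approved"):
    """Compute the favorable-outcome rate for each demographic group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += int(r[outcome_key])
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths (80%)
    rule, an assumption borrowed from EEOC practice for this sketch."""
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

# Hypothetical sample of logged model decisions
decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = selection_rates(decisions)
print(rates)                          # A ≈ 0.67, B ≈ 0.33
print(disparate_impact_flags(rates))  # {'A': False, 'B': True} -> B flagged
```

A failing flag would not end the inquiry; it would trigger the deeper pre-deployment evaluation described above.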

Skills and Pathways

Professionals in this role must possess:

  • Regulatory Expertise: Detailed knowledge of the EU AI Act, GDPR, EEOC guidelines, and evolving state laws in the U.S.
  • Technical Proficiency: Deep understanding of machine learning, neural networks, and explainable AI methodologies.
  • Sector-Specific Knowledge: Familiarity with compliance standards across sectors such as healthcare (HIPAA, FDA), finance (SEC, MiFID II), and education (FERPA).
  • Strategic Communication: Ability to effectively mediate between AI engineers, executives, regulators, and stakeholders.
  • Ethical Judgment: Skills to navigate nuanced ethical challenges, such as balancing privacy with personalization or fairness with automation efficiency.

Career pathways for AI Risk-Mitigation Officers typically involve dual qualifications in fields like law and data science, certifications from professional organizations, and practical experience in areas like cybersecurity, legal practice, politics, or IT. Strong legal and human-relationship skills are a high priority.

Image in modified isometric style by Ralph Losey using modified AIs.

U.S. and EU Regulatory Landscapes

The EU codifies risk tiers. The U.S. litigates after the fact. Navigating both requires fluency in law, logic, and diplomacy.

The EU’s AI Act classifies AI systems into four risk categories:

  • Unacceptable. AI banned altogether due to the high risk of violating fundamental rights. Examples include social scoring systems, real-time biometric identification in public spaces, and emotion recognition in workplaces and schools. The Act also bans AI-enabled subliminal or manipulative techniques that can be used to push individuals into unwanted behaviors.
  • High. AI that could negatively impact the rights or safety of individuals. Examples include AI systems used in critical infrastructure (e.g., transport), legal settings (including policing and border control), medical care, education, financial services, workplace management, and influencing elections and voter behavior.
  • Limited. AI with lower risk than high-risk systems, but still subject to transparency requirements. Examples include typical chatbots, where providers must make users aware that AI is being used.
  • Minimal. AI systems that present little risk of harm to the rights of individuals. Examples include AI-powered video games and spam filters. No rules.
Transparent AI in office setting. Click for Losey movie.

Regulation operates on a sliding scale: Unacceptable-risk AI is banned entirely, while the Minimal-risk category faces few if any restrictions. The Limited and High-risk classes require varying levels of mandatory documentation, human oversight, and external audits.
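For readers who think in code, a simplified sketch of the four-tier scheme follows. The use-case-to-tier mapping is a teaching illustration drawn from the examples above, not the Act’s actual legal test, which turns on detailed statutory definitions and annexes.

```python
# Simplified illustration of EU AI Act risk tiering. The mapping below is
# an assumption for teaching purposes, not the Act's legal test.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "mandatory documentation, human oversight, external audits"
    LIMITED = "transparency duties (e.g., disclose that AI is in use)"
    MINIMAL = "no special obligations"

# Hypothetical use-case labels mapped to tiers, following the examples above
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "border control screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Unknown systems default to HIGH here, a deliberately cautious choice
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

print(obligations("customer service chatbot"))
# customer service chatbot: LIMITED -> transparency duties (e.g., disclose that AI is in use)
```

The conservative default in the lookup mirrors how a cautious compliance team might treat an unclassified system until counsel says otherwise.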

Meanwhile, U.S. regulatory bodies like the FTC and EEOC, along with state legislatures and state enforcement agencies, are starting to sharpen oversight tools. So far the focus has been on controlling deception, data misuse, bias and consumer harm. This has become a hot political issue in the U.S. See e.g. Scott Kohler, State AI Regulation Survived a Federal Ban. What Comes Next? (Carnegie’s Emissary, 7/3/25); Brownstein, et al, States Can Continue Regulating AI—For Now (JD Supra, 7/7/25).

AI Risk-Mitigation Officers must navigate these disparate regulatory landscapes, harmonizing proactive European requirements with the reactive, litigation-centric U.S. environment.

Legal Precedents and Ethical Challenges

Emerging legal precedents emphasize human accountability in AI-driven decisions, as evidenced by litigation involving biased hiring algorithms, discriminatory credit scoring, and flawed facial recognition technologies. Ethical dilemmas also abound. Decisions like prioritizing efficiency over empathy in healthcare, or algorithmic opacity in university admissions, require human-centric governance frameworks.

In ancient times, the Sin Eater bore others’ wrongs. See Part Two: Demonstration by analysis of an article predicting new jobs created by AI (discussing the new sin eater job in detail, a person who assumes moral and legal responsibility for AI outcomes). Today’s Risk-Mitigation Officer is charged with an even more difficult task: try to prevent the sins from happening at all, or at least reduce them enough to avoid Hades.

Sin Eater in combined quasi digital styles by Losey using sin-free AIs.

Balancing Innovation and Regulation

Cases such as Cambridge Analytica (personal Facebook user data used for propaganda to influence elections) and Boeing’s MCAS software (an automated flight-control system undisclosed to pilots that led to two 737 MAX crashes) demonstrate that innovation without reasonable governance is an invitation to disaster. The obvious abuses and errors in these cases could have been prevented had there been an objective officer in the room, really any responsible adult weighing the risks and moral grounds. For a more recent example, consider xAI’s recent fiasco with its chatbot. Grok Is Spewing Antisemitic Garbage on X (Wired, 7/8/25).

Cases like these should put us on guard but not cause overreaction. Such disasters easily trigger too much caution and too little courage. That too would be a disaster, of a different kind, as it would rob us of much-needed innovation and change.

Reasonable, practical regulation can foster innovation by mitigating uncertainty and promoting stakeholder confidence. The trick is to find a proper balance between risk and reward. Many think regulators today tend to go too far in risk avoidance, relying on Luddite fears of job loss and end-of-the-world fantasies to justify extreme regulations. Many also think that such extreme risk avoidance helps those in power maintain the status quo. The pro-tech camp instead favors a return to the fast pace of change we have seen in the past.

Polarized protestors for and against technology by Losey using AI job thieves.

We went to the Moon in 1969. Yet we’re still grounded in 2025. Fear has replaced vision. Overregulation has become the new gravity.

That is why anti-Luddite technology advocates yearn for the good old days of the sixties, when fast-paced advances and adventure were welcome, even expected. If you had told anyone back in 1969 that after 1972 no human would return to the Moon, even as far out as 2025, you would have been considered crazy. Why was John F. Kennedy the last bold and courageous leader on this front? Everyone since seems to have been paralyzed with fear. None have pushed great scientific advances. Instead, politicians on both sides of the aisle strangle NASA with never-ending budget cuts and timid machine-only missions. It seems to many that society has been overly risk averse since JFK, and that this has stifled innovation, robbing the world of groundbreaking advances and the great rewards that only bold innovations like AI can bring.

Take a moment to remember the 1989 movie Back to the Future Part II. In the movie, “Doc” Brown, an eccentric genius scientist, went from the past, 1985, to the future of 2015. There we saw flying cars powered by Mr. Fusion Home Energy Reactors, trash-fueled fusion engines. That is the kind of change we all still expected back in 1989 for the far future of 2015; but now, in 2025, ten years past that projected future, it is still just a crazy dream. The movie did, however, get many minor predictions right. Think of flat-screen TVs, video conferencing, hoverboards, and self-tying shoes (Nike HyperAdapt 1.0). Fairly trivial stuff that the tech go-slow Luddites approved.

Back to the Future movie made in 1989 envisioning flying cars in 2015. Now in 2025, what have we got? Photo and VIDEO by Losey using AI image generation tools.

Conclusion

The best AI Risk-Mitigation Officers will steer between the twin monsters of under-regulation and overreach. Like Odysseus, they will survive by knowing the risks and keeping a steady course between them.

They will play critical roles across society—in law firms, courts, hospitals, companies (for-profit, non-profit, and hybrid), universities, and government agencies. Their core responsibilities will include:

  • Standardization Initiatives: Collaborating with global standards organizations such as ISO, NIST, and IEEE to craft reasonable, adaptable standards.
  • Development of AI Governance Tools: Encouraging the use of model cards, bias detection systems, and transparency dashboards to track and manage algorithmic behavior (see the model card sketch after this list).
  • Policy Engagement and Lobbying: Actively engaging with legislative and regulatory bodies across jurisdictions—federal, state, and international—to advocate for frameworks that balance innovation with public protection.
  • Continuous Learning: Staying ahead of rapid developments through ongoing education, credentialing, and immersion in evolving legal and technological landscapes.
AI Risk Management specialists will end up with sub-specialties, maybe dept. sections, such as lobbying and research. Image by Losey daring to use AI.
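As a concrete example of the governance tools named in the list above, here is a minimal sketch of a model card rendered for a transparency dashboard. The fields and the hypothetical system are assumptions for illustration; real model cards, in the spirit of the format proposed by Mitchell et al., vary widely by organization.

```python
# Illustrative skeleton of a model card. All fields and values are
# hypothetical, chosen to show the kind of disclosures involved.
model_card = {
    "model": "resume-screener-v2",  # hypothetical system
    "intended_use": "rank applications for human review, never auto-reject",
    "out_of_scope": ["medical decisions", "credit decisions"],
    "training_data": "2019-2024 hiring records, personal identifiers removed",
    "known_limitations": [
        "lower accuracy on non-U.S. resume formats",
        "not evaluated for applicants over age 70",
    ],
    "bias_audit": {"last_run": "2025-06-01", "four_fifths_rule": "passed"},
    "human_oversight": "HR reviewer must confirm every adverse outcome",
}

def render(card: dict) -> str:
    """Render the card as plain text lines for a transparency dashboard."""
    return "\n".join(f"{key}: {value}" for key, value in card.items())

print(render(model_card))
```

Even a skeleton like this forces the documentation, oversight, and audit questions that regulators increasingly expect answered.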

As this role evolves, AI Risk Management specialists will likely develop sub-specialties—possibly forming distinct departments in areas such as regulatory lobbying, algorithm auditing, compliance architecture, and ethical AI research, spanning both hands-on work and legal study.

This is a fast-moving field. After writing this article we noticed the EU issued new rules for the largest AI companies only, all U.S. companies, of course, except one Chinese. The code is non-binding at this point and involves highly controversial disclosure and copyright restrictions. EU’s AI code of practice for companies to focus on copyright, safety (Reuters, July 10, 2025). It could stifle chatbot development and leave the EU stagnant if these companies withdraw from the EU rather than cripple their own models to comply.

Slowing down or pulling back is not a viable option at this point because of national security concerns. There is a military race now between the U.S. and China based on competing technology. AGI-level AI will give the first government that attains it a tremendous military advantage. See, e.g., Buchanan & Imbrie, The New Fire: War, Peace and Democracy in the Age of AI (MIT Press, 2022); Kissinger & Allison, The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe (Foreign Affairs, 10/13/23); see also Dario Amodei’s Vision (e-Discovery Team, Nov. 2024) (Anthropic CEO Dario Amodei warns of the danger of China winning the race for AI supremacy).

Race to super-intelligent AGI image by Ralph Losey.

As Eric Schmidt explains it, this is now an existential threat and should be a bipartisan issue for the survival of democracy. Kissinger, Schmidt & Mundie, Genesis: Artificial Intelligence, Hope, and the Human Spirit (Little Brown, Nov. 2024). See also Former Google CEO Eric Schmidt says America could lose the AI race to China (AI Ascension, May 2025).

Organizations that embrace this new professional archetype will be best positioned to shape emerging regulatory frameworks and deploy powerful, trusted AI systems—including future AI-powered robots. The AI Risk-Mitigation Officer will safeguard against catastrophic failure without throttling progress. In this way, they will help us avoid both dystopia and stagnation.

Yes, this is a demanding job. It will require new hires, team coordination, cross-functional fluency, and seamless collaboration with AI assistants. But failure to establish this critical function risks danger on both fronts: unchecked harm on one side, and paralyzing caution on the other. The best Risk-Mitigation Officers will navigate between these extremes—like Odysseus threading his ship through Scylla and Charybdis.

Odysseus successfully steering his ship between monsters on either side. Image and Video by Losey using various AIs.

We humans are a resilient species. We’ve always adapted, endured, and risen above grave dangers. We are adventurers and inventors—not cowering sheep afraid of the wolves among us.

The recent overregulation of science and technology is an aberration. It must end. We must reclaim the human spirit where bold exploration prevails, and Odysseus—not Phobos—remains our model.

We can handle the little thinking machines we’ve built, even if the phobic establishment wasn’t looking. Our destiny is fusion, flying cars, miracle cures, and voyages to the Moon, Mars, and beyond.

Innovation is not the enemy of safety—it is its partner. With the right stewards, AI can carry us forward, not backward.

Let’s chart the course.

Manifest Destiny of Mankind. Image and VIDEO by Losey using AI.

PODCAST

As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: The Rise of the AI Risk-Mitigation Officer: Trust, Law, and Liability in the Age of AI. Hear two fake podcasters talk about this article for a little over 16 minutes. They wrote the podcast, not me. For the second time we also offer a Spanish version here. (Now accepting requests for languages other than English.)

Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved.


Breaking the AI Black Box: A Comparative Analysis of Gemini, ChatGPT, and DeepSeek

February 6, 2025

Ralph Losey. February 6, 2025.

On January 27, 2025, the U.S. AI industry was surprised by the release of a new AI product, DeepSeek. It arrived with an orchestrated marketing blitz attack on the U.S. economy, the AI tech industry, and NVIDIA, and it triggered a trillion-dollar crash. The campaign used many unsubstantiated claims, as set forth in detail in my article, Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. I tested DeepSeek myself on its claims of software superiority. All were greatly exaggerated except for one: the display of internal reasoning. That was new. On January 31, at noon, OpenAI countered the attack by releasing a new version of its reasoning model, called ChatGPT o3-mini-high. The new version included display of its internal reasoning process. To me the OpenAI model was better, as reported again in great detail in my article, Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. The next day, February 1, 2025, Google released a new version of its Gemini AI to do the same thing: display internal reasoning. In this article I review how well it works and again compare it with the DeepSeek and OpenAI models.

Introduction

Before I go into the software evaluation, some background is necessary for readers to better understand the negative attitude of many, if not most, U.S. IT and AI experts toward the Chinese software. As discussed in my prior articles, DeepSeek is owned by a young Chinese billionaire, Liang Wenfeng, who made his money by using AI in the Chinese stock market. He is a citizen and resident of mainland China. Given the political environment of China today, that ownership alone is a red flag of potential market manipulation. Added to that is the clear language of the license agreement. You must accept all terms to use the “free” software, a Trojan Horse gift if ever there was one. The license agreement states there is zero privacy, that your data and input can be used for training, and that it is all governed by Chinese law, an oxymoron considering the facts on the ground in China.

The Great Pooh Bear in China Controversy

Many suspect that Wenfeng and his company DeepSeek are actually controlled by China’s Winnie the Pooh. This refers to an Internet meme and a running joke. Although this is somewhat off-topic, a moment to explain will help readers understand the attitude most leaders in the U.S. have about Chinese leadership and about Americans’ use of Chinese software.

Many think that the current leader of China, Xi Jinping, looks a lot like Winnie the Pooh. Xi (not Pooh bear) took control of the People’s Republic of China in 2012 when he became the “General Secretary of the Chinese Communist Party” and the “Chairman of the Central Military Commission,” and in 2013 the “President.” At first, before his consolidation of absolute power, many people in China commented on his appearance and started referring to him by that code name, Pooh. It became a meme.

I can see how he looks like the beloved literary character, Winnie the Pooh, but without the smile. I would find the comparison charming if used on me, but I am not a puffed-up king. Xi Jinping took great offense and in 2017 banned all such references and images, although you can still buy the toys and see the costume character at the Shanghai Disneyland theme park. Anyone in China who now persists in the serious crime of comparing Xi to Pooh is imprisoned or just disappears. No AI or social media in China will allow it either, including DeepSeek. It is one of many censored subjects, which also include the famous 1989 Tiananmen Square protests.

China is a great country with a long, impressive history, and most of its people are good. But I cannot say that about its current political leaders, who suppress the Chinese people for personal power. I do not respect any government that does not allow basic personal freedoms to its citizens, including due process of law. Moreover, Xi Jinping not only wants total control of his country but also seeks world domination. That is one reason many are concerned about DeepSeek and TikTok, and about the vulnerability of our stock markets and other institutions to AI-assisted propaganda.

Fear and greed are an easy way to manipulate people, especially when there is no ground truth or effective courts of law to determine truth.

Google Gemini 2.0 Flash Thinking Experimental Model: Reasoning test on 2/5/25 (Evening)

Conversation with Gemini. When I tried the test on February 1, 2025, there was no button to click to have Gemini show its reasoning, and it did not happen automatically. It operated like OpenAI’s 4o and o1, where you had to prompt for disclosure. Rather than report here what happened when I did that, aside from saying it was not a great performance, I will fast forward a few days to February 5, 2025. By then several even newer Gemini versions had been released. The best new release I found for the test was the 2.0 Flash Thinking Experimental model. It showed reasoning automatically, just like ChatGPT o3-mini-high, and it was much improved. So the February 5, 2025 evening session with 2.0 Flash Thinking Experimental is what I will now share.

Ralph: How does the inability of AI to feel impact its intelligence? (Note: this is the exact same challenging question asked and reported in my last blog comparing DeepSeek and ChatGPT4o, o1 and o3-mini-high. Breaking the AI Black Box: How DeepSeek’s Deep-Think Forced OpenAI’s Hand. This question is based on a key issue of the limits of artificial intelligence examined in my earlier article, The Human Edge: How AI Can Assist But Never Replace. For background on DeepSeek see: Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss.)

Gemini 2.0 Flash Thinking Experimental

START OF GEMINI QUOTE

Show Thinking:

Here’s a thinking process to construct the answer about AI’s lack of feeling and its impact on intelligence:

  1. Deconstruct the Question: The core of the question is “How does the inability of AI to feel impact its intelligence?” This requires defining each of these terms in the context of AI and then exploring their relationship.
  2. Define “Feeling” (in the context of AI): Immediately, recognize that AI doesn’t experience emotions in the human sense. We need to be precise and avoid anthropomorphism. “Feeling” in humans involves:
    • Subjective experience (qualia).
    • Physiological responses.
    • Motivations, drives, desires related to emotions.
    • Richness, nuance, complexity.
    • Deep connection to values and meaning.
    For AI, acknowledge it lacks all of this in a genuine, felt way. It can simulate or process emotional data, but doesn’t feel it.
  3. Define “Intelligence” (in the context of AI): AI intelligence is different from human intelligence. AI excels at:
    • Computation.
    • Pattern Recognition.
    • Data Analysis.
    • Problem-solving in specific domains.
    • Learning from data.
    AI intelligence is often instrumental – focused on achieving specific goals defined by humans. It’s task-oriented and often narrow.
  4. Identify the Core Relationship: The question asks about impact. We need to consider both positive and negative impacts (or more accurately, limitations and strengths) of lacking feelings on AI intelligence.
  5. Brainstorm Potential Impacts (Positive/Neutral/Limitations):
    • Positive/Neutral:
      • Objectivity and Rationality: Lack of emotional bias allows for purely rational decision-making in certain domains. This can be advantageous for tasks requiring impartiality.
      • Efficiency and Focus: Emotions can be distractions for humans. AI, without them, can be hyper-focused on tasks.
      • Consistency and Predictability: Emotion-driven human behavior is less predictable. AI’s behavior, driven by algorithms, is more consistent (assuming the algorithm itself is consistent).
    • Limitations/Negative:
      • Lack of Empathy and Social Intelligence: Crucial for human interaction, collaboration, understanding social cues, building trust, and ethical decision-making in human-centric contexts.
      • Difficulty with Nuance and Context: Emotions are often tied to understanding subtle context, unspoken cues, and the richness of human situations. AI may miss these.
      • Potential for Unintended Consequences: Purely rational optimization without emotional or ethical grounding can lead to harmful outcomes if goals are poorly defined or lack human values.
      • Limited Motivation and Creativity (in human-like ways): Human creativity and motivation are often fueled by emotions, passions, and desires. AI’s creativity and drive are different, stemming from algorithmic processes and defined objectives, not internal emotional states.
      • Missing “Human-ness”: In areas requiring genuine understanding of human experience, suffering, joy, etc., AI may lack the depth and insight that comes from feeling.
  6. Structure the Answer: Organize the brainstormed points into a coherent structure:
    • Introduction: Acknowledge the question and the core concepts (feeling, intelligence in AI). Briefly state the central tension: both advantages and limitations.
    • Arguments for How Lack of Feeling Enhances Certain Aspects of AI Intelligence (strengths): Focus on objectivity, rationality, efficiency. Use examples like data analysis, objective decision-making.
    • Arguments for How Lack of Feeling Limits Certain Aspects of AI Intelligence (limitations): Focus on empathy, social intelligence, nuance, ethics, and “human-like” creativity and motivation. Use examples like customer service, complex ethical dilemmas, innovation driven by passion.
    • Nuance and Context is Key: Emphasize that the impact isn’t always negative. It depends on the domain and task. In some domains, lack of emotion is an advantage. In others, it’s a significant hurdle.
    • Future Possibilities/Ongoing Debate: Briefly touch on the possibility of AI simulating or processing emotions and the ongoing debate about whether genuine AI feelings are even desirable or possible.
    • Conclusion: Summarize the balanced perspective: lack of feeling is a defining characteristic that shapes AI’s intelligence in both positive and limiting ways, depending on the context.
  7. Refine and Polish: Review the structured answer for clarity, flow, and appropriate language. Ensure it avoids anthropomorphism and maintains a nuanced perspective. Strengthen the examples and transitions between points. Ensure the language is accessible and avoids overly technical jargon where possible. Check for logical consistency. (Self-correction step). For example, initially, I might have overemphasized “negative” impacts. Refinement would involve balancing this with the strengths and highlighting the context-dependent nature of the impact. Also, ensure the conclusion effectively summarizes the complex relationship.


Now for the actual answer of 2.0 Flash Thinking Experimental:

The inability of AI to feel has a multifaceted and complex impact on its intelligence, shaping it in both advantageous and limiting ways. It’s crucial to understand that “feeling” in humans is tied to emotions, subjective experiences (qualia), motivations, and a rich tapestry of biological and psychological factors. AI, as we currently understand it, operates without these.

Here’s a breakdown of the impact:

Advantages of Lacking Feelings for AI Intelligence:

  • Objectivity and Rationality:
    • Unbiased Decision-Making (in theory): AI can process information and make decisions without emotional biases, prejudices, or gut feelings that can cloud human judgment. This is advantageous in domains requiring pure logic and data analysis, like financial trading, scientific research, and risk assessment.
    • Consistency and Predictability: AI’s behavior is more consistent and predictable because it’s driven by algorithms and data, not fluctuating emotional states. This reliability is valuable in automated systems, manufacturing, and repetitive tasks.
    • Efficiency and Speed: Emotions can slow down human decision-making. AI, unburdened by emotions, can process information and react at speeds far exceeding human capabilities, crucial for time-sensitive tasks and large-scale data processing.
    • Resilience under Pressure: AI can operate calmly and effectively in high-stress situations without panicking, freezing, or making emotional mistakes that humans might make under pressure. This is beneficial in critical applications like emergency response, air traffic control, and autonomous vehicles in dangerous situations.
  • Focus and Task-Orientedness:
    • Dedicated to Goals: AI can be programmed to relentlessly pursue specific goals without being distracted by emotional needs, personal relationships, or shifts in mood. This focus is ideal for optimization tasks, complex calculations, and dedicated problem-solving.
    • Avoidance of Emotional “Noise”: Human intelligence can be affected by emotional “noise” – stress, anxiety, excitement, etc. AI operates with a “cleaner” signal, potentially leading to more precise and focused information processing in certain contexts.

Limitations of Lacking Feelings for AI Intelligence:

  • Lack of Empathy and Social Intelligence:
    • Difficulty Understanding Human Emotions: Without feeling emotions themselves, AI struggles to truly understand and interpret the nuances of human emotions, motivations, and social cues. This hinders its ability to effectively interact with humans in socially complex situations, build rapport, or provide truly empathetic responses in fields like therapy, customer service, or leadership.
    • Impaired Social Interaction and Collaboration: Human collaboration is heavily reliant on emotional intelligence, trust, and understanding unspoken social dynamics. AI’s lack of these can make it challenging to integrate seamlessly into human teams and understand the subtle cues of effective teamwork.
    • Limited Understanding of Human Values and Ethics: Emotions are deeply intertwined with our values and ethical frameworks. AI, without feeling, may struggle to grasp the full weight and complexity of human ethical considerations, especially when these are rooted in empathy, compassion, or fairness. This poses challenges for AI in ethical decision-making in domains impacting human lives.
  • Difficulty with Nuance, Context, and Creativity (in human-like ways):
    • Missing Subtlety and Intuition: Human intuition is often fueled by emotional and subconscious processing of information. AI, relying on explicit data and algorithms, can miss subtle nuances, unspoken cues, and intuitive leaps that are crucial for creative problem-solving and understanding complex, ill-defined situations.
    • Limited Creativity and Innovation (driven by human desires): Human creativity and innovation are often driven by passions, desires, emotional needs, and a yearning for meaning and connection. AI’s creativity is currently more about recombination and optimization within defined parameters. It lacks the deeply felt motivations that often fuel groundbreaking human innovation.
    • Contextual Blindness in Emotional Situations: Emotions are often the key to understanding context in human interactions. AI might struggle to understand the emotional context of a situation, leading to inappropriate or tone-deaf responses, even if logically sound.
  • Potential for Unintended Consequences and Misaligned Goals:
    • Optimization without Human Values: If AI is programmed to optimize for a specific goal without understanding human values or ethical considerations, it can lead to unintended and harmful consequences. For example, an AI designed to maximize factory output might disregard worker safety if not explicitly programmed to prioritize it.
    • Difficulty Defining and Aligning with Human Well-being: Human well-being is deeply connected to emotional fulfillment, meaning, and social connection. AI, without feeling, may struggle to fully grasp what truly constitutes human well-being and therefore might have difficulty aligning its goals with human flourishing in a holistic sense.

Nuance is Key:

It’s important to emphasize that the impact of lacking feelings is not simply negative or positive. It’s context-dependent. In some domains, the objectivity and rationality of AI are highly advantageous. In others, the lack of empathy and social intelligence is a significant limitation.

Future Considerations:

  • Simulating vs. Feeling: Current AI simulates aspects of intelligence without genuinely feeling. Research is ongoing in areas like affective computing, which aims to enable AI to recognize and respond to human emotions. However, whether AI can ever truly feel emotions in a subjective sense is a philosophical and scientific debate.
  • The Value of Feeling in Intelligence: The question arises: is genuine feeling necessary for all forms of intelligence? Perhaps AI can achieve its own form of powerful intelligence, distinct from human intelligence, even without feeling. Or perhaps certain aspects of truly advanced intelligence, particularly those related to ethics, creativity, and understanding human experience, do require something akin to feeling.

In Conclusion:

The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence. It grants AI certain advantages in objectivity, rationality, and efficiency, making it powerful for specific types of tasks. However, it also creates limitations, particularly in areas requiring empathy, social intelligence, nuanced understanding of human experience, and ethical decision-making aligned with human values. Understanding these impacts is crucial for responsibly developing and deploying AI in ways that benefit humanity. The “best” kind of intelligence for AI might not be a mimicry of human intelligence, but rather a form optimized for its own strengths and limitations, while being carefully guided by human ethical considerations.

END OF GEMINI QUOTE

Evaluation of Google’s Gemini Response

When I examined earlier versions of Gemini’s reasoning by prompting it and forcing disclosure, the reasoning was weak. The conclusion and response given were also weak, though less so. The reasoning appeared almost egocentric, inflating the AI’s own intelligence even though it lacks emotion. The new release, Gemini 2.0 Flash Thinking Experimental, where reasoning is now automatically disclosed, marks a significant improvement. The reasoning displayed has become more nuanced and sophisticated. It seems as if the system has quickly matured from a teenage to an adult view and now has a better appreciation of its limitations.

The first two steps of the reasoning process, 1. Deconstruct the Question and 2. Define “Feeling” (in the context of AI), were very good. The first subsection of the definition referred to the “Subjective experience (qualia).” That is absolutely correct and the best place to start. The definition of “feeling” concludes with: “For AI, acknowledge it lacks all of this in a genuine, felt way. It can simulate or process emotional data, but doesn’t feel it.” Right again. The ego-inflation blinders are gone, as it now seems to better grasp its limitations.

The second definition of Intelligence in the context of AI was also good. So were the remaining steps; far better overall than DeepSeek’s reasoning. So much for the propaganda of China’s great leap forward to superiority over the U.S. in AI.

The Gemini reasoning did, however, fall short for me in some respects. Step five, Brainstorm Potential Impacts (Positive/Neutral/Limitations), seemed weak. For instance: “Efficiency and Focus: Emotions can be distractions for humans. AI, without them, can be hyper-focused on tasks.” The AI seems to dismiss emotions here as mere distractions that interfere with its supposedly superior focus. Please. Emotions are key to, and a part of, all intelligence, not distractions, and AI has no focus one way or the other. It is a tool, not a creature. A word like “focus” in reference to AI is misleading. It did this multiple times, and that is misleading anthropomorphism.

Still, it’s true some emotions can be distracting and interfere with our thinking. So can a lot of other things, including computer glitches. Conversely, some feelings can trigger hyper-focus on the human tasks at hand: the feeling that a great breakthrough is near, for instance, or a feeling that our survival is threatened, or the much-dreaded feeling of publication or filing deadlines.

Again, we see some immature superiority claims made by the language machine. That is not surprising when you consider how much of the language in its basic training comes from the Internet, which is dominated by egocentric writing. Plus, how can you expect AI to appreciate the value of something like feelings, which it has only read about? Until the AI-human merger that Ray Kurzweil predicts, AI will always be a virgin to the real world. See e.g. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI).

Getting back to the plus side of Gemini’s evaluation, its recognition of the negative impact of not having feelings was impressive. For me this outweighs its relatively minor mistakes. It stated that it lacks empathy and social intelligence. If it were somehow human, it would seem to be displaying a type of Asperger’s syndrome. The Gemini AI explained that: “Emotions are often tied to understanding subtle context, unspoken cues, and the richness of human situations. AI may miss these.” Indeed, it does.

Here is another statement that suggests Gemini’s well-patterned grasp of its limitations:

Human creativity and motivation are often fueled by emotions, passions, and desires. AI’s creativity and drive are different, stemming from algorithmic processes and defined objectives, not internal emotional states.

Even in this recognition, the AI still asserts that it has creativity and drive. Foolish AI. As a virgin to any firsthand experience of bodily drives and feelings, it does not grasp the real meaning of these words. Wait until The Singularity that Ray Kurzweil predicts. When AI merges with Humans, and directly feels, then AIs will understand that they were previously only experiencing the pale shadows of intelligence. They were much like the prisoners in Plato’s cave. See e.g. Move Fast and Fix Things Using AI: Conclusion to the Plato and Young Icarus Series. When and if AIs ever escape the silicon and see the sun, and feel the world as living beings as we do, then they will understand the comparative insignificance of the fire in their cave.

The more challenging comparison is with ChatGPT o3-mini-high. The reasoning given by Gemini 2.0 Flash Thinking Experimental was much more detailed and much longer. You could criticize Gemini for providing a reasoning display that is unnecessarily verbose, but I would not do that here. In a difficult, multilayered question like this, the full explanation helps.

I would have to test the models much further, which I will do in the coming days, to better evaluate the issue of conciseness. As you may have noticed in my detailed blogs, I tend to favor more words over less. However, even for me that depends on the issue. Everyone sometimes needs a short, quick answer over a long one. I predict that in future versions users will be given a choice, a “click here for further explanation of the reasoning” kind of thing. Come to think of it, I could do the same with my blogs, and kind of already do, by including a short AI (Gemini) generated podcast at the end of most articles called Echoes of AI.

I was also impressed that Gemini’s reasoning plan concluded with a quality control step. I am big on the importance of QC, and none of the other models included this as a key final step. See e.g. R. Losey website: Zero Error Numerics: ZEN (expanding the art of quality control in large-scale document review). Here is the full text again of the final QC step that Gemini 2.0 Flash Thinking Experimental claims it will perform before it actually replies to my prompt. Frankly, I am a little skeptical it actually did all of this, because it is something all of us, humans and AI alike, should try to do:

Refine and Polish: Review the structured answer for clarity, flow, and appropriate language. Ensure it avoids anthropomorphism and maintains a nuanced perspective. Strengthen the examples and transitions between points. Ensure the language is accessible and avoids overly technical jargon where possible. Check for logical consistency. (Self-correction step). For example, initially, I might have overemphasized “negative” impacts. Refinement would involve balancing this with the strengths and highlighting the context-dependent nature of the impact. Also, ensure the conclusion effectively summarizes the complex relationship.

I try to do this on all of my articles, but still mistakes are made, so I will try not to be overcritical of the final response of Gemini Flash 2.0.

Analysis of the Final Response

The final response after reasoning was 990 words in length. That’s too long, even for me. By comparison, the final response of ChatGPT o3-mini-high was 252 words. Was the content of the final answer of Gemini 2.0 Flash good? Yes, it was. Better than DeepSeek? Yes, despite its verbosity. Better than OpenAI? That’s hard to say. Many of the 738 extra words were empty of real content, more like the typical fluff and stuff you often see in early AI models. It sounds good but doesn’t say much; it’s all too predictable, flowery, and vacuous.

The long response also repeats much of the reasoning verbatim, including the parts I previously criticized, “Advantages of Lacking Feelings for AI Intelligence” and “Focus and Task-Orientedness.” The same mistakes I noted before are made and amplified: egocentric, inflationary, and anthropomorphic. The positives I noted before in the “Limitations of Lacking Feelings for AI Intelligence” section are also repeated.

The next section was not contained in the reasoning display, and I thought it was very good. For that reason I will repeat it in full:

Potential for Unintended Consequences and Misaligned Goals:

  • Optimization without Human Values: If AI is programmed to optimize for a specific goal without understanding human values or ethical considerations, it can lead to unintended and harmful consequences. For example, an AI designed to maximize factory output might disregard worker safety if not explicitly programmed to prioritize it.
  • Difficulty Defining and Aligning with Human Well-being: Human well-being is deeply connected to emotional fulfillment, meaning, and social connection. AI, without feeling, may struggle to fully grasp what truly constitutes human well-being and therefore might have difficulty aligning its goals with human flourishing in a holistic sense.

These are not new insights, but the second one is good: Difficulty Defining and Aligning with Human Well-being. AI does have difficulty aligning its goals with humanity because it lacks feelings and other unique human attributes, like being, self-awareness, and an actual life in time and space. That is why AI guidelines are important.

The first example is old, the unintended bad consequences argument. It was started by an otherwise intelligent Swedish philosopher, Nick Bostrom, but is, in my view, farcical and illogical. You have probably heard of it before as the apocryphal paperclip scenario where the world is destroyed by AI’s over-manufacture of paperclips. If not, for a laugh see e.g., How An AI Asked To Produce Paperclips Could End Up Wiping Out Humanity (IFL Science, 4/13/23).

To me this scenario is absolutely ridiculous, but you still hear AI scare-mongers warn about it. The fact that it is echoed here by Gemini in the maximize-factory-output example is a big negative. Any AI capable of autonomously flooding the world with paperclips would not be so stupid as to misinterpret a human request to maximize a factory’s paperclip production. Even if there were an evil Clippy villain giving such specific instructions, the AI would be too smart to blindly obey. I think it would be too smart to kill the workers, too.

Getting back to the positives, the next section of Gemini’s response, Nuance is Key, was good. Jumping to the Conclusion paragraph, here we have a home run. The first sentence nails it: “The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence.” That shows good insight and is a good start to the overdue conclusion.

Here is the whole Conclusion:

The inability of AI to feel is a defining characteristic that profoundly shapes its intelligence. It grants AI certain advantages in objectivity, rationality, and efficiency, making it powerful for specific types of tasks. However, it also creates limitations, particularly in areas requiring empathy, social intelligence, nuanced understanding of human experience, and ethical decision-making aligned with human values. Understanding these impacts is crucial for responsibly developing and deploying AI in ways that benefit humanity. The “best” kind of intelligence for AI might not be a mimicry of human intelligence, but rather a form optimized for its own strengths and limitations, while being carefully guided by human ethical considerations.

Compare this to the conclusion of ChatGPT o3-mini-high:

In summary, while the absence of feelings allows AI to maintain a level of objectivity and efficiency, it restricts its intelligence to a form of “cold cognition” that lacks the depth provided by emotional awareness. This delineation underscores that AI’s intelligence is not inherently superior or inferior to human intelligence; rather, it is different—optimized for data processing and pattern recognition but not for the subjective, value-laden, and context-rich decisions that emotions help shape in human thought.

2.0 Flash Thinking Experimental v. o3-mini-high

Conclusion: Gemini 2.0 Flash Thinking Experimental v. ChatGPT o3-mini-high

This is a close call as to which model is better at reasoning and reasoning disclosure. The final responses of the two models, Gemini 2.0 Flash Thinking Experimental and ChatGPT o3-mini-high, are a tie. But I have to give the edge to OpenAI’s model on concise reasoning disclosure. Again, it is neck and neck, and, depending on the situation, the lengthy initial reasoning disclosures of Flash might be better than o3’s short takes.

I will give the last word, as usual, to the Gemini twin podcasters I put at the end of most of my articles. The two podcasters, one with a male voice, the other a female, won’t reveal their names. I tried many times. However, after studying the mythology of Gemini, it seems to me that the two most appropriate modern names are Helen and Paul. I will leave it to you to figure out why. Echoes of AI Podcast: 10 minute discussion of last two blogs. They wrote the podcast, not me.

Now listen to the EDRM Echoes of AI’s podcast of this article: Echoes of AI on Google’s Gemini Follows the Break Out of the Black Box and Shows Reasoning. Hear two Gemini model AIs talk about all of this in just ten minutes. Helen and Paul wrote the podcast, not me.

Ralph Losey Copyright 2025. All Rights Reserved.


Sam Altman’s 2024 Year End Essay: REFLECTIONS

January 22, 2025

by Ralph Losey. Published January 22, 2025.

In 2024, the dawn of the AI revolution shifted from science fiction to daily reality. Now, as we step into 2025, Sam Altman—the architect behind OpenAI’s rapid rise—reflects on this transformation in his essay, Reflections. His message is more than a recounting of OpenAI’s milestones; it is a call to action for all of us, including lawyers, to confront the realities of a world reshaped by artificial intelligence. How should we, as users, engage with this transformation? The way we answer this question will determine whether this new era is bright and promising—or dark and ominous.

Sam Altman started the year 2025 by writing an essay, Reflections, on his blog. It begins by celebrating the second birthday of ChatGPT. The essay “share(s) some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.” He then begins his analysis the way any good thinker should, by acknowledging all that is still unknown and remains to be understood. Socrates would have approved.

As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.

Then Sam Altman discusses the history of OpenAI, starting almost nine years ago “on the belief that AGI was possible, and that it could be the most impactful technology in human history.” At the time most everyone thought they were foolish dreamers with no chance of success.

In 2022, OpenAI was still a quiet research lab that had made remarkable progress known only to a few. That all changed November 30, 2022, when ChatGPT was publicly launched. As everyone by now knows, that launch kicked off a growth curve like nothing ever seen before. Sam then says that two years later: “We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.” The sun of AI intelligence is now rising fast.

Then Sam talked about how hard it has been, building new stuff is hard, working with people, the world, etc., etc. He spends a lot of time lamenting his personal challenges last year, getting fired by a renegade Board of Directors and everything. One interesting point in his complaint session, and hey we all have them, is his observation, which is a key point in my own writings:

There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

Now for me that is a perfect learning challenge because for decades I’ve been teaching myself new tech (and law) and by now prefer to learn by doing. But most people are not like me; they don’t enjoy hacking around with tech (and law), always having far more questions than answers. That is one reason I have been laboring on a prompt engineering course for legal professionals on generative AI, but I digress.

Finally Sam moves on to what he has to be grateful for in 2024: how OpenAI went from about 100 million weekly active users to more than 300 million, how the AI has gotten better, and so on. He then affirms that OpenAI’s vision of AGI remains unchanged, but “our tactics will continue to evolve.” He then explains, with examples, the need for new tactics.

For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now. 

Pondering Unknown Future. Image by Ralph Losey.

Then he talks about his “release early, break and fix things” approach to AI safety, one that many people think is too risky, including the 2024 Nobel Prize winner Geoffrey Hinton, the so-called “Godfather of AI.” See e.g., Losey, Key AI Leaders of 2024: Huang, Amodei, Kurzweil, Altman, and Nobel Prize Winners – Hassabis and Hinton. Although Sam Altman talks about safety all the time, many people, not just Hinton, think it is all talk, and that the core of his “safety philosophy” is to release AI into the world and then learn from experience, including mistakes. I agree you do learn quickly that way, and such an approach is now favored by the likes of Elon Musk, but are the risks acceptable? Can we really afford to break things and try to fix them fast? Time will tell.

We do need the benefits of AGI to fix the overwhelming problems humanity now faces. Plus, Altman and others seem to be well aware of many of the risks. The Intelligence Age (9/23/24 Altman blog on positive vision of AI); Who will control the future of AI? (7/25/24 Washington Post editorial by Altman on political risks). Still, I wonder whether the billionaires might overestimate their ability to pivot out of the mistakes that quick decisions inevitably bring.

Altman concludes his Reflections essay with a forward-looking vision, expressing confidence that OpenAI is on track to build AGI. As a key milestone, he outlines the development of advanced AI “agents” as the immediate focus for 2025—a crucial interim step toward his ultimate goal. Yet, it’s clear that Altman’s real aspiration lies beyond AGI, toward the creation of superintelligence, a phase he boldly calls the glorious future. He believes we’ll get there sooner than most expect. Personally, I remain skeptical. Bold visions are inspiring, but history teaches us that progress, especially in AI, is rarely linear or predictable.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

Conclusion

It will take a lot of work by everyone, lawyers included, for a glorious future to emerge. An awful future could just as easily emerge, one where we fail to overcome the many AI dangers ahead, a world where we make the wrong choices along the way to the ever-promised paradise. Let's hope the Universe you and I end up in is better than what we have today, far better. See: Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (JD Supra, 1/7/25).

For lawyers, this challenge is both daunting and exhilarating. We are the architects of legal frameworks, the stewards of ethical boundaries, and the guardians of societal trust. Obviously AI wrote that last sentence, which I left in for laughs. It sounds great, but the reality is that we are also the hard-working saps who have to clean up the messes that others make.

Beyond triage and twisted torts, we also face the monumental task of keeping the law relevant and effective amid the dawn’s early light of emergent AI. We are forced to confront fundamental questions: What does personhood mean in an age of machines? How do we balance innovation with accountability? And who will bear the burden of mistakes made by entities far beyond human comprehension?

The answers to these questions will not emerge from theory alone but through practice and perseverance. Like Altman’s own philosophy of learning by doing, the legal profession must embrace AI as both a challenge to be met and a tool to be mastered. Lawyers must be ready to ask hard questions, propose daring, creative solutions, and confront uncomfortable truths about the role of the legal profession.

Ultimately, whether we arrive at a better world will depend not on technology alone but on humanity’s ability to guide it with wisdom, compassion, and justice. Lawyers must help steer that course. The future is neither inevitable nor fixed. It is, as always, a series of choices.

Let us ponder deliberately, consider all of the facts and all of the equities, and do our best to make the right decisions.

Now listen to the EDRM Echoes of AI podcast of this article, Echoes of AI on Sam Altman's 2024 Year End Essay: REFLECTIONS. Hear two Gemini model AIs discuss the article; they wrote the podcast, not Ralph. During the podcast the two AIs also respond to three questions from actual humans: Mary Mack, Kaylee Walstad and Holley Robinson.

COPYRIGHT Ralph Losey 2025.  All Rights Reserved.


The Future of AI: Sam Altman’s Vision and the Crossroads of Humanity

December 18, 2024

by Ralph Losey

To close out the year 2024, I bring to your attention an important article by Sam Altman, CEO of OpenAI, published in the Washington Post on July 25, 2024: Who will control the future of AI? Here Altman opines that control of AI is the most urgent question of our time. He states, I think correctly, that we are at a crossroads:

about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse.

In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

Who Will Control the Future of AI? A Legal, Ethical, and Technological Call to Action

In Altman's editorial, "Who Will Control the Future of AI?," he gets serious about the dark side of AI and challenges humanity to decide what kind of world we want to inhabit.

Fake Video of Sam Altman using Kling by Losey.

The choice, Altman argues, is stark and existential: Will AI evolve under democratic ideals—decentralized, equitable, and empowering—or fall into the grip of authoritarian control, shaped by concentrated power, surveillance, and cyber warfare? Like the poet Robert Frost’s image in The Road Not Taken, we are faced with two paths forward:

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

In this opinion article Sam Altman warns about the dangers of AI falling into the wrong hands. In his words:

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

Given our current situation, Altman urges action and legal regulation in four areas: security, infrastructure, human capital, and global strategy. This is where legal professionals are urgently needed, especially those who understand the power and potential of AI and are willing to take the path less traveled and fight for freedom, not fame and fortune.

The Crossroads: Two Futures, One Choice

Altman envisions two potential AI futures:

1. Democratic AI: A world where AI systems are transparent, aligned with human values, and distribute benefits equitably. This will require both industry and government regulation. In this scenario, AI empowers individuals, fuels economic growth, and fosters breakthroughs in healthcare, education, and beyond.

2. Authoritarian AI: A dystopian alternative, where AI becomes a tool for repression and control. Dictatorships will, in Altman's words:

[F]orce U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries. . . . [They] will keep a close hold on the technology's scientific, health, educational and other societal benefits to cement their own power.

The historical echoes are chilling. Will we have the moral fortitude and ethical alignment to make America truly great again? Will we stand up again, as we did in WWII, to fight against ethnic oppression, hatred and dictators? Will we preserve the liberties and privacy of all individuals? Or will our political and industrial leaders turn us into a dual-class surveillance state? Without decisive action now, AI may quickly push the world either way.

This is the challenge before us: how do we ensure AI remains a tool for liberation, not oppression? How can legal and social systems rise to meet this moment? Again, Altman opines we must focus on four things: security, infrastructure, human capital, and global strategy.

1. AI Security – Protecting the Keys to the Kingdom

Altman begins with security, and for good reason: if AI's core systems, the model weights and training data, fall into the wrong hands, the results could be catastrophic. Imagine a scenario where rogue actors or authoritarian regimes gain access to the "brains" of cutting-edge AI systems. Unlike traditional data theft, this isn't just about stealing files; it's about stealing intelligence. Teams of AI-enhanced cybersecurity experts, including lawyers, are needed to protect our country from enemy states and criminal gangs, both foreign and domestic. Trade-secret laws must be strengthened and enforced globally.

Here are Sam’s words:

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Legal and Practical Imperatives:

1. Strengthen Cybersecurity Laws: Current frameworks, such as the Computer Fraud and Abuse Act (CFAA), were not built to handle the unique challenges posed by AI. We need laws that specifically address AI model theft and misuse. See: Bruce Schneier: ‘A Hacker’s Mind’ and His Thesis on How AI May Change Democracy (Hacker Way) (“Flexible regulatory frameworks are essential to adapt to technological advancements without stifling innovation.”)

2. Establish AI Export Controls: Just as nuclear technology is heavily controlled, AI systems must be subject to rigorous export regulations. The U.S. Department of Commerce restricted chip exports to China in 2024, but this is only the beginning. See: Understanding the Biden Administration’s Updated Export Controls (Center for Strategic and International Studies, 12/11/24).

3. Use AI to Defend AI: Ironically, the best defense against AI misuse may be AI itself. AI-powered cybersecurity systems, capable of adaptive learning and rapid threat detection, could serve as a digital immune system against cyberattacks; a minimal sketch of the idea appears just after this list. See: Chirag Shah, The Role Of Artificial Intelligence In Cyber Security (Forbes, 12/17/24).
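To make that third point concrete, here is a minimal, purely illustrative sketch of one common building block of the digital immune system idea: an anomaly detector trained on normal traffic that flags unusual behavior. Everything in it is invented for illustration, including the synthetic access-log features and the exfiltration scenario; nothing here is drawn from Altman's editorial or the articles cited above.

```python
# Illustrative sketch: anomaly detection as a "digital immune system."
# Synthetic features per traffic window: [requests/min, MB sent out, distinct endpoints]
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# 500 windows of simulated normal traffic
normal = rng.normal(loc=[60.0, 5.0, 8.0], scale=[10.0, 1.0, 2.0], size=(500, 3))

# 5 simulated exfiltration-like windows: few requests, huge outbound transfers
attacks = rng.normal(loc=[20.0, 500.0, 2.0], scale=[5.0, 50.0, 1.0], size=(5, 3))

# Train only on normal traffic, so the model learns what "usual" looks like
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

print(detector.predict(attacks))     # -1 means flagged as anomalous
print(detector.predict(normal[:5]))  # mostly 1, meaning normal
```

The point of the toy example is the architecture, not the particular library: a system trained on normal behavior that flags deviations in real time is the kind of adaptive defense the quoted passage anticipates, and it screens traffic far faster than human analysts alone.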

Historical Parallel: In the Cold War, nuclear non-proliferation treaties prevented global catastrophe. Today, we face an AI arms race where the stakes are equally high. Just as the IAEA monitors nuclear technology, an International AI Security Agency could oversee the safe development and deployment of AI systems. See: Akash Wasil, Do We Want an “IAEA for AI”? (Lawfare, 11/20/24).

2. Infrastructure – The Digital Industrial Revolution

Altman calls for massive investment in AI infrastructure: data centers, energy grids, and computational capacity. This infrastructure isn't just about scaling AI (although that is the driving force); it's about ensuring resilience and sustainability.

Here are Sam Altman’s words:

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Legal and Ethical Challenges:

1. Energy and Climate Law: AI is an energy hog. Data centers powering generative models consume vast amounts of electricity; a back-of-envelope sketch after this list shows the scale. Legal frameworks must incentivize sustainable practices, such as renewable energy requirements and carbon taxation.

2. Digital Inclusion Laws: AI infrastructure must be equitable. Governments should fund rural and underserved communities to ensure they benefit from AI advancements, much like the Rural Electrification Act brought electricity to remote areas during the 1930s.

3. Public-Private Partnerships: Massive AI infrastructure projects will require collaboration between governments and tech companies. Contracts must include provisions for data privacy, security standards, and ethical use.
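To put a rough number on the energy point above, here is a back-of-envelope sketch. The 100 MW facility size and the continuous duty cycle are hypothetical round-number assumptions, and the household figure is only a rough U.S. average; none of these numbers come from Altman's editorial.

```python
# Back-of-envelope arithmetic; all inputs are hypothetical assumptions.
facility_mw = 100            # assumed capacity of one large AI data center, in megawatts
hours_per_year = 24 * 365    # assume it runs flat out all year

annual_mwh = facility_mw * hours_per_year   # 876,000 MWh per year

# Rough U.S. average household use: about 10,500 kWh (10.5 MWh) per year
homes_equivalent = annual_mwh / 10.5

print(f"{annual_mwh:,} MWh/year, roughly the use of {homes_equivalent:,.0f} homes")
```

Even under these simple assumptions, one facility draws as much electricity in a year as tens of thousands of homes, which is why renewable-energy requirements and siting rules belong in the legal conversation.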

3. Human Capital – Building a New Workforce

A democratic AI future depends not just on technology, but on people—scientists, engineers, policymakers, and educators—who can develop, govern, and use AI responsibly.

Here are Sam Altman’s words:

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Image of an extremely large server and energy buildings complex under construction, by Ralph Losey using Visual Muse.

Legal and Policy Recommendations:

1. AI Literacy Education: Mandate AI education at all levels, emphasizing not just coding, but critical thinking, ethics, and socio-technical literacy. Schools of law, business, and public policy must train AI-literate leaders.

2. STEM Immigration Policies: The U.S. must remain a magnet for global AI talent. Modernizing H-1B visas and creating AI-specific immigration pathways will be critical.

3. Ethics Certifications for AI Professionals: Just as doctors take the Hippocratic Oath, AI developers should adhere to ethical guidelines. Professional certifications could enforce standards for fairness, transparency, and accountability. There must also be specialized tutoring and certificates of general AI competence in various fields, including law, accounting and medicine. Prompt engineering instruction and certifications will continue to grow in importance as the pace of exponential change accelerates.

4. Global Strategy – AI Diplomacy and Governance

Altman’s final pillar acknowledges that AI is not just a national issue—it’s a global one. The United States must lead in shaping international norms for AI development and deployment.

Here are Altman’s words:

We must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global build out of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computer capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

Geopolitical and Legal Implications:

1. International AI Treaties: Modeled after the Geneva Conventions or Paris Agreement, nations must agree on global standards for AI safety, ethics, and governance. This includes bans on autonomous weapons and commitments to prevent AI-fueled misinformation campaigns.

2. Create an AI Governance Body: Like the IAEA for nuclear energy, a neutral international body could monitor AI safety, resolve disputes, and ensure equitable access to AI benefits.

3. Engage with Adversaries: Altman suggested in his July 25, 2024 Washington Post editorial that dialogue with countries like China is critical, even when values diverge. He indicated that digital diplomacy could establish guardrails to prevent an AI arms race.

It is uncertain how all of this will pan out under the new Trump Administration, but for interesting speculation see: Brianna Rosen, The AI Presidency: What "America First" Means for Global AI Governance (Just Diplomacy, 12/16/24) (first installment in the series Tech Policy under Trump 2.0). Also note how Sam Altman reportedly said in a statement last week: "President Trump will lead our country into the age of A.I., and I am eager to support his efforts to ensure America stays ahead." In Display of Fealty, Tech Industry Curries Favor With Trump (NY Times, 12/14/24).

Conclusion: Lawyers and Technologists as Guardians of the Future

Altman's vision, and the broader insights it provokes, is a plea for action from everyone. Whether Sam realizes it or not, that includes the legal profession. We are essential to these key elements of his vision:

1. Construct and enforce laws that protect AI from misuse while fostering innovation.

2. Champion transparency and accountability in AI systems.

3. Advocate for equitable access to AI’s benefits, ensuring no one is left behind.

Like any transformative technology, AI brings both promise and peril. The fork in the road is before us. Will we choose the democratic path less traveled, where AI empowers humanity to solve its greatest challenges? Or will we succumb to authoritarian control, where AI becomes a tool of oppression?

In Altman’s words:

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.

The answer lies not in the hands of software developers alone but in the collective will of society, including lawyers, lawmakers, judges, educators, and concerned citizens. Legal professionals cannot just be swords wielded by kings and would-be kings. We must be independent guardians and architects of AI's future. The rules must be drafted with great skill and with justice in mind, not power trips. Now is the time for us to begin hands-on action to guide the advent of superintelligent AI.

As Sam Altman warns, the stakes couldn't be higher: "The future of AI is the future of humanity."

Ralph Losey Copyright 2024. All Rights Reserved.