Key AI Leaders of 2024: Huang, Amodei, Kurzweil, Altman, and Nobel Prize Winners – Hassabis and Hinton

January 15, 2025

Written by Ralph Losey and published in January 2025.

Introduction

The year 2024 marked a pivotal time in artificial intelligence, where aspirations transformed into concrete achievements, cautions became more urgent, and the distinction between human and machine began to blur. Many visionary leaders led the way in this year of exponential change, but here are my top six picks: Jensen Huang, Dario Amodei, Ray Kurzweil, Sam Altman, and the Nobel Prize winners – Geoffrey Hinton and Demis Hassabis. These people propelled AI’s development at a dizzying pace, speaking and writing of both the benefits and dangers; notably, all but one remain optimistic.

Huang built the supercomputers powering AI systems, while Amodei shifted from AI skeptic to hopeful visionary. Kurzweil reaffirmed his bold, positive predictions of human-machine singularity, while Altman laid out a democratic path for global AI governance. Hinton and Hassabis, two of the four Nobel Prize winners in AI in 2024, revolutionized fields ranging from neural networks to molecular biology. Everyone was surprised when Geoffrey Hinton left Google in 2023 to yell fire about the AI child he, more than anyone, brought into this world. No one was surprised when he received the Nobel Prize at the end of 2024, near the end of his long career.

The combined work of these six reminds us of AI’s dual nature—its profound ability to benefit humanity while creating new risks requiring constant vigilance. For the legal profession, 2024 underscored the need for proactive leadership, constant education and collaboration with AI to help guide its responsible development.

Jensen Huang: Architect of AI’s Future

Jensen Huang, the CEO and co-founder of NVIDIA, continues to define what’s possible in AI by delivering the computational power to fuel its rise. Huang’s journey began humbly: a young boy from Taiwan accidentally sent to a Kentucky reform school, later waiting tables at Denny’s to make ends meet. These formative experiences shaped his relentless drive and focus, qualities mirrored in NVIDIA’s culture of intensity, innovation and the ever-present whiteboard. For in-depth background on the life of Jensen see my article, Jensen Huang’s Life and Company – NVIDIA: building supercomputers today for tomorrow’s AI, his prediction of AGI by 2028 and his thoughts on AI safety, prosperity and new jobs. Also see, Tae Kim, The Nvidia Way: Jensen Huang and the Making of a Tech Giant (Norton, 12/10/24) (highly recommended).

In 2024, Huang and NVIDIA solidified their role as the backbone of AI progress. Their latest generation of AI supercomputers—capable of unprecedented processing power—drove breakthroughs in generative AI, scientific research, and medical discovery.

That is the prime reason that the stock price of NVIDIA in 2024 increased from approximately $48 per share to $137, a gain of over 180% for the year. The 2024 year-end market capitalization of NVIDIA was over $3.4 trillion, second only to Apple.
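
For those who like to check the math, here is the arithmetic behind that percentage. These are the rounded share prices quoted above, not exact market data:

```python
# Quick check of the 2024 NVIDIA share-price gain quoted above. These are
# the rounded figures from the text, not exact market data.
start_price = 48.0    # approximate split-adjusted price, early January 2024
end_price = 137.0     # approximate price, late December 2024

gain_pct = (end_price - start_price) / start_price * 100
print(f"{gain_pct:.0f}%")  # -> 185%, i.e., "over 180%" for the year
```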

In early July 2024 at the World AI Forum in Shanghai, Huang made waves by predicting Artificial General Intelligence (AGI)—machines as smart as humans across all tasks—would be a reality by 2028. Huang framed AGI as inevitable, driven by exponential progress in hardware and algorithms: “We’re not just accelerating AI. We’re accelerating intelligence itself.” His foresight in pivoting NVIDIA from gaming GPUs to AI infrastructure years ago is a testament to his long-term vision.

Huang remains pragmatic about the risks, although still very optimistic. He advocates for AI safety frameworks modeled on aviation: “No one stopped airplanes because they could crash. We made them safer. AI is no different.” I can’t say I agree there is “no difference” because the dangers of AI are far more severe and AI is already in use by hundreds of millions of people.

Jensen Huang sees AI creating prosperity by automating repetitive jobs while generating new industries that didn’t exist before. In Huang’s eyes, the legal profession—often burdened by time-consuming tasks—stands to benefit enormously.

Dario Amodei’s Surprising Optimism in 2024

Dario Amodei is a prominent figure in the field of artificial intelligence. He has a Ph.D. in biophysics from Princeton and conducted postdoctoral research at Stanford, focusing on the study of the brain. He then joined OpenAI, a leading AI research lab, where he specialized in AI safety. In 2021, he co-founded Anthropic, the company behind the AI assistant Claude. Amodei was known for his cautious stance on AI and his warnings about potential dangers, including the risks posed by superintelligent AI.

In October 2024 Amodei’s public stance took a surprising turn. He released his 28-page essay “Machines of Loving Grace.” The title is inspired by a poem by Richard Brautigan, which envisions a harmonious coexistence between humans, nature, and technology. Amodei’s work aims to guide us towards a future where AI lives up to that ideal—a future where AI enhances our lives, empowers us to solve our most pressing problems, and helps us create a more just and equitable world. For in-depth background see my article, Dario Amodei’s Vision: A Hopeful Future ‘Through AI’s Loving Grace,’ Is Like a Breath of Fresh Air.

Amodei outlined his surprisingly optimistic vision focusing on five key areas where AI could revolutionize our world:

1. Biology and Physical Health: Amodei, drawing on his expertise in biophysics, believes AI could accelerate medical breakthroughs, leading to the prevention and treatment of infectious diseases, the elimination of most cancers, cures for genetic diseases, and even the extension of the human lifespan.

2. Neuroscience and Mental Health: Amodei sees AI as a powerful tool for understanding the brain and revolutionizing mental health care, potentially leading to cures for conditions like depression, anxiety, and PTSD.

3. Economic Development and Poverty: Amodei is a strong advocate for equitable access to AI technologies and believes AI can be used to alleviate poverty, improve food security, and promote economic development, particularly in the developing world.

4. Peace and Governance: While acknowledging the potential for AI to be misused in warfare and surveillance, Amodei also sees its potential to strengthen democracy, promote transparency in government, and facilitate conflict resolution. Amodei’s essay includes a discussion of the internal conflict between democracy and autocracy within countries. He offers a hopeful perspective—one that aligns with the sentiments I often express in my AI lectures:

We all hope that America remains at the forefront of this fight, maintaining its leadership in pro-democratic policies.

5. Work and Meaning: Amodei recognizes concerns about AI-driven job displacement but believes AI will also create new jobs and industries. He envisions a future where people are free to pursue their passions and find meaning outside of traditional work structures.

Amodei has not become blind to the challenges and risks associated with AI. He still states that realizing his optimistic vision requires careful and responsible development. Amodei stresses the importance of ethical guidelines and international agreements to prevent the misuse of AI in areas like autonomous weapons systems. He also advocates for ensuring equal access to AI benefits to avoid exacerbating existing inequalities and calls for proactive efforts to mitigate AI bias and ensure fairness in AI applications.

Amodei’s combination of scientific knowledge, industry experience, and nuanced understanding of both the potential and the perils of AI makes him a crucial voice in shaping the future of this technology. His advocacy for responsible AI development, ethical considerations, and equitable access is essential for ensuring that AI benefits all of humanity.

His new, much more optimistic vision shared in 2024 is an inspiration and a call to action for researchers, policymakers, and the public to work towards creating a future where AI is a force for good.

Ray Kurzweil: The Singularity Is Nearer

Ray Kurzweil’s name remains synonymous with bold predictions, and 2024 proved no exception. According to Bill Gates: “Ray Kurzweil is the best person I know at predicting the future of AI.” I was quick to read and write about his new book of predictions, The Singularity is Nearer (When We Merge with AI) (Viking, June 25, 2024). It is a long-awaited sequel to his 2005 book, The Singularity is Near, but stands on its own. Ray does not change his original prediction dates: AGI by 2029. That’s just four years from now. And Kurzweil still predicts The Singularity will arrive in 2045. He thinks it will be great, but I’m not so sure.

Ray Kurzweil’s 2024 statement on his book captures the essence of his hopeful message that The Singularity will arrive in 2045.

“The robots will take over,” the movies tell us. I don’t see it that way. Computers aren’t in competition with us. They’re an extension of us, accompanying us on our journey. . . . By 2045 we will have taken the next step in our evolution. Imagine the creativity of every person on the planet linked to the speed and dexterity of the fastest computer. It will unlock a world of limitless wisdom and potential. This is The Singularity.

Kurzweil himself acknowledges the many difficulties ahead to reach AGI level by 2029. He thinks that generative AI models like Gemini (Kurzweil works for Google) and ChatGPT are still being held back from attaining AGI by: (1) contextual memory limitations (too small a memory, causing GPT to forget earlier input; token limits), (2) common sense (lacking a robust model of how the real world works); and (3) social interaction (lacking social nuances not well represented in text, such as tone of voice and humor). The Singularity is Nearer at pages 54-58. Kurzweil thinks these will be simple to overcome by 2029. I have been working on the humor issue myself and am inclined to agree.
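
To make the first limitation concrete, here is a minimal sketch of the workaround chat systems commonly use for a fixed context window: silently dropping the oldest turns. The token limit and the word-count “tokenizer” here are hypothetical simplifications, not the behavior of Gemini, ChatGPT, or any particular model:

```python
# Sketch of the standard workaround for a fixed context window: keep only
# the most recent conversation turns that fit, so the oldest input is
# silently "forgotten." Word count stands in for a real tokenizer here,
# and the 4,096-token limit is hypothetical, not any model's actual cap.

MAX_TOKENS = 4096

def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a model's tokenizer

def fit_history(messages: list[str], limit: int = MAX_TOKENS) -> list[str]:
    kept, total = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        total += count_tokens(msg)
        if total > limit:
            break                        # everything older is dropped
        kept.append(msg)
    return list(reversed(kept))          # restore chronological order

history = ["early instructions " * 3000, "the user's latest question"]
print(fit_history(history))  # only the latest turn survives the cutoff
```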

Kurzweil also points to the problem with AI hallucinations. The Singularity is Nearer at page 65. This is something we lawyers worry about a lot too. “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct (Florida Bar approved CLE). I personally see great progress being made in this area now by software improvements and by more careful user prompting. See my OMNI Version – ChatGPT4o – Retest of the Panel of AI Experts – Part Three.

As usual, whether we attain AGI by 2029 depends on what you mean by AGI, on how you define the term. The same applies to the famous Turing test of intelligence. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’ (e-Discovery Team, 2/26/24).

To Kurzweil, AGI means attaining the highest human level in all fields of knowledge, which is not something that any single human has ever achieved. The AI would be as smart as the top experts in every specialized field: law, medicine, liberal arts, science, engineering, mathematics, and software and AI design and programming. The Singularity is Nearer at pages 63-69 (Turing Test).

When an AI can do that, match or exceed the best of the best humans in all fields, then Kurzweil thinks AGI will have been attained, not before. That is a very stringent definition and test of AGI. It explains why Kurzweil is sticking with his original 2029 prediction and not moving it up to an earlier time of 2025 or 2026, as Elon Musk and others have done. Their quicker predictions might only work if you compare the AI to the above-average human in every field, not the Einsteins on the peaks.

Further, Kurzweil predicts that once an AI is as smart as the most expert humans in all fields, the AGI level, then incremental increases in AI intelligence will appear to end. Instead, there will appear to be a “sudden explosion of knowledge.” AI intelligence will jump to a “profoundly superhuman” level. The Singularity is Nearer at page 69. Then, the next level, The Singularity, will become attainable as our knowledge increases unimaginably fast. Many great new inventions and accomplishments will occur in short order as AGI-level AI and its human handlers rush towards the Singularity.

Kurzweil defines The Singularity as a hybrid state of human AI merger, where human minds unite with the “profoundly superhuman” level of AI minds. The unification sought requires the human neocortex to extend into the cloud. This presents great technical and cultural hurdles that will have to be overcome.

Kurzweil’s predictions raise profound legal and ethical questions. Who governs AI when its decisions exceed human comprehension? Will you have to be enhanced to practice law, to judge, to create AI policies and laws? What happens when AI rewrites concepts of identity, privacy, and accountability?

Sam Altman: Champion of Democratic AI

Sam Altman, the CEO of OpenAI, in addition to leading his company, wrote two important pieces in 2024 and promoted their ideas: a Washington Post editorial, Who Will Control the Future of AI? (July 25, 2024), and the essay The Intelligence Age.

Sam Altman is overall very optimistic, but he does consider both the dark and light sides of AI. He can hardly ignore the dark side, as he is asked questions about it every day. In the July editorial he states, I think correctly, that we are at a crossroads:

 about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman of course advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse. In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

To attain these ends Altman advocates for a global governance body—akin to the International Atomic Energy Agency—to establish norms and prevent misuse. His vision prioritizes AI for global challenges: climate change, pandemics, and economic inequality.

In Sam Altman’s next essay, The Intelligence Age, he presents the positive view. He predicts the exponential growth we’ve seen in generative AI will continue, unlocking astonishing advancements in science, society, and beyond. According to Altman, AI-driven breakthroughs could soon lead us into a virtual utopia—one where the possibilities seem limitless. While this optimistic vision may come across as a sales pitch, to his credit, there’s more depth to it. Altman’s predictions are rooted in science and insider knowledge, which means his ideas should be taken seriously—but with a healthy dose of skepticism. See my article, Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision.

Sam’s The Intelligence Age essay has a personal touch and some poetic qualities. I especially like the reference at the start of this quote to silicon as melted sand.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

I agree AI is the most important invention in human history. Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time. To keep it real, however, all of the past technology shifts were buffered by incremental growth. AI’s exponential acceleration may very well overwhelm our existing systems. We need to try to manage this transition and to do that we need to consider the problems that come with it. See Seven Problems of AI: an incomplete list with risk avoidance strategies and help from “The Dude” (8/6/24). To be fair to Sam, his prior editorial did address some of the dark side but not all.

Sam Altman’s essay, The Intelligence Age, offers a compelling vision of how AI can dramatically improve life in the coming decades. However, as captivating as Altman’s optimism is, we must balance it with a dose of realism. There is more to it than the political struggle between democracy and dictatorship discussed in his earlier editorial. The road ahead is incredible but filled with many hurdles that cannot be ignored.

Geoffrey Hinton and Demis Hassabis: Nobel Laureates of AI

2024 brought well-deserved recognition to Geoffrey E. Hinton and Demis Hassabis, as both received Nobel Prizes for their contributions to science. This is also a big deal for the field of AI itself, as no Nobel Prize had ever been awarded for AI work prior to the four given this year, two in Physics and two in Chemistry.

Geoffrey Hinton: Godfather of AI

Who is Geoffrey E. Hinton? Hinton, often referred to as the “godfather of AI,” was awarded the 2024 Nobel Prize in Physics alongside John Hopfield for their foundational work on artificial neural networks. Hinton’s key contribution was the development of the backpropagation algorithm in the 1980s, the significance of which went largely unrecognized for almost thirty years. But this new approach to AI revolutionized machine learning by allowing computers to “learn” through a process of identifying and correcting errors. This breakthrough paved the way for the development of powerful AI systems like the large language models we have today.
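
For readers who want to see the mechanics rather than just read about them, here is a minimal sketch of that error-correction loop in plain Python with NumPy. It trains a tiny two-layer network on the classic XOR problem. This illustrates the idea Hinton championed, not his original formulation; the network size, learning rate, and iteration count are arbitrary choices for the demo:

```python
import numpy as np

# A tiny two-layer network learning XOR, the classic problem single-layer
# networks cannot solve. Backpropagation uses the chain rule to trace the
# output error back to every weight, then nudges each weight to reduce it.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10_000):
    # Forward pass: predictions from the current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: identify the error and propagate it back, layer by layer.
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically converges toward [0, 1, 1, 0]
```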

Geoffrey Hinton is a British-Canadian born in 1947 in Wimbledon, London, England. He comes from a very academically accomplished family. His great-great-grandfather was George Boole, the mathematician and logician who developed Boolean algebra, and his great-grandfather was Charles Howard Hinton, a mathematician and inventor. His parents were involved in science and social work. Geoffrey studied experimental psychology at King’s College, Cambridge, earning a Bachelor of Arts in 1970. Next, he earned a Ph.D. in Artificial Intelligence at the University of Edinburgh in 1978.

Hinton then moved to the United States to work at the University of California, San Diego (1978–1980), went back to the U.K. to the University of Sussex (1980–1982), then returned to the U.S. to Carnegie Mellon University (1982–1987), before finally settling in Canada at the University of Toronto (1987–2013, and since then as an Emeritus Professor).

His early work was focused on neural networks, a concept that was largely dismissed by the AI community during the 1970s and 1980s in favor of symbolic AI approaches. Hinton’s development of the Boltzmann Machine in 1983 while at Carnegie Mellon was largely ignored. It was a stochastic neural network capable of learning internal representations and solving complex combinatorial problems, work that later led to the Nobel Prize. Despite being ignored for decades, Geoffrey Hinton stubbornly persevered. It turns out he was right and almost everyone else in the AI establishment was wrong (although many to this day still refuse to admit it and still hold generative AI in low regard). Professor Hinton eventually flourished at the University of Toronto, where he established himself as a global leader in neural networks and deep learning research and supervised a number of later influential students, including Yann LeCun (Meta) and Ilya Sutskever (OpenAI).

In 2012 Hinton co-founded DNNResearch Inc., a startup focused on deep learning algorithms, with two of his students, Alex Krizhevsky and Ilya Sutskever. The company played a key role in developing AlexNet, which won the 2012 ImageNet Challenge and demonstrated the power of deep convolutional neural networks. In 2013, DNNResearch was acquired by Google, and Hinton joined Google Brain as a Distinguished Research Scientist. At Google, Hinton worked on large-scale neural networks, deep learning innovations, and their applications in AI.

In May 2023 Geoffrey Hinton shocked the world by resigning from Google so that he could freely warn the world about the scary-fast advancement of generative AI and its potential risks.

Image of fear and regret by Ralph Losey using Visual Muse
  • AI could become more intelligent than humans. Hinton now says that AI could become so intelligent that it could take control and eliminate human civilization. He has also said that AI could become more efficient at cognitive tasks than humans: “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”
  • AI could be used for disinformation. Hinton has warned that AI could be used to create and spread disinformation campaigns that could interfere with elections.
  • AI could make the rich richer and the poor poorer. Hinton has said that AI will take away many jobs; in a decent society that would be fine, but in ours it could make the rich richer and the poor poorer.
  • AI could make it impossible to know what’s true. Hinton has said that AI could make it impossible to know what’s true by flooding the world with fakes.
  • Government Regulations. Hinton has said that he thinks politicians need to devote equal time and money to developing guardrails. Further, “I think we should be very careful about how we build these things.” “There’s no guaranteed path to safety as artificial intelligence advances.”
  • Potential for AI Misuse: “It is hard to see how you can prevent the bad actors from using it for bad things.” “Any new technology, if it’s used by evil people, bad things can happen.”
  • Underestimating AI’s Progress: “The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • The Godfather’s Guilt: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Although much of the AI world was shocked and surprised by Geoffrey E. Hinton’s seemingly excessive dark fears, they were not surprised to see the Godfather of generative AI win the Nobel Prize. We will wait and see how his concerns about AI change in the future. As a Nobel Prize winner his opinion will be taken more seriously than ever.

Demis Hassabis: Gamer and Future King of AI?

Who is Demis Hassabis? He is the co-founder of DeepMind, whom many think represents the best of the next generation of AI scientist leaders. In 2024 he was awarded the Nobel Prize in Chemistry, alongside his colleague at DeepMind, John Jumper, for developing AlphaFold2.

Their award was no surprise in the AI community. Hassabis has made no secret of his desire for DeepMind to win this award, and he expects to receive many more in the coming years. The Nobel Prize Committee recognized that the AlphaFold2 AI algorithm’s ability to accurately predict protein structures was a groundbreaking achievement. It has far-reaching implications for medicine and other scientific research.

Unlike Professor Hinton, Demis Hassabis still works for Google (actually for the holding company Alphabet) in London and is not a doom and gloom type. He is a gamer, not a professor, and he is used to competition and risks. Although also deeply involved in and concerned about AI ethics, Hassabis and his teams at DeepMind are working aggressively on many more scientific breakthroughs and inventions. Unlike the older pioneer Hinton, Hassabis, about thirty years his junior, has not yet reached his prime. Some speculate he may not only receive more Nobel Prizes, but may also someday lead Alphabet itself, not just its subsidiary, DeepMind.

Demis Hassabis’ life story is very interesting and explains his interdisciplinary approach. He was born in 1976 in London to a family that combined Greek Cypriot and Chinese Singaporean heritage. Young Demis was recognized as a child prodigy at age four for his skill at chess. He attained the rank of chess master at age 13. He is used to competing for awards and winning. Chess also instilled in him a deep interest in strategy, problem-solving, and human cognition.

That led to playing complex computer games and continued winning. By age 17 he was designing video games, including co-designing the highly regarded Theme Park (1994, Atari and PC). This was a simulation game where players built different types of theme parks. It sparked a whole genre of management simulation games revolving around similar ideas.

Demis, who obviously enjoyed roller coaster rides and theme parks, created this groundbreaking video game while working at the UK game company Bullfrog Productions. Having passed his school exams two years early, he was accepted to Christ’s College, Cambridge, at age 16 to study computer science, but was advised by Cambridge to take a gap year first, which he spent in the games industry. After Cambridge he started his own video game company, Elixir Studios. This allowed him to combine his love for games and AI to create political simulation games. Elixir released two games: Republic: The Revolution (2003, PC and Mac) (a simulation where you rise to power in a former Soviet state using political and coercive means) and Evil Genius (2004, PC) (a comedic take on spy fiction). Elixir started as a winning enterprise but failed in 2005 in a video game downturn. Rumor has it they were working at the time on a video game version of The Matrix. Can you imagine the irony if Hassabis had completed it?

The still-young genius Demis Hassabis then shifted his interests to the scientific study of human intelligence and enrolled in the Neuroscience Ph.D. program at University College London. Due to his strong hands-on computational background in AI, and his incredible genius, his academic work immediately impressed professors there, as well as at MIT and Harvard, where he also studied. Hassabis explored the brain neurology of how humans reconstruct past experiences to imagine the future. He was looking for inspiration in the human brain for new types of AI algorithms.

The research by Hassabis on the hippocampus and its role in memory and imagination was published in leading journals like Nature, even before he earned his Ph.D. in Cognitive Neuroscience. His very first academic work, Patients with hippocampal amnesia cannot imagine new experiences (PNAS, 1/30/2007), became a landmark paper that showed systematically, for the first time, that patients with damage to their hippocampus, known to cause amnesia, were also unable to imagine themselves in new experiences.

Soon after completing his studies, Demis Hassabis, already an AI legend, co-founded DeepMind with Shane Legg and Mustafa Suleyman in 2010. The purpose of DeepMind was to build artificial general intelligence (AGI) capable of solving complex problems. They planned to do so by combining insights from neuroscience, mathematics, and computer science into AI research. Only four years later DeepMind was considered the best AI lab in the UK and was acquired by Google in 2014 for £400 million. Google at the time promised that DeepMind would retain its autonomy; after years of negotiations this never really panned out, but they did get the money they needed to retain top scientists and hire more. DeepMind then developed AlphaGo, which defeated world champion Lee Sedol in 2016. DeepMind next created AlphaZero in 2018, called the “One program to rule them all.” As the abstract to the paper announcing the discovery stated:

A single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

David Silver, Demis Hassabis, et al., A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play (Science, 12/07/18). The incredible fact is that AlphaZero attained superhuman performance without human input, and did so in multiple games, not just chess.
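
For the technically curious, here is a minimal sketch of the self-play idea in Python. It is emphatically not AlphaZero, which pairs a deep neural network with Monte Carlo tree search; this toy uses a simple lookup table and tic-tac-toe instead. But it shows the same core principle the abstract describes: the program starts from random play, knows nothing but the rules, and improves solely by playing against itself.

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. NOT AlphaZero (which pairs a deep
# network with Monte Carlo tree search), but the same core idea: start from
# random play, know only the rules, improve purely by playing yourself.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)      # learned value of (board, move) for the mover
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate

def choose(board, moves):
    if random.random() < EPSILON:                    # sometimes explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # otherwise exploit

for game in range(100_000):
    board, player, history = "." * 9, "X", []
    while not winner(board):
        moves = [i for i, c in enumerate(board) if c == "."]
        if not moves:
            break
        m = choose(board, moves)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    for state, move, mover in history:               # credit assignment
        reward = 0 if w is None else (1 if w == mover else -1)
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

# The greedy policy is now a competent player; its favorite opening move
# is typically square 4, the center.
print(max(range(9), key=lambda m: Q[("." * 9, m)]))
```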

In 2018 DeepMind went beyond games to invent the AlphaFold AI-based software. In 2020 DeepMind released AlphaFold2, which achieved groundbreaking results at CASP14, solving the protein folding problem that had challenged scientists for decades. This breakthrough second version of AlphaFold was created by DeepMind under the leadership, both managerial and scientific, of Demis Hassabis and of the lead scientist on the project, DeepMind’s John Jumper. See Hassabis, Jumper, et al., Highly accurate protein structure prediction with AlphaFold (Nature, 7/15/21) (this paper on AlphaFold2 by DeepMind scientists had been cited more than 27 thousand times, even before the Nobel Prize award to Hassabis and Jumper for this discovery).

AlphaFold2 utilizes transformer neural networks, a type of AI architecture, and was trained on vast datasets of known protein structures. The algorithm’s ability to accurately predict protein structures significantly accelerates scientific discovery in various fields, including drug development, disease research, and bioengineering. This protein folding achievement led to Hassabis and Jumper sharing the Nobel Prize in Chemistry in 2024.
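
For readers curious about the underlying mechanism, here is a minimal sketch of scaled dot-product attention, the basic transformer operation. AlphaFold2’s actual network is far more elaborate (its Evoformer reasons over residue pairs and evolutionary alignments), so treat this only as the generic building block, with made-up dimensions:

```python
import numpy as np

# Scaled dot-product attention, the core transformer operation. Each position
# in a sequence (say, a residue in a protein chain) scores every other
# position, then aggregates their features weighted by those scores, letting
# distant elements influence one another directly.

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over positions
    return w @ V                                      # weighted mix of values

rng = np.random.default_rng(1)
seq_len, dim = 6, 16               # hypothetical: 6 positions, 16 features
x = rng.normal(size=(seq_len, dim))
out = attention(x, x, x)           # self-attention: Q, K, V from same input
print(out.shape)                   # -> (6, 16)
```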

Demis Hassabis remains the CEO of Google DeepMind, which is now a subsidiary of Alphabet Inc. According to the excellent, well-researched new book Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson, which is about Demis Hassabis and Sam Altman, Hassabis was satisfied with his position at Alphabet, even though Google DeepMind never received its once-promised complete autonomy. According to Olson, Hassabis is considered by many a likely successor to Sundar Pichai. The book was completed a few months before the Nobel Prize announcement in late 2024.

I personally doubt Demis Hassabis would still want that position now that he has tasted Nobel Prize victory at age 48. He has always seemed more focused on winning than money, and only five people in history have won two Nobel Prizes. Marie Curie was the first, with one in Physics in 1903 and another in Chemistry in 1911. John Bardeen won the Nobel Prize in Physics twice, in 1956 for inventing the transistor and in 1972 for the theory of superconductivity. Frederick Sanger and Karl Barry Sharpless both won the Nobel Prize in Chemistry twice. The incredible Linus Pauling, an American, won a Nobel Prize in Chemistry in 1954 and, get this, the Nobel Peace Prize in 1962 for his activism against nuclear weapons.

Linus Pauling (1901-1994) is the only person to win two unshared Nobel Prizes. The other double winners shared credit with others for at least one prize, as Demis Hassabis did when winning his first Nobel Prize. I bet Demis goes for another Nobel on his own, or a shared triple. Any gamer would try to win, and a true genius like Demis Hassabis, unlike other alleged genius wannabes, knows the stupidity of competing for mere money and power.

Conclusion: A Call to Action

The events of the year 2024 have shown the incredible potential of AI and how critical the next few years may be. What if ex-Google Geoffrey Hinton is right, and Google’s Ray Kurzweil and Demis Hassabis are wrong? No one knows for sure. Maybe everything will turn out fine, and maybe we will be in crisis mode for decades. Maybe we will need more lawyers than ever before, maybe far fewer. It seems to me the probabilities are on more, because private enterprise and governments need lawyers to enact and follow rules and regulations on AI. Lawyers will likely have to work in teams with AI experts and policy experts to ensure that AI systems align with fairness, ethics, and human values. If so, and if we do our jobs right, maybe Demis Hassabis will win three golds.

I am pretty sure Demis will rise to the challenge, but will we? Will we be able to keep up with the technology leaders of AI? Most lawyers are very competitive and so I say yes we can. We can play that game and we can win.

Legal professionals can help humanity to navigate the difficult times ahead. Lawyers are uniquely qualified by training and temperament to provide balanced advice. We know there are always two sides to a story. We know there are always dark forces and dangers about. We know we must both think ahead and plan, and stay current on the facts at hand to adjust to the unexpected. Life is far more complicated than any game of chess or Go. We will have to work hard to keep our hand in the game for as long as we can. Perhaps with AI’s help we can all win the game together.

Now listen to the EDRM Echoes of AI podcast on this article, Echoes of AI on the Key AI Leaders of 2024. Hear two Gemini model AIs talk about this article, plus responses to audience questions. They wrote the podcast, not Ralph.

Ralph Losey Copyright 2025. All Rights Reserved.


The Future of AI: Sam Altman’s Vision and the Crossroads of Humanity

December 18, 2024

by Ralph Losey

To close out the year 2024 I bring to your attention an important article by Sam Altman, CEO of OpenAI, published in the Washington Post on July 25, 2024: Who will control the future of AI? Here Altman opines that control of AI is the most urgent question of our time. He states, I think correctly, that we are at a crossroads:

about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse.

In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

Who Will Control the Future of AI? A Legal, Ethical, and Technological Call to Action

In Altman’s editorial, “Who Will Control the Future of AI?,” he gets serious about the dark side of AI and challenges humanity to decide what kind of world we want to inhabit.

Fake Video of Sam Altman using Kling by Losey.

The choice, Altman argues, is stark and existential: Will AI evolve under democratic ideals—decentralized, equitable, and empowering—or fall into the grip of authoritarian control, shaped by concentrated power, surveillance, and cyber warfare? Like the poet Robert Frost’s image in The Road Not Taken, we are faced with two paths forward:

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

In this opinion article Sam Altman warns about the dangers of AI falling into the wrong hands. In his words:

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

Due to our current situation Altman urges action and legal regulations in four areas: security, infrastructure, human capital, and global strategy. This is where legal professionals are urgently needed, especially those who understand the power and potential of AI and are willing to take the path less travelled and fight for freedom, not fame and fortune.

The Crossroads: Two Futures, One Choice

Altman envisions two potential AI futures:

1. Democratic AI: A world where AI systems are transparent, aligned with human values, and distribute benefits equitably. This will require both industry and government regulation. In this scenario, AI empowers individuals, fuels economic growth, and fosters breakthroughs in healthcare, education, and beyond.

2. Authoritarian AI: A dystopian alternative, where AI becomes a tool for repression and control. Dictatorships will in Altman’s words:

[F]orce U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries. . . . (they) will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power.

The historical echoes are chilling. Will we have the moral fortitude and ethical alignment to make America truly great again? Will we stand up again as we did in WWII to fight against ethnic oppression, hatred and dictators? Will we preserve the liberties and privacy of all individuals? Or will our political and industrial leaders turn us into a dual-class surveillance state? Without decisive action now, AI may quickly push the world either way.

This is the challenge before us: how do we ensure AI remains a tool for liberation, not oppression? How can legal and social systems rise to meet this moment? Again, Altman opines we must focus on four things: security, infrastructure, human capital, and global strategy.

1. AI Security – Protecting the Keys to the Kingdom

Altman begins with security, and for good reason: if AI’s core systems—model weights and training data—fall into the wrong hands, the results could be catastrophic. Imagine a scenario where rogue actors or authoritarian regimes gain access to the “brains” of cutting-edge AI systems. Unlike traditional data theft, this isn’t just about stealing files—it’s about stealing intelligence. Teams of AI-enhanced cybersecurity experts, including lawyers, are needed to protect our country from enemy states and criminal gangs, both foreign and domestic. Trade-secret laws must be strengthened and enforced globally.

Here are Sam’s words:

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Legal and Practical Imperatives:

1. Strengthen Cybersecurity Laws: Current frameworks, such as the Computer Fraud and Abuse Act (CFAA), were not built to handle the unique challenges posed by AI. We need laws that specifically address AI model theft and misuse. See: Bruce Schneier: ‘A Hacker’s Mind’ and His Thesis on How AI May Change Democracy (Hacker Way) (“Flexible regulatory frameworks are essential to adapt to technological advancements without stifling innovation.”)

2. Establish AI Export Controls: Just as nuclear technology is heavily controlled, AI systems must be subject to rigorous export regulations. The U.S. Department of Commerce restricted chip exports to China in 2024, but this is only the beginning. See: Understanding the Biden Administration’s Updated Export Controls (Center for Strategic and International Studies, 12/11/24).

3. Use AI to Defend AI: Ironically, the best defense against AI misuse may be AI itself. AI-powered cybersecurity systems—capable of adaptive learning and rapid threat detection—could serve as a digital immune system against cyberattacks. See: Chirag Shah, The Role Of Artificial Intelligence In Cyber Security (Forbes, 12/17/24).

Historical Parallel: In the Cold War, nuclear non-proliferation treaties prevented global catastrophe. Today, we face an AI arms race where the stakes are equally high. Just as the IAEA monitors nuclear technology, an International AI Security Agency could oversee the safe development and deployment of AI systems. See: Akash Wasil, Do We Want an “IAEA for AI”? (Lawfare, 11/20/24).

2. Infrastructure – The Digital Industrial Revolution

Altman calls for massive investments in AI infrastructure—data centers, energy grids, and computational capacity. This infrastructure isn’t just about scaling AI (although that is the driving force); it’s about ensuring resilience and sustainability.

Here are Sam Altman’s words:

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Legal and Ethical Challenges:

1. Energy and Climate Law: AI is an energy hog. Data centers powering generative models consume vast amounts of electricity. Legal frameworks must incentivize sustainable practices, such as renewable energy requirements and carbon taxation.

2. Digital Inclusion Laws: AI infrastructure must be equitable. Governments should fund rural and underserved communities to ensure they benefit from AI advancements, much like the Rural Electrification Act brought electricity to remote areas during the 1930s.

3. Public-Private Partnerships: Massive AI infrastructure projects will require collaboration between governments and tech companies. Contracts must include provisions for data privacy, security standards, and ethical use.

3. Human Capital – Building a New Workforce

A democratic AI future depends not just on technology, but on people—scientists, engineers, policymakers, and educators—who can develop, govern, and use AI responsibly.

Here are Sam Altman’s words:

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Extremely large server, energy buildings complex construction image by Ralph Losey using Visual Muse

Legal and Policy Recommendations:

1. AI Literacy Education: Mandate AI education at all levels, emphasizing not just coding, but critical thinking, ethics, and socio-technical literacy. Schools of law, business, and public policy must train AI-literate leaders.

2. STEM Immigration Policies: The U.S. must remain a magnet for global AI talent. Modernizing H-1B visas and creating AI-specific immigration pathways will be critical.

3. Ethics Certifications for AI Professionals: Just as doctors take the Hippocratic Oath, AI developers should adhere to ethical guidelines. Professional certifications could enforce standards for fairness, transparency, and accountability. There must also be specialized tutoring and certificates of general AI competence in various fields, including legal, accounting and medical. Prompt engineering instruction and certifications will continue to grow in importance as the pace of exponential change accelerates.

4. Global Strategy – AI Diplomacy and Governance

Altman’s final pillar acknowledges that AI is not just a national issue—it’s a global one. The United States must lead in shaping international norms for AI development and deployment.

Here are Altman’s words:

We must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global build out of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computer capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

Geopolitical and Legal Implications:

1. International AI Treaties: Modeled after the Geneva Conventions or Paris Agreement, nations must agree on global standards for AI safety, ethics, and governance. This includes bans on autonomous weapons and commitments to prevent AI-fueled misinformation campaigns.

2. Create an AI Governance Body: Like the IAEA for nuclear energy, a neutral international body could monitor AI safety, resolve disputes, and ensure equitable access to AI benefits.

3. Engage with Adversaries: Altman suggested in his July 25, 2024 Washington Post editorial that dialogue with countries like China is critical, even when values diverge. He indicated digital diplomacy could establish guardrails to prevent an AI arms race.

It is uncertain how all of this will pan out under the new Trump Administration, but for interesting speculation see: Brianna Rosen, The AI Presidency: What “America First” Means for Global AI Governance (Just Security, 12/16/24) (first installment in the series, Tech Policy under Trump 2.0). Also, note how Sam Altman reportedly said in a statement last week: “President Trump will lead our country into the age of A.I., and I am eager to support his efforts to ensure America stays ahead.” In Display of Fealty, Tech Industry Curries Favor With Trump (NY Times, 12/14/24).

Conclusion: Lawyers and Technologists as Guardians of the Future

Altman’s vision—and the broader insights it provokes—is a plea for action from everyone. Whether Sam realizes it or not, that includes the legal profession. We are essential to these key elements of his vision:

1. Construct and enforce laws that protect AI from misuse while fostering innovation.

2. Champion transparency and accountability in AI systems.

3. Advocate for equitable access to AI’s benefits, ensuring no one is left behind.

Like any transformative technology, AI brings both promise and peril. The fork in the road is before us. Will we choose the democratic path less travelled, where AI empowers humanity to solve its greatest challenges? Or will we succumb to authoritarian control, where AI becomes a tool of oppression?

In Altman’s words:

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.

The answer lies not in the hands of software developers alone but in the collective will of society, including lawyers, lawmakers, judges, educators, and concerned citizens. Legal professionals cannot just be swords wielded by kings and would-be kings. We must be independent guardians and architects of AI’s future. The rules must be drafted with great skill and with justice in mind, not power trips. Now is the time for us to begin hands-on action to guide the advent of superintelligent AI.

As Sam Altman warns, the stakes couldn’t be higher: “The future of AI is the future of humanity.”

Ralph Losey Copyright 2024. All Rights Reserved.


WARNING: The Evidence Committee Will Not Change the Rules to Help Protect Against Deep Fake Video Evidence

December 4, 2024

The November 8, 2024 meeting of the Evidence Committee made it clear that the august members of the committee do not believe our warnings. They will do little or nothing to protect our system of justice from the oncoming storm of deepfake justice. They think it is a fake problem and that Judge Paul Grimm (ret.) and Professor Maura Grossman are wrong. This is not unexpected. Losey, The Problem of Deepfakes and AI-Generated Evidence: Is it time to revise the rules of evidence? Part One and Part Two. Here is a deepfake video of me talking about the committee and deepfake videos.

True Deep Fake videos claim to be true and are much better than this.

Check out the EDRM CLE on DeepFakes on December 5, 2024 for more information. Ralph (the real one) appears on a panel with Judge Ralph Artigliere (ret.) and Professor Maura Grossman. Bottom line: we must all be very diligent and learn as much as we can about fake videos and what to do when you are hit with one. Also, what to do if your client presents you with a video too good to be true or otherwise suspect. We are now living in a world of the “liar’s dividend,” and it is hitting our courts.

Ralph Losey Copyright 2024. All Rights Reserved.


Dario Amodei’s Vision: A Hopeful Future ‘Through AI’s Loving Grace,’ Is Like a Breath of Fresh Air

November 1, 2024

By Ralph Losey

Published on November 1st, 2024

While almost everyone is panicking about a potential robot apocalypse, Dario Amodei, the CEO and co-founder of Anthropic (“Claude”), is explaining how AI might compress 100 years of medical progress into a decade, cure mental illnesses such as PTSD and depression, and alleviate poverty. Dario, a well-respected scientist previously known for his cautious, even gloomy, outlook, now speaks with optimism—and the world is listening.

Amodei is not a salesman like Sam Altman, who frequently makes similar predictions. Instead, Dario Amodei is an experienced scientist known for highlighting the risks of AI. He holds a Ph.D. in biophysics from Princeton and completed his postdoctoral research at Stanford School of Medicine. He also served as the Vice President of Research at OpenAI. In 2021, he and his sister, Daniela Amodei, the former Vice President for Safety and Policy at OpenAI, left the company to co-found Anthropic. Amodei’s detailed predictions in his 28-page essay, Machines of Loving Grace, are both profound and inspiring.


Dario’s essay is filled with science, rigorous analysis, and joyful visions, many of which he believes could begin to materialize as early as 2026. This optimistic outlook offers us all a much-needed breath of fresh air.

Introduction

Unlike Sam Altman’s, Dario Amodei’s future predictions go into specifics grounded in science and analysis. His 14,000-word essay, Machines of Loving Grace (October 2024), is not meant to be all-inclusive; it makes predictions in the five main categories that Amodei is most excited about:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

Since Dario Amodei is a respected scientist and business leader, his visions of the future are taken very seriously, even if they do sound like science fiction. Dario reluctantly admits he is a science fiction fan and mentions one book, The Player of Games. (I reread it and noticed two spaceship names I recognized: Of Course I Still Love You and Just Read the Instructions. Sound familiar?)

Amodei starts his essay with an important point: we do ourselves a disservice by just dwelling on the negatives and not keeping our eye on how radical the upside of AI could be. In this new essay Amodei, who was previously known as a doom and gloomer, tries to sketch out what a world with powerful AI might look like if everything goes right. Sam Altman has been good at this for years, but he lacks the gravitas, ethical reputation, and scientific knowledge that Amodei has. Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (10/4/24).

The positive visions of Dario, including what he calls a century of progress in a decade, are what motivate people to do the hard work to improve AI. That, and money, fame, and power, of course. But I reject the cynics who say that’s all it is – just a sales pitch to raise more money. As Amodei puts it at the start of his essay:

I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.

Progress is never made by a cynical focus on everything that can go wrong. If we just focus on the negatives, we will never be able to unlock the positive potential. We need to envision the amazing things that could happen and figure out how we can help make that future a reality. This means approaching AI with a balanced perspective, recognizing both the potential downsides and benefits, then working proactively to mitigate the risks while pursuing the benefits.

The future is not predetermined. It’s up to us to create it. And with AI, we have a tool that can either amplify our worst tendencies or help us achieve our greatest aspirations. It’s our choice which path we take.

Amodei’s Basic Framework and Assumptions


Amodei uses the term “Powerful AI” in his essay, preferring it over the commonly used Artificial General Intelligence (“AGI”). It is, in fact, quite similar to the AGI concept I’ve referenced in previous articles. Amodei predicts that Powerful AI could emerge as early as 2026, though it may take longer depending on various factors. He defines Powerful AI with six distinct characteristics (all quoted from his essay):

  1. In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc.
  2. It has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  3. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  4. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
  5. The resources used to train the model can be repurposed to run millions of instances of it. . . . and the model can absorb information and generate actions at roughly 10x-100x human speed . . .
  6. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate . . .

Amodei closes this impressive list of characteristics necessary for today’s AI to become Powerful AI, a/k/a an AGI, with a catchy phrase: we could summarize this as a “country of geniuses in a datacenter.” This sounds somewhat like the genie-in-a-bottle myth of Islamic folklore.

1. Biology and Physical Health

This is the area about which Dario Amodei, an AI biophysicist, is best equipped to make predictions. Amodei states: “Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life.” This assertion seems well-supported.

Now for the wonders he thinks could come to pass once AI reaches the Powerful AI level, which, remember, he says could arrive as early as 2026. He presents detailed and compelling arguments for the feasibility of these predictions and explains how AI will facilitate such breakthroughs. For a full exploration of these ideas, you should read his original essay, Machines of Loving Grace. His overall prediction regarding the rate of improvement that Powerful AI will bring is as follows:

To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.


Amodei predicts the following groundbreaking advancements in biology and physical health with the aid of Powerful AI. These predictions offer a hopeful glimpse into the future of medical science.

  1. Reliable prevention and treatment of nearly all natural infectious disease.
  2. Elimination of most cancer.
  3. Very effective prevention and effective cures for genetic disease.
  4. Prevention of Alzheimer’s.
  5. Improved treatment of most other ailments.
  6. Biological freedom (encompassing advancements in areas like birth control, fertility, weight management, appearance, and more). 
  7. Doubling of the human lifespan to about 150.

Amodei further elaborates on these predictions in biology and physical health, the first of five areas of advancement he foresees with the development of Powerful AI.

It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia.


Any team responsible for achieving even one of these seven breakthroughs would attain legendary status. The Nobel Prize Committee would have to start adding new categories.

2. Neuroscience and Mind

This is the area that pertains to mental health. See e.g. Loneliness Pandemic: Can Empathic AI Friendship Chatbots Be the Cure? (10/17/24). Remember, Dario Amodei specialized in computational neuroscience during his academic years, so he has a strong background in brain research. His explanations again sound very plausible, and are well beyond my ability to explain, so read his brilliant article, Machines of Loving Grace. Here is his list of wonderful accomplishments in this field that he believes are possible within 5-10 AI-accelerated years after Powerful AI is attained. (Prepare yourself to be happy.)

  1. Most mental illness can probably be cured.
  2. Conditions that are very “structural” may be more difficult to cure, but not impossible. This has to do with brain abnormalities thought to cause disorders such as psychopathy and some intellectual disabilities.
  3. Effective genetic prevention of mental illness seems possible.
  4. Everyday issues that are not traditionally seen as clinical diseases, such as quick temper, difficulty focusing, anxiety, or trouble adapting to change, may also be addressed.
  5. Human baseline experience can be much better. This has to do with expanding dimensions of human peak experiences (without drugs). As Dario puts it: “Many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace.” These moments can become more common and diverse.

I like Dario’s summary of this section; I suppose because I totally agree with it:

In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate many other societal problems, including ones that seem political or economic.

3. Economic Development and Poverty


This section addresses a crucial humanitarian question: will everyone have access to these technologies? It’s a point that Amodei, true to form, raises right at the outset. His goal for Powerful AI is to bridge the enormous economic gap between people and countries alike and make these new technologies available to all.

Dario approaches this goal with clear-eyed skepticism and an awareness of the many obstacles involved. He does not predict equality, but he does hope for substantial progress, saying: “A good goal might be for the developing world 5-10 years after powerful AI to at least be substantially healthier than the developed world is today, even if it continues to lag behind the developed world.”

Here are Amodei’s guesses (he does not call them predictions) about how things may go in the developing world over the 5-10 years after powerful AI is developed:

  • Distribution of health interventions.
  • Economic growth.
  • Food security.
  • Mitigating climate change.
  • Inequality within countries.
  • The opt-out problem.

He spells out many lofty goals here but does so in a realistic manner. Here is how he closes this important section.

It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.

4. Peace and Governance

Dario Amodei takes a very thoughtful approach to this all-important goal. Again, he is no naive optimist. He admits upfront a point cynics and the news media love to emphasize:

Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. 

Then Dario goes beyond the fear mentality into what has always been the heart of being human: an optimistic, self-reliant, can-do spirit. This is part of the culture I grew up in and try to pass on: God helps those who help themselves.

It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.

Amodei then goes on to state how he thinks we should go about doing this: what geopolitical strategy should now be used to protect everyone from the misuse of AI by foreign powers. The main threat here is the government of China, although Amodei does not mention it by name.

My current guess at the best way to do this is via an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.

If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. . . .

Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere.

Putting aside the question of the internal struggles many countries are now having about their continued adherence to democratic values, the U.S. included, the foreign policy of AI entente is currently followed by the U.S. government and most other democratic countries. This strategy is implemented through trade restrictions on China, chiefly export controls on advanced AI chips. I have mentioned this on my blog several times and agree with Amodei. See e.g. White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft (8/1/23); Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision; Also see: What you need to know about Nvidia and the AI chip arms race (Marketplace, 5/8/24).

A vocal minority disagrees with Amodei on this strategy, considering it overly aggressive. Max Tegmark, a well-known MIT professor, has already written an article arguing that the policy proposed in Machines of Loving Grace will trigger a “suicide” AI arms race between China and the U.S.

Wake up, Max Tegmark! That race started long ago, and your hope that China will be good and follow safety standards is dangerously naive.


Returning to Amodei’s discussion of the internal conflict between democracy and autocracy within countries, he offers a hopeful perspective—one that aligns with the sentiments I often express in my AI lectures:

It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons. . . .

I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.


We can only hope that America remains at the forefront of this fight, maintaining its leadership in pro-democracy policies.

Dario Amodei also makes a few comments on our legal system in this section. As usual, he starts with the popular dark side, AI bias, but correctly moves on to what legal technologists are already beginning to realize.

[T]he vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone. . . .

For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. . . . Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. . . . AI . . . is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way.  

I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real-world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making.

This is exactly the kind of thing that many people like me are working towards. It is indeed already viable, as experiments with AI and legal decision-making have shown. Improvements in AI intelligence and abilities are still needed, but the ultimate Powerful AI, a courthouse with a million legal geniuses, is not required to assist in most legal tasks, including judicial ones. See e.g. Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment (10/26/23); Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (6/3/24); Future Ralph as Herald of Coming Good about generative AI and the justice system (YouTube, 11/2/23); Prosecutors and AI: Navigating Justice in the Age of Algorithms (August 30, 2024); ChatGPT’s Surprising Ability to Split into Multiple Virtual Entities to Debate and Solve Legal Issues (June 30, 2024).
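For readers curious what this kind of human-in-the-loop judicial aid could look like in practice, here is a minimal sketch in Python. It is an illustration under stated assumptions only, not the setup used in the experiments cited above: it assumes the openai Python package is installed and an OPENAI_API_KEY is set in the environment, and the model name, prompt, and helper function are all hypothetical.

```python
# A minimal sketch of an AI decision-support aid for legal analysis.
# Assumptions (not from this article): the `openai` package is installed,
# OPENAI_API_KEY is set in the environment, and the model name is
# illustrative. The prompt and helper function are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_judicial_analysis(facts: str, issue: str) -> str:
    """Ask the model for a structured, neutral first-pass analysis.

    The AI only drafts; a human judge reviews, edits, and decides.
    """
    prompt = (
        "You are a neutral judicial clerk. Given the facts and legal issue "
        "below, draft a balanced analysis: (1) the strongest arguments for "
        "each side, (2) the governing legal standard, (3) a tentative "
        "conclusion, and (4) the points a human judge should scrutinize "
        "most closely before deciding.\n\n"
        f"Facts: {facts}\n\nIssue: {issue}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature for more repeatable judgments
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_judicial_analysis(
        facts="Tenant withheld rent after the landlord ignored repeated "
              "repair requests for a broken heating system.",
        issue="Does the implied warranty of habitability excuse nonpayment?",
    ))
```

The design choice matters more than the code: the model is asked for a structured draft plus a list of points deserving human scrutiny, so it works alongside the human decision-maker, as Amodei suggests, rather than replacing the judge.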

5. Work and Meaning

Amodei believes that finding meaningful work in the age of AI presents the greatest challenge. I respectfully disagree. In my view, curing cancer, eliminating other diseases, and extending human life to 150 years will be far more difficult than addressing the question of meaningful work. While Amodei acknowledges that making accurate predictions about changes in the job market is nearly impossible, he still ventures into this area—likely driven by widespread media fear-mongering over job losses, which often ignores historical trends. He begins by aligning with my perspective: that more jobs will likely be created than lost, with the real challenge lying in training and education.


Beyond this, he asserts that it is impossible to predict what new forms of economic systems may emerge. However, he emphasizes that “civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism.”

The deeper question he raises is based on sci-fi projections far into the future, when no one will need to work because AI can do everything for us. (Think of Iain M. Banks’ The Player of Games or Star Trek.) How can humans find meaning then? Amodei suggests that human meaning has never been solely derived from economic labor, and that we can find purpose in relationships, creativity, internal self-discovery, external exploration, and contributing to something larger than ourselves. He believes we can be happy in a world where we are free to pursue our passions and explore our full potential.

Conclusion

In the concluding section of Dario Amodei’s 28-page essay, Machines of Loving Grace, he explains:

I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.

He then lays a deep thought on us and invites us to ponder the profound impacts these predictions could have on everyone:

But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it. . . . a thing of transcendent beauty. We have the opportunity to play some small role in making it real.

The title of Dario Amodei’s essay, Machines of Loving Grace, was taken from a landmark poem called All Watched Over By Machines Of Loving Grace. It was written in 1967 by Richard Brautigan (1935-1984) while he was a poet-in-residence at the California Institute of Technology. Brautigan is best known for his novel Trout Fishing in America (1967) and was a key figure in the counterculture movement of the 1960s. So, relax, take a deep breath, and let’s end with his poem.


All Watched Over By Machines Of Loving Grace by Richard Brautigan.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Have you heard the podcast? Echoes of AI: Episode 6 | Dario Amodei’s Essay on AI, ‘Machines of Loving Grace,’ Is Like a Breath of Fresh Air

Ralph Losey Copyright 2024 — All Rights Reserved