Sam Altman’s 2024 Year End Essay: REFLECTIONS

January 22, 2025

by Ralph Losey. Published January 22, 2025.

In 2024, the dawn of the AI revolution shifted from science fiction to daily reality. Now, as we step into 2025, Sam Altman—the architect behind OpenAI’s rapid rise—reflects on this transformation in his essay, Reflections. His message is more than a recounting of OpenAI’s milestones; it is a call to action for all of us, including lawyers, to confront the realities of a world reshaped by artificial intelligence. How should we, as users, engage with this transformation? The way we answer this question will determine whether this new era is bright and promising—or dark and ominous.

Sam Altman started the year 2025 by writing an essay, Reflections, on his blog. It begins by celebrating the second birthday of ChatGPT. The essay “share(s) some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.” He then begins his analysis the way any good thinker should, by acknowledging all that is still unknown and remains to be understood. Socrates would have approved.

As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.

Then Sam Altman discusses the history of OpenAI, starting almost nine years ago “on the belief that AGI was possible, and that it could be the most impactful technology in human history.” At the time most everyone thought they were foolish dreamers with no chance of success.

In 2022, OpenAI was still a quiet research lab but had made remarkable progress known to only a few. That all changed on November 30, 2022, when ChatGPT was publicly launched. As everyone by now knows, that launch kicked off a growth curve like nothing ever seen before. Sam then says that two years later: “We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.” The sun of AI intelligence is now rising fast.

Then Sam talks about how hard it has been: building new things is hard, as is working with people, the world, and so on. He spends a lot of time lamenting his personal challenges of the past year, including getting fired by a renegade Board of Directors. One interesting point in his complaint session, and hey, we all have them, is his observation, which is a key point in my own writings:

There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

Now for me that is a perfect learning challenge because for decades I’ve been teaching myself new tech (and law) and by now prefer to learn by doing. But most people are not like me; they don’t enjoy hacking around with tech (and law), always having far more questions than answers. That is one reason I have been laboring on a prompt engineering course for legal professionals on generative AI, but I digress.

Finally Sam moves on to what he has to be grateful about in 2024: how OpenAI went from about 100 million weekly active users to more than 300 million, how the AI has gotten better, etc. He then affirms that OpenAI’s vision of AGI remains unchanged, but “our tactics will continue to evolve.” He then explains with examples the need for new tactics.

For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now. 

Pondering Unknown Future. Image by Ralph Losey.

Then he talks about his “release early, break and fix things” approach to AI safety, one that many people think is too risky, including the 2024 Nobel Prize winner Geoffrey Hinton, the so-called “Godfather of AI.” See e.g., Losey, Key AI Leaders of 2024: Huang, Amodei, Kurzweil, Altman, and Nobel Prize Winners – Hassabis and Hinton. Although Sam Altman talks about safety all the time, many people, not just Hinton, think it is all talk, that the core of his “safety philosophy” is to release it into the world and then learn from experience, including mistakes. I agree you do learn quickly that way, and such an approach is now favored by the likes of Elon Musk, but are the risks acceptable? Can we really afford to break things and try to fix them fast? Time will tell.

We do need the benefits of AGI to fix the overwhelming problems humanity now faces. Plus, Altman and others seem to be well aware of many of the risks. See The Intelligence Age (9/23/24 Altman blog on the positive vision of AI) and Who will control the future of AI? (7/25/24 Washington Post editorial by Altman on political risks). Still, I wonder whether the billionaires overestimate their ability to pivot out of the mistakes that quick decisions inevitably bring.

Altman concludes his Reflections essay with a forward-looking vision, expressing confidence that OpenAI is on track to build AGI. As a key milestone, he outlines the development of advanced AI “agents” as the immediate focus for 2025—a crucial interim step toward his ultimate goal. Yet, it’s clear that Altman’s real aspiration lies beyond AGI, toward the creation of superintelligence, a phase he boldly calls the glorious future. He believes we’ll get there sooner than most expect. Personally, I remain skeptical. Bold visions are inspiring, but history teaches us that progress, especially in AI, is rarely linear or predictable.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

Conclusion

It will take a lot of work by everyone, lawyers included, for a glorious future to emerge. An awful future could just as easily emerge, one where we fail to overcome the many AI dangers ahead. A world where we make the wrong choices along the way to the ever-promised paradise. Let’s hope the Universe you and I end up in is better than what we have today, far better. See Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (JD Supra, 1/7/25).

For lawyers, this challenge is both daunting and exhilarating. We are the architects of legal frameworks, the stewards of ethical boundaries, and the guardians of societal trust. Obviously AI wrote that last sentence, which I left in for laughs. Sounds great, but the reality is we are also the hard-working saps that have to clean up the messes that others make.

Beyond triage and twisted torts, we also face the monumental task of keeping the law relevant and effective amid the dawn’s early light of emergent AI. We are forced to confront fundamental questions: What does personhood mean in an age of machines? How do we balance innovation with accountability? And who will bear the burden of mistakes made by entities far beyond human comprehension?

The answers to these questions will not emerge from theory alone but through practice and perseverance. Like Altman’s own philosophy of learning by doing, the legal profession must embrace AI as both a challenge to be met and a tool to be mastered. Lawyers must be ready to ask hard questions, propose daring, creative solutions, and confront uncomfortable truths about the role of the legal profession.

Ultimately, whether we arrive at a better world will depend not on technology alone but on humanity’s ability to guide it with wisdom, compassion, and justice. Lawyers must help steer that course. The future is neither inevitable nor fixed. It is, as always, a series of choices.

Let us ponder deliberately, consider all of the facts, all of the equities, and do our best to make the right decisions.

Now listen to the EDRM Echoes of AI podcast of the article, Echoes of AI on Sam Altman’s 2024 Year End Essay: REFLECTIONS. Hear two Gemini model AIs talk about this article, plus responses to audience questions. They wrote the podcast, not Ralph. In this podcast the two AIs respond to three questions from actual humans: Mary Mack, Kaylee Walstad and Holley Robinson.

COPYRIGHT Ralph Losey 2025.  All Rights Reserved.


Key AI Leaders of 2024: Huang, Amodei, Kurzweil, Altman, and Nobel Prize Winners – Hassabis and Hinton

January 15, 2025

Written by Ralph Losey and published on January 15, 2025.

Introduction

The year 2024 marked a pivotal time in artificial intelligence, where aspirations transformed into concrete achievements, cautions became more urgent, and the distinction between human and machine began to blur. Many visionary leaders led the way this exponentially loaded year, but here are my top six picks: Jensen Huang, Dario Amodei, Ray Kurzweil, Sam Altman, and the Nobel Prize Winners – Geoffrey Hinton and Demis Hassabis. These people propelled AI’s development at a dizzying pace, speaking and writing of both the benefits and dangers; notably, all but one remain optimistic.

Huang built the supercomputers powering AI systems, while Amodei shifted from AI skeptic to hopeful visionary. Kurzweil reaffirmed his bold, positive predictions of human-machine singularity, while Altman laid out a democratic path for global AI governance. Hinton and Hassabis, two of the four Nobel Prize winners in AI in 2024, revolutionized fields ranging from neural networks to molecular biology. Everyone was surprised when Geoffrey Hinton left Google in 2023 to yell fire about the AI child he, more than anyone, brought into this world. No one was surprised when he received the Nobel Prize at the end of 2024, near the end of his long career.

The combined work of these six reminds us of AI’s dual nature—its profound ability to benefit humanity while creating new risks requiring constant vigilance. For the legal profession, 2024 underscored the need for proactive leadership, constant education and collaboration with AI to help guide its responsible development.

Jensen Huang: Architect of AI’s Future

Jensen Huang, the CEO and co-founder of NVIDIA, continues to define what’s possible in AI by delivering the computational power to fuel its rise. Huang’s journey began humbly: a young boy from Taiwan accidentally sent to a Kentucky reform school, later waiting tables at Denny’s to make ends meet. These formative experiences shaped his relentless drive and focus, qualities mirrored in NVIDIA’s culture of intensity, innovation and the ever-present whiteboard. For in-depth background on the life of Jensen see my article, Jensen Huang’s Life and Company – NVIDIA: building supercomputers today for tomorrow’s AI, his prediction of AGI by 2028 and his thoughts on AI safety, prosperity and new jobs. Also see Tae Kim, The Nvidia Way: Jensen Huang and the Making of a Tech Giant (Norton, 12/10/24) (highly recommended).

In 2024, Huang and NVIDIA solidified their role as the backbone of AI progress. Their latest generation of AI supercomputers—capable of unprecedented processing power—drove breakthroughs in generative AI, scientific research, and medical discovery.

That is the prime reason that the stock price of NVIDIA in 2024 increased from approximately $48 per share to $137, a gain of over 180% for the year. The 2024 year-end market capitalization of NVIDIA was over $3.4 trillion, second only to Apple.

In early July 2024 at the World AI Forum in Shanghai, Huang made waves by predicting Artificial General Intelligence (AGI)—machines as smart as humans across all tasks—would be a reality by 2028. Huang framed AGI as inevitable, driven by exponential progress in hardware and algorithms: “We’re not just accelerating AI. We’re accelerating intelligence itself.” His foresight in pivoting NVIDIA from gaming GPUs to AI infrastructure years ago is a testament to his long-term vision.

Huang remains pragmatic about the risks, although still very optimistic. He advocates for AI safety frameworks modeled on aviation: “No one stopped airplanes because they could crash. We made them safer. AI is no different.” I can’t say I agree there is “no difference” because the dangers of AI are far more severe and AI is already in use by hundreds of millions of people.

Jensen Huang sees AI creating prosperity by automating repetitive jobs while generating new industries that didn’t exist before. In Huang’s eyes, the legal profession—often burdened by time-consuming tasks—stands to benefit enormously.

Dario Amodei’s Surprising Optimism in 2024

Dario Amodei is a prominent figure in the field of artificial intelligence. He has a Ph.D. in biophysics from Princeton and conducted postdoctoral research at Stanford, focusing on the study of the brain. He then joined OpenAI, a leading AI research lab, where he specialized in AI safety. In 2021, he co-founded Anthropic, the company behind the AI assistant Claude. Amodei was known for his cautious stance on AI and his warnings about potential dangers, including the risks posed by superintelligent AI.

In October 2024 Amodei’s public stance took a surprising turn. He released his 28-page essay “Machines of Loving Grace.” The title is inspired by a poem by Richard Brautigan, which envisions a harmonious coexistence between humans, nature, and technology. Amodei’s work aims to guide us towards a future where AI lives up to that ideal—a future where AI enhances our lives, empowers us to solve our most pressing problems, and helps us create a more just and equitable world. For in-depth background see my article, Dario Amodei’s Vision: A Hopeful Future ‘Through AI’s Loving Grace,’ Is Like a Breath of Fresh Air.

Amodei outlined his surprisingly optimistic vision focusing on five key areas where AI could revolutionize our world:

1. Biology and Physical Health: Amodei, drawing on his expertise in biophysics, believes AI could accelerate medical breakthroughs, leading to the prevention and treatment of infectious diseases, the elimination of most cancers, cures for genetic diseases, and even the extension of the human lifespan.

2. Neuroscience and Mental Health: Amodei sees AI as a powerful tool for understanding the brain and revolutionizing mental health care, potentially leading to cures for conditions like depression, anxiety, and PTSD.

3. Economic Development and Poverty: Amodei is a strong advocate for equitable access to AI technologies and believes AI can be used to alleviate poverty, improve food security, and promote economic development, particularly in the developing world.

4. Peace and Governance: While acknowledging the potential for AI to be misused in warfare and surveillance, Amodei also sees its potential to strengthen democracy, promote transparency in government, and facilitate conflict resolution. Amodei’s essay includes a discussion of the internal conflict between democracy and autocracy within countries. He offers a hopeful perspective—one that aligns with the sentiments I often express in my AI lectures:

We all hope that America remains at the forefront of this fight, maintaining its leadership in pro-democratic policies.

5. Work and Meaning: Amodei recognizes concerns about AI-driven job displacement but believes AI will also create new jobs and industries. He envisions a future where people are free to pursue their passions and find meaning outside of traditional work structures.

Amodei has not become blind to the challenges and risks associated with AI. He still states that realizing his optimistic vision requires careful and responsible development. Amodei stresses the importance of ethical guidelines and international agreements to prevent the misuse of AI in areas like autonomous weapons systems. He also advocates for ensuring equal access to AI benefits to avoid exacerbating existing inequalities and calls for proactive efforts to mitigate AI bias and ensure fairness in AI applications.

Amodei’s combination of scientific knowledge, industry experience, and nuanced understanding of both the potential and the perils of AI makes him a crucial voice in shaping the future of this technology. His advocacy for responsible AI development, ethical considerations, and equitable access is essential for ensuring that AI benefits all of humanity.

His new, much more optimistic vision shared in 2024 is an inspiration and a call to action for researchers, policymakers, and the public to work towards creating a future where AI is a force for good.

Ray Kurzweil: The Singularity Is Nearer

Ray Kurzweil’s name remains synonymous with bold predictions, and 2024 proved no exception. According to Bill Gates: “Ray Kurzweil is the best person I know at predicting the future of AI.” I was quick to read and write about his new book of predictions, The Singularity is Nearer (when we merge with AI) (Viking, June 25, 2024). It is a long-awaited sequel to his 2005 book, The Singularity is Near, but stands on its own. Ray does not change his original prediction times – AGI by 2029. That’s just four years from now. And Kurzweil still predicts The Singularity will arrive in 2045. He thinks it will be great, but I’m not so sure.

Ray Kurzweil’s 2024 statement on his book captures the essence of his hopeful message that The Singularity will arrive in 2045.

“The robots will take over,” the movies tell us. I don’t see it that way. Computers aren’t in competition with us. They’re an extension of us, accompanying us on our journey. . . . By 2045 we will have taken the next step in our evolution. Imagine the creativity of every person on the planet linked to the speed and dexterity of the fastest computer. It will unlock a world of limitless wisdom and potential. This is The Singularity.

Kurzweil himself acknowledges the many difficulties ahead to reach AGI level by 2029. He thinks that generative AI models like Gemini (Kurzweil works for Google) and ChatGPT are still being held back from attaining AGI by: (1) contextual memory limitations (too small a memory, causing GPT to forget earlier input; token limits); (2) common sense (lacking a robust model of how the real world works); and (3) social interaction (lacking social nuances not well represented in text, such as tone of voice and humor). The Singularity is Nearer at pages 54-58. Kurzweil thinks these will be simple to overcome by 2029. I have been working on the humor issue myself and am inclined to agree.

Kurzweil also points to the problem with AI hallucinations. The Singularity is Nearer at page 65. This is something we lawyers worry about a lot too. “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct (Florida Bar approved CLE). I personally see great progress being made in this area now by software improvements and by more careful user prompting. See my OMNI Version – ChatGPT4o – Retest of the Panel of AI Experts – Part Three.

As usual, whether or not we attain AGI by 2029 depends on what you mean by AGI and how you define the term. The same applies to the famous Turing test of intelligence. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’ (e-Discovery Team, 2/26/24).

To Kurzweil AGI means attaining the highest human level in all fields of knowledge, which is not something that any single human has ever achieved. The AI would be as smart as the top experts in every specialized field: law, medicine, the liberal arts, science, engineering, mathematics, and software and AI design and programming. The Singularity is Nearer at pages 63-69 (Turing Test).

When an AI can do that, match or exceed the best of the best humans in all fields, then Kurzweil thinks AGI will have been attained, not before. That is a very stringent definition-test of AGI. It explains why Kurzweil is sticking with his original 2029 prediction and not moving it up to an earlier time of 2025 or 2026 as Elon Musk and others have done. Their quicker prediction might only work if you compare it to the above average human in every field, not the Einsteins on the peaks.

Further, Kurzweil predicts that once an AI is as smart as the most expert humans in all fields, the AGI level, then incremental increases in AI intelligence will appear to end. Instead, there will appear to be a “sudden explosion of knowledge.” AI intelligence will jump to a “profoundly superhuman” level. The Singularity is Nearer at page 69. Then the next level, The Singularity, will become attainable as our knowledge increases unimaginably fast. Many great new inventions and accomplishments will occur in short order as AGI-level AI and its human handlers rush towards the Singularity.

Kurzweil defines The Singularity as a hybrid state of human AI merger, where human minds unite with the “profoundly superhuman” level of AI minds. The unification sought requires the human neocortex to extend into the cloud. This presents great technical and cultural hurdles that will have to be overcome.

Kurzweil’s predictions raise profound legal and ethical questions. Who governs AI when its decisions exceed human comprehension? Will you have to be enhanced to practice law, to judge, to create AI policies and laws? What happens when AI rewrites concepts of identity, privacy, and accountability?

Sam Altman: Champion of Democratic AI

Sam Altman, the CEO of OpenAI, in addition to leading his company, wrote two important pieces in 2024 and promoted their ideas: the Washington Post editorial Who will control the future of AI? (7/25/24) and the blog essay The Intelligence Age (9/23/24).

Sam Altman is overall very optimistic, but he does consider both the dark and light side of AI. He can hardly ignore the dark side as he is asked questions about it every day. In the July editorial he states, I think correctly, that we are at a crossroads:

 about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman of course advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse. In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

To attain these ends Altman advocates for a global governance body—akin to the International Atomic Energy Agency—to establish norms and prevent misuse. His vision prioritizes AI for global challenges: climate change, pandemics, and economic inequality.

In Sam Altman’s next essay, The Intelligence Age, he presents the positive view. He predicts the exponential growth we’ve seen in generative AI will continue, unlocking astonishing advancements in science, society, and beyond. According to Altman, AI-driven breakthroughs could soon lead us into a virtual utopia—one where the possibilities seem limitless. While this optimistic vision may come across as a sales pitch, to his credit, there’s more depth to it. Altman’s predictions are rooted in science and insider knowledge, which means his ideas should be taken seriously—but with a healthy dose of skepticism. See my article, Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision.

Sam’s The Intelligence Age essay has a personal touch and some poetic qualities. I especially like the reference at the start of this quote to silicon as melted sand.

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

I agree AI is the most important invention in human history. Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time. To keep it real, however, all of the past technology shifts were buffered by incremental growth. AI’s exponential acceleration may very well overwhelm our existing systems. We need to try to manage this transition and to do that we need to consider the problems that come with it. See Seven Problems of AI: an incomplete list with risk avoidance strategies and help from “The Dude” (8/6/24). To be fair to Sam, his prior editorial did address some of the dark side but not all.

Sam Altman’s essay, The Intelligence Age, offers a compelling vision of how AI can dramatically improve life in the coming decades. However, as captivating as Altman’s optimism is, we must balance it with a dose of realism. There is more to it than the political struggle between democracy and dictatorship discussed in his earlier editorial. The road ahead is incredible but filled with many hurdles that cannot be ignored.

Geoffrey Hinton and Demis Hassabis: Nobel Laureates of AI

2024 brought well-deserved recognition to Geoffrey E. Hinton and Demis Hassabis, as both received Nobel Prizes for their contributions to science. This is also a big deal for the field of AI itself, as no Nobel Prizes had been awarded for AI work prior to the four given this year, two in Physics and two in Chemistry.

Geoffrey Hinton: Godfather of AI

Who is Geoffrey E. Hinton? Hinton, often referred to as the “godfather of AI,” was awarded the 2024 Nobel Prize in Physics alongside John Hopfield for their foundational work on artificial neural networks. Hinton’s key contribution was the development of the backpropagation algorithm in the 1980s, the significance of which went unrecognized for almost thirty years. But this new approach to AI revolutionized machine learning by allowing computers to “learn” through a process of identifying and correcting errors. This breakthrough paved the way for the development of powerful AI systems like the large language models we have today.
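
To show the core idea in a few lines, here is a minimal sketch of backpropagation that I wrote purely for illustration; it is not Hinton’s code, and the tiny XOR task, network size, and learning rate are my own assumptions. The network makes a prediction, measures its error, and pushes corrections backward through its weights, which is the “identifying and correcting errors” loop described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# The XOR problem: a tiny task a single-layer network cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: measure the output error and propagate it back,
    # layer by layer, to see how much each weight contributed to it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight against its error gradient.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # typically converges toward [[0], [1], [1], [0]]
```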

Geoffrey Hinton is a British-Canadian born in 1947 in Wimbledon, London, England. He comes from an academically very accomplished family. His great-great-grandfather was George Boole, the mathematician and logician who developed Boolean algebra, and his great-grandfather was Charles Howard Hinton, a mathematician and inventor. His parents were involved in science and social work. Geoffrey studied experimental psychology at King’s College, Cambridge, earning a Bachelor of Arts in 1970. Next, he earned a Ph.D. in Artificial Intelligence at the University of Edinburgh in 1978.

Hinton then moved to the United States to work at institutions such as the University of California, San Diego (1978–1980), then back to the U.K. to work at the University of Sussex (1980–1982), then back to the U.S. to Carnegie Mellon University (1982–1987), finally ending in Canada at the University of Toronto (1987–2013, and since then as an Emeritus Professor).

His early work was focused on neural networks, a concept that was largely dismissed by the AI community during the 1970s and 1980s in favor of symbolic AI approaches. Hinton’s development of the Boltzmann Machine in 1983 while at Carnegie Mellon was largely ignored. It was a stochastic neural network capable of learning internal representations and solving complex combinatorial problems, work which later led to the Nobel Prize. Despite being ignored for decades, Geoffrey Hinton stubbornly persevered. It turns out he was right and almost everyone else in the AI establishment was wrong (although many to this day still refuse to admit it and still hold generative AI in low regard). Professor Hinton eventually flourished at the University of Toronto, where he established himself as a global leader in neural networks and deep learning research and supervised a number of later influential researchers, including Yann LeCun (Meta) and Ilya Sutskever (OpenAI).

In 2012 Hinton co-founded DNNResearch Inc., a startup focused on deep learning algorithms, with two of his students, Alex Krizhevsky and Ilya Sutskever. The company played a key role in developing AlexNet, which won the 2012 ImageNet Challenge and demonstrated the power of deep convolutional neural networks. In 2013, DNNResearch was acquired by Google, and Hinton joined Google Brain as a Distinguished Research Scientist. At Google, Hinton worked on large-scale neural networks, deep learning innovations, and their applications in AI.

In May 2023 Geoffrey Hinton shocked the world by resigning from Google so that he could freely warn the world about the scary fast advancement of generative AI and its potential risks.

Image of fear and regret by Ralph Losey using Visual Muse
Hinton’s warnings and observations include:
  • AI could become more intelligent than humans. Hinton now says that AI could become so intelligent that it could take control and eliminate human civilization. He has also said that AI could become more efficient at cognitive tasks than humans: “I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”
  • AI could be used for disinformation. Hinton has warned that AI could be used to create and spread disinformation campaigns that could interfere with elections.
  • AI could make the rich richer and the poor poorer. Hinton has said that AI could take many jobs, which could make the rich richer and the poor poorer.
  • AI could make it impossible to know what’s true. Hinton has said that AI could make it impossible to know what is true because there will be so many fakes.
  • Government Regulations. Hinton has said that he thinks politicians need to put equal time and money into developing guardrails. Further, “I think we should be very careful about how we build these things.” “There’s no guaranteed path to safety as artificial intelligence advances.”
  • Potential for AI Misuse: “It is hard to see how you can prevent the bad actors from using it for bad things.” “Any new technology, if it’s used by evil people, bad things can happen.”
  • Smarter Sooner Than Expected: “The idea that this stuff could actually get smarter than people—a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • The Godfather’s Guilt: “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Although much of the AI world was shocked and surprised by Geoffrey E. Hinton’s seemingly excessive dark fears, they were not surprised to see the Godfather of AI win the Nobel Prize. We will wait and see how his concerns about AI change in the future. As a Nobel Prize winner his opinion will be taken more seriously than ever.

Demis Hassabis: Gamer and Future King of AI?

Who is Demis Hassabis? He is the co-founder of DeepMind who many think represents the best of the next generation of AI scientist leaders. In 2024 he was awarded the Nobel Prize in Chemistry, alongside his colleague at DeepMind, John Jumper, for developing AlphaFold2.

Their award was no surprise in the AI community. Hassabis has made no secret of his desire for DeepMind to win this award, and he expects to receive many more in the coming years. The Nobel Prize Committee recognized that the AlphaFold2 AI algorithm’s ability to accurately predict protein structures was a groundbreaking achievement. It has far-reaching implications for medicine and other scientific research.

Unlike Professor Hinton, Demis Hassabis still works for Google, actually the holding company Alphabet, in London, and is not a doom and gloom type. He is a gamer, not a professor, and he is used to competition and risks. Although also deeply involved in and concerned about AI ethics, Hassabis and his teams at DeepMind are working aggressively on many more scientific breakthroughs and inventions. Unlike the older pioneer Hinton, Hassabis, about thirty years his junior, has not yet even reached his prime. Some speculate he may not only receive more Nobel Prizes, but may also someday lead Alphabet itself, not just its subsidiary, DeepMind.

Demis Hassabis’ life story is very interesting and explains his interdisciplinary approach. He was born in 1976 in London to a family that combined Greek Cypriot and Chinese Singaporean heritage. Young Demis was quickly recognized as a child prodigy at age four for his skill at chess. He attained the rank of Chess Master at age 13. He is used to competing for awards and winning. Chess also instilled in him a deep interest in strategy, problem-solving, and human cognition.

That led to playing complex computer games and continued winning. By age 17 he was designing video games, including co-designing the highly regarded video game Theme Park (1994, Atari and PC). This was a simulation game where players built different types of theme parks. It sparked a whole genre of management simulation games revolving around similar ideas.

Demis, who obviously enjoyed roller coaster rides and theme parks, created this groundbreaking video game while working at the UK game company Bullfrog Productions during a gap year before university, having passed his final school exams two years early at around age 16. He then studied computer science at Cambridge and, after graduating, started his own video game company, Elixir Studios. This allowed him to combine his love for games and AI to create political simulation games. Elixir released two games: Republic: The Revolution (2003, PC and Mac) (a simulation where you rise to power in a former Soviet state using political and coercive means) and Evil Genius (2004, PC) (a comedic take on spy fiction). Elixir started as a winning enterprise but failed in 2005 in a video game downturn. Rumor has it they were working at the time on a video game version of The Matrix. Can you imagine the irony if Hassabis had completed it?

The still-young genius Demis Hassabis then shifted his interests to the scientific study of human intelligence and enrolled in the Neuroscience Ph.D. program at University College London. Due to his strong hands-on AI computational background, and his incredible genius, his academic work immediately impressed the professors at MIT and Harvard, where he also studied. Hassabis explored the brain neurology of how humans reconstruct past experiences to imagine the future. He was looking for inspiration in the human brain for new types of AI algorithms.

The research by Hassabis on the hippocampus and its role in memory and imagination was published in leading journals like Nature, even before he earned his Ph.D. in Cognitive Neuroscience. His very first academic work, Patients with hippocampal amnesia cannot imagine new experiences (PNAS, 1/30/2007), became a landmark paper that showed systematically, for the first time, that patients with damage to their hippocampus, known to cause amnesia, were also unable to imagine themselves in new experiences.

Soon after completing his studies, Demis Hassabis, already an AI legend, co-founded DeepMind with Shane Legg and Mustafa Suleyman in 2010. The purpose of DeepMind was to build artificial general intelligence (AGI) capable of solving complex problems. They planned to do so by combining insights from neuroscience, mathematics, and computer science into AI research. Only four years later DeepMind was considered the best AI lab in the UK and was acquired by Google in 2014 for £400 million. Google at the time made promises that DeepMind would retain its autonomy; after years of negotiations this never really panned out, but they did get the money they needed to retain top scientists and hire more. DeepMind then developed AlphaGo, which defeated world champion Lee Sedol in 2016. DeepMind next created AlphaZero in 2018, called the “One program to rule them all.” As the abstract to the paper announcing the discovery stated:

A single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

David Silver, Demis Hassabis, et al., A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play (Science, 12/07/18). The incredible fact is that AlphaZero attained superhuman performance without human input, and did so in multiple games, not just chess.

In 2018 DeepMind went beyond games to invent the AlphaFold AI-based software. In 2020 DeepMind released AlphaFold2, which achieved groundbreaking results at CASP14, solving the protein folding problem that had challenged scientists for decades. This breakthrough second version of AlphaFold was created by DeepMind under the leadership, both management and scientific, of Demis Hassabis, together with the lead scientist on this project, DeepMind’s John Jumper. See Hassabis, Jumper, et al., Highly accurate protein structure prediction with AlphaFold (Nature, 7/15/21) (the AlphaFold2 paper by DeepMind scientists has been cited more than 27 thousand times, even before the Nobel Prize award to Hassabis and Jumper for this discovery).

AlphaFold2 utilizes transformer neural networks, a type of AI architecture, and was trained on vast datasets of known protein structures. The algorithm’s ability to accurately predict protein structures significantly accelerates scientific discovery in various fields, including drug development, disease research, and bioengineering. This protein folding achievement led to Hassabis and Jumper sharing the Nobel Prize in Chemistry in 2024.

Demis Hassabis remains the CEO of Google DeepMind, which is now a subsidiary of Alphabet Inc. According to the excellent, well-researched new book Supremacy: AI, ChatGPT, and the Race that Will Change the World by Parmy Olson, which is about Demis Hassabis and Sam Altman, Hassabis was satisfied with his position at Alphabet, even though Google DeepMind never received its once-promised complete autonomy. According to Olson, Hassabis is considered by many a likely successor to Sundar Pichai. The book was completed a few months before the Nobel Prize announcement in late 2024.

I personally doubt Demis Hassabis would still want that position now that he has tasted Nobel Prize victory at age 48. He has always seemed more focused on winning than on money, and only five people in history have won two Nobel Prizes. Marie Curie was the first person to win two, one in Physics in 1903 and another in Chemistry in 1911. John Bardeen won the Nobel Prize in Physics twice, in 1956 for the invention of the transistor and in 1972 for the theory of superconductivity. Frederick Sanger and Karl Barry Sharpless both won the Nobel Prize in Chemistry twice. The incredible Linus Pauling, an American, won a Nobel Prize in Chemistry in 1954 and, get this, the Nobel Peace Prize in 1962 for his activism against nuclear weapons.

Linus Pauling (1901-1994) is the only person to win two unshared Nobel Prizes. The rest of the double winners shared credit with others, as Demis Hassabis did when winning his first Nobel Prize. I bet Demis goes for another Nobel on his own, or a triple that is shared. Any gamer would try to win, and a true genius like Demis Hassabis, unlike other alleged genius wannabes, knows the stupidity of competing for mere money and power.

Conclusion: A Call to Action

The events of 2024 have shown the incredible potential of AI and how critical the next few years may be. What if ex-Google Geoffrey Hinton is right and current Googlers Ray Kurzweil and Demis Hassabis are wrong? No one knows for sure. Maybe everything will turn out fine, and maybe we will be in crisis mode for decades. Maybe we will need more lawyers than ever before, maybe far fewer. It seems to me the probabilities favor more, because private enterprise and governments need lawyers to enact and follow rules and regulations on AI. Lawyers will likely have to work in teams with AI experts and policy experts to ensure that AI systems align with fairness, ethics, and human values. If so, and if we do our jobs right, maybe Demis Hassabis will win three golds.

I am pretty sure Demis will rise to the challenge, but will we? Will we be able to keep up with the technology leaders of AI? Most lawyers are very competitive and so I say yes we can. We can play that game and we can win.

Legal professionals can help humanity to navigate the difficult times ahead. Lawyers are uniquely qualified by training and temperament to provide balanced advice. We know there are always two sides to a story. We know there are always dark forces and dangers about. We know we must both think ahead and plan and stay current on the facts at hand to adjust to the unexpected. Life is far more complicated than any games of Chess or Go. We will have to work hard to keep our hand in the game for as long as we can. Perhaps with AI’s help we can all win the game together.

Now listen to the EDRM Echoes of AI podcast of the article, Echoes of AI on the Key AI Leaders of 2024. Hear two Gemini model AIs talk about this article, plus responses to audience questions. They wrote the podcast, not Ralph.

Ralph Losey Copyright 2025. All Rights Reserved.


Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse

January 9, 2025

by Ralph Losey. Published January 9, 2025.

In the history of technological revolutions, there are moments that challenge not only our understanding of what is possible but the very nature of reality itself. Google’s latest refinement to its quantum computer, Willow, may represent such a moment. By achieving computational feats once thought to be confined to science fiction, it forces us to confront bizarre new theories about the fabric of the universe. Could this machine, built from the smallest known building blocks of matter, actually provide evidence that parallel universes exist as some at Google claim? The implications are as profound as they are unsettling.

Introduction

This article discusses Google’s quantum computer, Willow, and the groundbreaking evidence released on December 9, 2024. Willow demonstrated it could perform computations so complex that they would take classical computers longer than the age of the universe to complete. Many, including Hartmut Neven, founder and manager of Google’s Quantum Artificial Intelligence Lab, believe that the unprecedented speed of the quantum computer is only possible because it leverages computations across parallel universes. Google’s recent advancements in real-time error correction, achieved by scaling up ever-larger arrays of qubits, made it possible for these parallel universes to “work” in our own reality. Google claims to be the first to overcome the main hurdle previously facing the practical use of quantum computers: the immense sensitivity of quantum systems to external disturbances like stray particles and vibrations, which researchers call noise.

Neven and his team suggest the best way to understand how their computer works is the many-worlds interpretation of quantum mechanics—the multiverse theory. This theory posits that every quantum event splits the universe, leading to a near infinite array of universes. In a TED Talk five months ago, well before Willow’s latest proof of concept and design, Neven described its remarkable quantum capacities and how they align with this theory. He even speculated that consciousness itself might arise from the interaction of infinite multiverses converging into a single neurological form. These are not just bold claims—they are paradigm-shifting ideas that challenge our deepest assumptions about existence.

Crazy, you say? The manager of Google’s Quantum Artificial Intelligence Lab speaking about tiny traversable wormholes, time crystals and quality-controlled computations in multiple universes! Even talking seriously about quantum computers “allowing us to expand human consciousness in space, time and complexity.”

Maybe hard to believe but paradigm shifting ideas are often at first dismissed and ridiculed as crazy. Consider the trial of Galileo in 1633 for heresy. Despite Galileo’s eloquent defense arguments that the Earth revolves around the Sun, he was convicted of heresy and spent the rest of his life, eight years, under house arrest. The final judgment rendered also banned him from all further “Ted Talks” of his day about the crazy idea, which obviously defies common sense, “that the sun is the center of the world, and that it does not move from east to west, and that the earth does move, and is not the center of the world.” The judgment by the Catholic Church was not reversed until 1992! Quantum computing, like Galileo’s heliocentric model, challenges us to see beyond what seems obvious and to embrace ideas that defy conventional understanding.

This article explores the quantum parallel universes controversy, which is currently sparking debates across physics, philosophy, and even metaphysics. We’ll examine the topic in a straightforward yet accurate manner, accessible to both experts and curious newcomers. Fasten your seatbelts—today’s scientific theories are as intellectually jarring as Galileo’s were in 1633, when the movement of the Sun across the sky seemed an unshakable truth. As then, we are called to rethink not just how we understand the universe, but our place within it.

To grasp the implications of quantum computing, we must first explore its roots in the fundamental fabric of reality. What happens when exponentially greater possibilities are computed in parallel? What happens when this is applied to generative AI? Will AI deliver answers that are more profound, or entirely transformational? Perhaps, as imagined in my short story, Singularity Advocate Series #1:  AI with a Mind of Its Own, On Trial for its Life, these advancements could even lead to AI consciousness. The possibilities are as exhilarating as they are unsettling.

Quantum Computing is Now Doing the Impossible

The multiverse controversy gained new momentum with Google’s claim that its quantum computer, Willow, recently completed a famous benchmark computation, the Random Circuit Sampling (RCS) test, in just five minutes. This achievement is staggering because this theoretical task would take the fastest classical supercomputers an estimated 10 septillion years (10 followed by 24 zeros) to finish! To put that in perspective, the Universe itself is approximately 13.8 billion years old—meaning 10 septillion years is roughly 700 trillion times the age of the Universe. The sheer scale of this comparison defies imagination.
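
For scale, here is the back-of-the-envelope division behind that comparison (my own arithmetic, not Google’s):

$$\frac{10^{25}\ \text{years}}{1.38\times 10^{10}\ \text{years}} \approx 7.2\times 10^{14},$$

that is, on the order of 700 trillion universe-lifetimes.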

How can such an extraordinary feat be possible? The answer lies in the fundamental principles of quantum computing and its use of qubits. Unlike classical bits, which are confined to being either 0 or 1, qubits exist in a superposition state that is a probabilistic blend of both 0 and 1 simultaneously, until measured. To put it simply, qubits are neither strictly here nor there, neither fully 0 nor fully 1, but somewhere in between. Google’s qubits require superconductivity and can only work in the coldest places in our universe, the artificially constructed refrigerated chambers that hold the qubits. Go inside the Google Quantum AI lab to learn how quantum computing works, video at 3:30-4:30 of 6:17. The qubits are measured, and made to collapse from their superposed zero-and-one state, by use of tuned microwaves.

This seemingly impossible property of holding both a zero and a one at once is called superposition. Qubits, governed by the principles of quantum mechanics, behave both as particles and waves depending on the conditions. This wave-like nature underpins phenomena like superposition and entanglement. Entangled particles are linked so that the measurement of one instantly determines the state of the other, no matter the distance between them. (To me and others, this reliance on human measurements to explain a theory is misplaced; see “Measurement Problem,” Wikipedia.) The instant changes supposedly caused by a measurement also seemingly violate the limitations of time and space and the Speed of Light. At first, this phenomenon—called quantum entanglement—was met with skepticism, famously dismissed by Albert Einstein as “spooky action at a distance.” Yet, like Galileo’s once-ridiculed theories, the fact of quantum entanglement has been repeatedly validated through rigorous experimentation, although no one really knows how it works.
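
For readers who like to see the mechanics, here is a minimal sketch, written by me purely for illustration (it is not Google’s code), of how a single qubit’s superposition is conventionally modeled: a two-entry complex state vector whose squared amplitudes give the odds of measuring 0 or 1, with measurement collapsing it to a definite bit. It assumes only Python and NumPy.

```python
import numpy as np

rng = np.random.default_rng(42)

# A qubit starts in |0>, represented as the amplitude vector (1, 0).
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0                       # amplitudes ~ (0.707, 0.707)

# Born rule: measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(state) ** 2             # [0.5, 0.5]

# "Measuring" 1,000 identically prepared qubits collapses each to 0 or 1.
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(probs)                           # [0.5 0.5]
print(np.bincount(outcomes))           # roughly 500 zeros and 500 ones
```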

The Speed of Light (SOL) is supposedly not violated by quantum entanglement because the states are random and probabilistic, and supposedly nothing actually “travels” from one qubit or elementary particle to another. This is the establishment view, which treats the SOL as an inviolable limit in order to uphold the general theory of relativity. It has never been totally convincing to some scientists, who contend the SOL is not an inviolate limit. If these antiestablishment scientists are correct, then space travel at faster-than-light velocities might be possible. That means our physical isolation from other star systems could be overcome.

This is possible under the parallel universes theory, which also goes under the name of the Many-Worlds Interpretation (MWI). The idea was first set forth by Hugh Everett in 1957 in his dissertation “The Theory of the Universal Wavefunction.” Scientists arguing for the Many-Worlds Interpretation include Bryce DeWitt, David Deutsch, Max Tegmark and Sean Carroll. [I suggest you see recent Tegmark interview excerpts by Robert Kuhn, here, here and here, and another short video of Max Tegmark here. You should also watch a recent video interview of Sean Carroll by Neil deGrasse Tyson, which is included later in this article along with references to his two latest books. As an interesting aside, physicist David Deutsch (1953-present) speculates in his book The Beginning of Infinity (pg. 294) that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.]

Regardless of whether the SOL is being violated, quantum computers today routinely use quantum entanglement to link qubits, enabling them to function as an interconnected system. By leveraging the unique properties of quantum mechanics—superposition, entanglement, and interference—quantum computers can simultaneously explore an immense number of possible solutions, making computations that are impossible for classical computers.

Google’s Willow quantum chip demonstrated this capability by solving the Random Circuit Sampling (RCS) problem, a benchmark designed specifically to showcase the computational supremacy of quantum systems over classical ones. Willow’s ability to complete this test error-free marks a milestone not just in quantum computing but in our understanding of the potential of computers.

Random Circuit Sampling Benchmark Test

Here’s a simplified explanation of the RCS benchmark test. Imagine navigating an incredibly complex maze filled with twists, turns, and countless random paths. The goal of the RCS test is to “map” this maze by randomly exploring all of its paths and recording where each one leads.

In quantum computing the “maze” represents a random quantum circuit. A quantum circuit is like a recipe composed of gates—building blocks that dictate how qubits interact and evolve. In the RCS test, these gates are arranged randomly, creating a circuit of immense complexity. The “map” of this circuit is the output: a set of results generated based on probabilities defined by the random arrangement of gates. The test is about “sampling” these outputs multiple times to uncover the circuit’s overall behavior.

For computers without quantum chips to simulate this process, they must calculate every possible path through the maze, one at a time. The complexity of possible paths grows exponentially as the various alternatives combine. Even using today’s supercomputers, the calculation can require an unimaginable amount of time—potentially up to septillions of years.

The RCS test is designed to showcase quantum computers’ ability to tackle tasks that are practically impossible for classical systems. While the test itself doesn’t solve a “real-world” problem, it serves as a performance benchmark to demonstrate the mind-boggling computational power of quantum machines.
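
To give a feel for why classical simulation blows up, here is a toy, brute-force state-vector simulation of the random-circuit-sampling idea, written by me as an illustration only; it is not Google’s benchmark code, and the circuit layout, gate choices, and qubit count are my own assumptions. It tracks all 2**n amplitudes explicitly, which is exactly the bookkeeping that becomes hopeless as n grows.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4                                   # qubits; the state vector holds 2**n complex amplitudes

def random_gate():
    """A random 2x2 unitary (a stand-in for the random single-qubit gates in RCS)."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

def apply_single(state, gate, qubit):
    """Apply a one-qubit gate by contracting it against that qubit's axis."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [qubit]))
    psi = np.moveaxis(psi, 0, qubit)
    return psi.reshape(-1)

def apply_cnot(state, control, target):
    """Entangle two qubits: swap amplitudes on the target bit wherever the control bit is 1."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                          # all qubits start in |0...0>

for _ in range(6):                      # a few "layers" of random gates plus entanglers
    for q in range(n):
        state = apply_single(state, random_gate(), q)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1)

# Sampling step: draw output bitstrings from the circuit's probability distribution.
probs = np.abs(state) ** 2
probs /= probs.sum()                    # guard against floating-point drift
samples = rng.choice(2 ** n, size=8, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

At n = 4 this runs instantly; at n = 50 the state vector alone would need petabytes of memory, which is the gap the RCS benchmark is built to expose.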

Until recently, this was all theoretical. Building a quantum chip capable of solving the RCS test without overwhelming errors had never been achieved. Noise—external interference from particles and vibrations—created too many errors for the results to be usable. However, in December 2024, Google announced that Willow had overcome the noise issue. By scaling up the number of qubits and implementing real-time error correction, Willow successfully completed the test.

This breakthrough means quantum computers may soon be able to leverage superposition and quantum interference to perform previously impossible computer tasks. By harnessing quantum entanglement, qubits can maintain correlations and work together as a unified system, enabling quantum computers to explore numerous paths through the maze simultaneously and sample outputs at seemingly impossible speeds.

These advancements make otherwise impossible computer tasks possible. Quantum computing holds the potential to revolutionize fields such as environmental modeling, chemistry, material science, medicine, cybersecurity (a very troubling thought), artificial intelligence, and even the creation of reality simulations. This adds some support for Elon Musk’s claim that there is a 99% chance we are already living in a simulated reality generated by an advanced alien civilization. The idea that we are all just computer-generated avatars living in a fake world seems like sensational media fiction to me, but large-scale quantum computers could soon bring ideas like that closer to reality.

Multiverse Metaphysics

The multiverse theory, which some argue is now much more viable due to Google’s quantum computer, has many challenging philosophical implications. Perhaps the most fascinating is the idea that our reality, our universe, is just one among countless others, potentially infinite in number. This challenges our perception of ourselves as unique and our universe as the only reality, suggesting instead that we are just one small part of an unfathomably vast and complex existence. In some ways this is even weirder than Musk’s belief we are living in a simulated reality—a kind of cosmic deepfake.

Picture a reality where every possible outcome of every quantum event plays out in a separate universe. Every decision you make, every path you don’t take, could be unfolding in parallel timelines, creating alternate versions of yourself. Multiverse metaphysics challenges our traditional understanding of identity and free will. If every choice creates a new branching timeline, does our sense of individuality and free-will still make sense? Or are we just one version of countless others diverging infinitely in a meaningless multiverse?

The multiverse also forces us to rethink our understanding of time. One model suggests that these parallel universes exist across vast stretches of space, each potentially originating from its own Big Bang. This implies that time may not be the linear flow we perceive but rather a multidimensional web, where past, present, and future coexist simultaneously. Personally, I wouldn’t be surprised if this turns out to explain phenomena like quantum entanglement—Einstein’s “spooky action at a distance.” Is this what Hartmut Neven is referring to when he TED Talks about his quantum computer creating nearly perpetual motion time crystals? Supra at 4:55 of 11:39.

While these concepts might sound like science fiction, advancements in quantum computing, such as Google’s Willow, could provide the tools to explore them scientifically. Some physicists believe that anomalies in the cosmic microwave background radiation—remnants of the Big Bang—might offer indirect evidence of the multiverse. Could this also lend credence to Musk’s speculation that we’re living in a computer simulation? If that’s the case, does it mean we’re at the mercy of some cosmic programmer who might press the reset button at any moment? (For the record, I doubt very much the Musk-supported scenario—though the thought is undeniably unsettling.)

For more on the far-out philosophical implications of the quantum world and the multiverse, check out Neil deGrasse Tyson’s conversation with theoretical physicist Sean Carroll below. Also see Sean Carroll’s recent books, Quanta and Fields: The Biggest Ideas in the Universe (Dutton, 2024) and Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime (Dutton, 2019), and his videos.

The multiverse theory has its share of critics, and skepticism remains widespread among scientists. Yet, even if concrete evidence for parallel universes eludes us, the mere exploration of these ideas expands the boundaries of our understanding of reality. Such inquiries challenge us to confront profound questions about existence and the nature of the universe itself. One thing is certain: quantum computers like Willow compel us to reevaluate our perceptions of what is real. Could Hartmut Neven or Sean Carroll be the heretical Galileo of our time?

As for me, I lean toward perspectives grounded in self-determination and objective truth. I find it hard to accept that every quantum event, such as the collapse of a probability wave during measurement, results in the creation of an entirely new universe. Likewise, I’m skeptical of the idea that each decision we make spawns a new universe, though I do believe we create our own reality within this universe. My belief aligns closely with the concept of free will. I’m also intrigued by the idea that multiple universes could exist simultaneously and that quantum particles might somehow traverse between them. The idea that quantum computers might leverage these connections across universes to perform their calculations is consistent with these musings, suggesting that the interplay between quantum mechanics and multiverses may offer profound insights into the fabric of reality.

But can we communicate and receive intelligent data from other universes? Can we engineer practical applications that use parallel universes? Hartmut Neven stated in his TED Talk that the quantum computer his team at Google created can be thought of as creating tiny, traversable wormholes between universes. Supra at 4:20 of 11:39. Quantum computers might not create new universes, but they could hypothetically create bridges between them. Perhaps interaction with other universes is what Google’s Willow is now doing.

This idea challenges the traditional worldview of mainstream scientists, which is centered on a single universe and the foundational power of measurements to determine outcomes. (As mentioned, this reliance on the seemingly magical power of measurement or human observation to explain quantum behavior comes across as an irrational shortcut to me, and many others, a product of the early Twentieth Century worldview.) Whatever the explanation, it is clear that Willow now operates successfully, defying conventional expectations and hinting at possibilities that push the boundaries of our current understanding.

According to Google, now that it has proof of concept for what a few chips can do, it will start construction of large stacks of super-cooled quantum computers. What happens when it harnesses the power of a million qubits? Google’s goal is to begin releasing practical applications by the end of this decade—perhaps sooner with AI’s help. Its closest competitors in this field (IBM, Amazon, Microsoft, and others) might not be far behind. Quantum computation is yet another dramatic agent of change. The future is moving fast.

Dark Side of Quantum Computers

Unfortunately, the future of quantum computers also has a dark side, much like AI. Privacy will be vulnerable as new cybersecurity attack weapons become possible. Most of today’s public-key encryption, such as RSA and elliptic-curve cryptography, could be cracked, leaving communications and financial systems, including Bitcoin, vulnerable. China is well aware of the weaponization potential of both AI and quantum computing. It has a history of trade-secret theft from U.S. companies and is certainly now focused on stealing Google’s latest breakthrough to boost its own impressive efforts. Just before Google’s December 9, 2024, announcement of the Willow breakthrough, China claimed that its latest quantum chip, the Tianyan-504, had the same capacities as Google’s Willow. I suspect that impacted the timing of Google’s announcement.
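For readers who want a concrete sense of the threat, here is a minimal, purely classical Python sketch of the period-finding idea behind Shor’s algorithm, the quantum algorithm that endangers RSA-style encryption. The tiny modulus, the chosen base, and the function names are my own toy illustration; a real attack would require a large, fault-tolerant quantum computer to find the period of a modulus hundreds of digits long, which no machine today, Willow included, can do:

```python
from math import gcd

def multiplicative_period(a: int, n: int) -> int:
    """Brute-force the order r of a modulo n, i.e. the smallest r with a**r % n == 1.
    This is the step a quantum computer running Shor's algorithm does exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n: int, a: int = 7) -> tuple[int, int]:
    """Toy illustration: once the period is known, the factors of n fall out via gcd."""
    r = multiplicative_period(a, n)
    if r % 2 != 0:
        raise ValueError("odd period; try another base a")
    return gcd(a ** (r // 2) - 1, n), gcd(a ** (r // 2) + 1, n)

print(shor_style_factor(15))  # (3, 5) -- the prime factors of the toy RSA-style modulus 15
```

RSA’s security rests on the assumption that recovering those factors is infeasible when the modulus is enormous; a sufficiently large quantum computer would remove that assumption, which is why post-quantum encryption standards are already being rolled out.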

The U.S. Department of Defense, NSA, and big-tech companies are well aware of the new threats that quantum computing creates. Consider, for instance, the U.S. Department of Defense’s unclassified Report to Congress, Military and Security Developments Involving the People’s Republic of China, dated 12/18/24:

The PLA is pursuing next-generation combat capabilities based on its vision of future conflict, which it calls “intelligentized warfare,” defined by the expanded use of AI, quantum computing, big data, and other advanced technologies at every level of warfare. . . .

Judging from the build out of the PRC’s quantum communication infrastructure, the PLA may leverage integrated quantum networks and quantum key distribution to reinforce command, control, and communications systems. . . .

In 2021, Beijing funded the China Brain Plan, a major research project aimed at using brain science to develop new biotechnology and AI applications. That year, the PRC designed and fabricated a quantum computer capable of outperforming a classical high-performance computer for a specific problem. The PRC was domestically developing specialized refrigerators needed for quantum computing research in an effort to end reliance on international components. In 2017, the PRC spent over $1 billion on a national quantum lab which will become the world’s largest quantum research facility when completed.

The 2025 National Defense Authorization Act, passed in December 2024, leaves no doubt that the incoming Trump Administration will continue, if not accelerate, current DOD efforts in quantum computing. See, e.g., Sec. 243 of the Act, aka the Quantum Scaling Initiative.

No one knows how much Elon Musk will influence such policies, but we do know he understands the impact of Google’s announcement and publicly praised Google’s CEO, Sundar Pichai, for the achievement. Pichai replied to Musk on X: “We should do a quantum cluster in space with Starship one day 🙂.” (Note that China has had a quantum communications satellite in orbit since 2016 to study secure communications, and in October 2024 it announced plans for several more in 2025. China to launch new quantum communications satellites in 2025, 10/08/24). Musk immediately replied affirmatively to Sundar on X and even upped the ante by saying:

That will probably happen. Any self-respecting civilization should at least reach Kardashev Type II. In my opinion, we are currently only at <5% of Type I. To get to ~30%, we would need to place solar panels in all desert or highly arid regions.

Unpacking the rest of Musk’s quote would require another article; let’s just say the Kardashev scale measures a civilization’s technological progress by its level of energy production. Type II refers to a civilization that harnesses the full energy output of its star, for example through a device such as the Dyson sphere shown below.
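To put rough numbers on the Kardashev talk, here is a small back-of-the-envelope sketch in Python using Carl Sagan’s common interpolation of the scale. The formula and the order-of-magnitude figure for humanity’s current power use are my own assumptions for illustration; Musk’s “<5% of Type I” appears to rest on a different, linear reading of the scale, so treat this purely as a ballpark:

```python
import math

def kardashev_level(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, so Type I ~ 1e16 W (planetary)
    and Type II ~ 1e26 W (stellar, e.g. a Dyson sphere)."""
    return (math.log10(power_watts) - 6) / 10

human_power = 2e13  # humanity's total power use, very roughly, in watts

print(f"Sagan Kardashev level: {kardashev_level(human_power):.2f}")  # ~0.73
print(f"Linear fraction of Type I power: {human_power / 1e16:.2%}")  # ~0.20%
```

Either way, the gap to Type II, which means capturing an entire star’s output, spans many orders of magnitude, which is what makes Dyson-sphere talk so far off.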

Conclusion

I decided you might enjoy it if I delegate the final words to the not-yet-quantum-powered AIs from Google. Perhaps in another universe you would hear my own thoughts wrapping this up, but for now, count yourself lucky to be conscious in this one. My AI podcasters bring humor and insight, though they’re far from Godlike—so I still need to guide and verify them. What’s new, however, is the interactivity feature Google recently added to the podcasters. In this session, you’ll hear wacky versions of me interrupt near the end to ask questions, along with the AIs’ spontaneous responses. It’s fascinating to imagine what quantum-powered AIs might say or do in the future. Click here or on the graphic below to go to the EDRM podcast.

Ralph Losey Copyright 2024. All Rights Reserved.


The Future of AI: Sam Altman’s Vision and the Crossroads of Humanity

December 18, 2024

by Ralph Losey

To close out the year 2024, I bring to your attention an important article by Sam Altman, CEO of OpenAI, published in the Washington Post on July 25, 2024: Who Will Control the Future of AI? Here Altman opines that control of AI is the most urgent question of our time. He states, I think correctly, that we are at a crossroads:

about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse.

In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

Who Will Control the Future of AI? A Legal, Ethical, and Technological Call to Action

In Altman’s editorial, “Who Will Control the Future of AI?,” he gets serious about the dark side of AI and challenges humanity to decide what kind of world we want to inhabit.

Fake Video of Sam Altman using Kling by Losey.

The choice, Altman argues, is stark and existential: Will AI evolve under democratic ideals—decentralized, equitable, and empowering—or fall into the grip of authoritarian control, shaped by concentrated power, surveillance, and cyber warfare? Like the poet Robert Frost’s image in The Road Not Taken, we are faced with two paths forward:

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

In this opinion article Sam Altman warns about the dangers of AI falling into the wrong hands. In his words:

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

Due to our current situation, Altman urges action and legal regulation in four areas: security, infrastructure, human capital, and global strategy. This is where legal professionals are urgently needed, especially those who understand the power and potential of AI and are willing to take the path less travelled and fight for freedom, not fame and fortune.

The Crossroads: Two Futures, One Choice

Altman envisions two potential AI futures:

1. Democratic AI: A world where AI systems are transparent, aligned with human values, and distribute benefits equitably. This will require both industry and government regulation. In this scenario, AI empowers individuals, fuels economic growth, and fosters breakthroughs in healthcare, education, and beyond.

2. Authoritarian AI: A dystopian alternative, where AI becomes a tool for repression and control. Dictatorships will, in Altman’s words:

[F]orce U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries. . . . (they) will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power.

The historical echoes are chilling. Will we have the moral fortitude and ethical alignment to make America truly great again? Will we stand up again as we did in WWII to fight against ethnic oppression, hatred, and dictators? Will we preserve the liberties and privacy of all individuals? Or will our political and industrial leaders turn us into a dual-class surveillance state? Without decisive action now, AI may quickly push the world either way.

This is the challenge before us: how do we ensure AI remains a tool for liberation, not oppression? How can legal and social systems rise to meet this moment? Again, Altman opines we must focus on four things: security, infrastructure, human capital, and global strategy.

1. AI Security – Protecting the Keys to the Kingdom

Altman begins with security, and for good reason: if AI’s core systems—model weights and training data—fall into the wrong hands, the results could be catastrophic. Imagine a scenario where rogue actors or authoritarian regimes gain access to the “brains” of cutting-edge AI systems. Unlike traditional data theft, this isn’t just about stealing files—it’s about stealing intelligence. Teams of AI-enhanced cybersecurity experts, including lawyers, are needed to protect our country from enemy states and criminal gangs, both foreign and domestic. Trade-secret laws must be strengthened and enforced globally.

Here are Sam’s words:

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Legal and Practical Imperatives:

1. Strengthen Cybersecurity Laws: Current frameworks, such as the Computer Fraud and Abuse Act (CFAA), were not built to handle the unique challenges posed by AI. We need laws that specifically address AI model theft and misuse. See: Bruce Schneier: ‘A Hacker’s Mind’ and His Thesis on How AI May Change Democracy (Hacker Way) (“Flexible regulatory frameworks are essential to adapt to technological advancements without stifling innovation.”)

2. Establish AI Export Controls: Just as nuclear technology is heavily controlled, AI systems must be subject to rigorous export regulations. The U.S. Department of Commerce restricted chip exports to China in 2024, but this is only the beginning. See: Understanding the Biden Administration’s Updated Export Controls (Center for Strategic and International Studies, 12/11/24).

3. Use AI to Defend AI: Ironically, the best defense against AI misuse may be AI itself. AI-powered cybersecurity systems—capable of adaptive learning and rapid threat detection—could serve as a digital immune system against cyberattacks. See: Chirag Shah, The Role Of Artificial Intelligence In Cyber Security (Forbes, 12/17/24).

Historical Parallel: In the Cold War, nuclear non-proliferation treaties prevented global catastrophe. Today, we face an AI arms race where the stakes are equally high. Just as the IAEA monitors nuclear technology, an International AI Security Agency could oversee the safe development and deployment of AI systems. See: Akash Wasil, Do We Want an “IAEA for AI”? (Lawfare, 11/20/24).

2. Infrastructure – The Digital Industrial Revolution

Altman calls for massive investments in AI infrastructure—data centers, energy grids, and computational capacity. This infrastructure isn’t just about scaling AI (although that is the driving force); it’s about ensuring resilience and sustainability.

Here are Sam Altman’s words:

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Legal and Ethical Challenges:

1. Energy and Climate Law: AI is an energy hog. Data centers powering generative models consume vast amounts of electricity. Legal frameworks must incentivize sustainable practices, such as renewable energy requirements and carbon taxation.

2. Digital Inclusion Laws: AI infrastructure must be equitable. Governments should fund rural and underserved communities to ensure they benefit from AI advancements, much like the Rural Electrification Act brought electricity to remote areas during the 1930s.

3. Public-Private Partnerships: Massive AI infrastructure projects will require collaboration between governments and tech companies. Contracts must include provisions for data privacy, security standards, and ethical use.

3. Human Capital – Building a New Workforce

A democratic AI future depends not just on technology, but on people—scientists, engineers, policymakers, and educators—who can develop, govern, and use AI responsibly.

Here are Sam Altman’s words:

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Extremely large server and energy buildings complex under construction. Image by Ralph Losey using Visual Muse.

Legal and Policy Recommendations:

1. AI Literacy Education: Mandate AI education at all levels, emphasizing not just coding, but critical thinking, ethics, and socio-technical literacy. Schools of law, business, and public policy must train AI-literate leaders.

2. STEM Immigration Policies: The U.S. must remain a magnet for global AI talent. Modernizing H-1B visas and creating AI-specific immigration pathways will be critical.

3. Ethics Certifications for AI Professionals: Just as doctors take the Hippocratic Oath, AI developers should adhere to ethical guidelines. Professional certifications could enforce standards for fairness, transparency, and accountability. There must also be specialized tutoring and certificates of general AI competence in various fields, including legal, accounting and medical. Prompt engineering instruction and certifications will continue to grow in importance as the pace of exponential change accelerates.

4. Global Strategy – AI Diplomacy and Governance

Altman’s final pillar acknowledges that AI is not just a national issue—it’s a global one. The United States must lead in shaping international norms for AI development and deployment.

Here are Altman’s words:

We must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global build out of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computer capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

Geopolitical and Legal Implications:

1. International AI Treaties: Modeled after the Geneva Conventions or Paris Agreement, nations must agree on global standards for AI safety, ethics, and governance. This includes bans on autonomous weapons and commitments to prevent AI-fueled misinformation campaigns.

2. Create an AI Governance Body: Like the IAEA for nuclear energy, a neutral international body could monitor AI safety, resolve disputes, and ensure equitable access to AI benefits.

3. Engage with Adversaries: Altman suggested in his July 25, 2024 Washington Post editorial that dialogue with countries like China is critical, even when values diverge. He indicated that digital diplomacy could establish guardrails to prevent an AI arms race.

It is uncertain how all of this will pan out under the new Trump Administration, but for interesting speculation see: Brianna Rosen, The AI Presidency: What “America First” Means for Global AI Governance (Just Diplomacy, 12/16/24) (first installment in the series Tech Policy under Trump 2.0). Also, note how Sam Altman reportedly said in a statement last week: “President Trump will lead our country into the age of A.I., and I am eager to support his efforts to ensure America stays ahead.” In Display of Fealty, Tech Industry Curries Favor With Trump (NY Times, 12/14/24).

Conclusion: Lawyers and Technologists as Guardians of the Future

Altman’s vision—and the broader insights it provokes—is a plea for action from everyone. Whether Sam realizes it or not, that includes the legal profession. We are essential to these key elements of his vision:

1. Construct and enforce laws that protect AI from misuse while fostering innovation.

2. Champion transparency and accountability in AI systems.

3. Advocate for equitable access to AI’s benefits, ensuring no one is left behind.

Like any transformative technology, AI brings both promise and peril. The fork in the road is before us. Will we choose the democratic path less travelled, where AI empowers humanity to solve its greatest challenges? Or will we succumb to authoritarian control, where AI becomes a tool of oppression?

In Altman’s words:

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.

The answer lies not in the hands of software developers alone but in the collective will of society, including lawyers, lawmakers, judges, educators, and concerned citizens. Legal professionals cannot just be swords wielded by kings and would-be kings. We must be independent guardians and architects of AI’s future. The rules must be drafted with great skill and with justice in mind, not power trips. Now is the time for us to begin hands-on action to guide the advent of superintelligent AI.

As Sam Altman warns, the stakes couldn’t be higher: “The future of AI is the future of humanity.”

Ralph Losey Copyright 2024. All Rights Reserved.