by Ralph Losey
To close out the year 2024, I bring to your attention an important article by Sam Altman, CEO of OpenAI, published in the Washington Post on July 25, 2024: “Who Will Control the Future of AI?” Here Altman opines that control of AI is the most urgent question of our time. He states, I think correctly, that we are at a crossroads:
… about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse.
In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.
Who Will Control the Future of AI? A Legal, Ethical, and Technological Call to Action
In Altman’s editorial, “Who Will Control the Future of AI?,” he gets serious about the dark side of AI and challenges humanity to decide what kind of world we want to inhabit.
The choice, Altman argues, is stark and existential: Will AI evolve under democratic ideals—decentralized, equitable, and empowering—or fall into the grip of authoritarian control, shaped by concentrated power, surveillance, and cyber warfare? Like the poet Robert Frost’s image in The Road Not Taken, we are faced with two paths forward:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
In this opinion article, Sam Altman warns about the dangers of AI falling into the wrong hands. In his words:
There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

Given this situation, Altman urges action and legal regulation in four areas: security, infrastructure, human capital, and global strategy. This is where legal professionals are urgently needed, especially those who understand the power and potential of AI and are willing to take the path less travelled and fight for freedom, not fame and fortune.
The Crossroads: Two Futures, One Choice
Altman envisions two potential AI futures:
1. Democratic AI: A world where AI systems are transparent, aligned with human values, and distribute benefits equitably. This will require both industry and government regulation. In this scenario, AI empowers individuals, fuels economic growth, and fosters breakthroughs in healthcare, education, and beyond.
2. Authoritarian AI: A dystopian alternative, where AI becomes a tool for repression and control. Dictatorships will in Altman’s words:
[F]orce U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries. . . . (they) will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power.
The historical echoes are chilling. Will we have the moral fortitude and ethical alignment to make America truly great again? Will we stand up again as we did in WWII to fight against ethnic oppression, hatred and dictators? Will we preserve the liberties and privacy of all individuals? Or will our political and industrial leaders turn us into a dual-class, surveillance state? Without decisive action now, AI may quickly push the world either way.
This is the challenge before us: how do we ensure AI remains a tool for liberation, not oppression? How can legal and social systems rise to meet this moment? Again, Altman opines we must focus on four things: security, infrastructure, human capital, and global strategy.
1. AI Security – Protecting the Keys to the Kingdom
Altman begins with security, and for good reason: if AI’s core systems—model weights and training data—fall into the wrong hands, the results could be catastrophic. Imagine a scenario where rogue actors or authoritarian regimes gain access to the “brains” of cutting-edge AI systems. Unlike traditional data theft, this isn’t just about stealing files—it’s about stealing intelligence. Teams of AI-enhanced cybersecurity experts, including lawyers, are needed to protect our country from enemy states and criminal gangs, both foreign and domestic. Trade-secret laws must be strengthened and enforced globally.

Here are Sam’s words:
First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.
Legal and Practical Imperatives:
1. Strengthen Cybersecurity Laws: Current frameworks, such as the Computer Fraud and Abuse Act (CFAA), were not built to handle the unique challenges posed by AI. We need laws that specifically address AI model theft and misuse. See: Bruce Schneier: ‘A Hacker’s Mind’ and His Thesis on How AI May Change Democracy (Hacker Way) (“Flexible regulatory frameworks are essential to adapt to technological advancements without stifling innovation.”)
2. Establish AI Export Controls: Just as nuclear technology is heavily controlled, AI systems must be subject to rigorous export regulations. The U.S. Department of Commerce restricted chip exports to China in 2024, but this is only the beginning. See: Understanding the Biden Administration’s Updated Export Controls (Center for Strategic and International Studies, 12/11/24).
3. Use AI to Defend AI: Ironically, the best defense against AI misuse may be AI itself. AI-powered cybersecurity systems—capable of adaptive learning and rapid threat detection—could serve as a digital immune system against cyberattacks. See: Chirag Shah, The Role Of Artificial Intelligence In Cyber Security (Forbes, 12/17/24).

Historical Parallel: In the Cold War, nuclear non-proliferation treaties prevented global catastrophe. Today, we face an AI arms race where the stakes are equally high. Just as the IAEA monitors nuclear technology, an International AI Security Agency could oversee the safe development and deployment of AI systems. See: Akash Wasil, Do We Want an “IAEA for AI”? (Lawfare, 11/20/24).
2. Infrastructure – The Digital Industrial Revolution
Altman calls for massive investments in AI infrastructure—data centers, energy grids, and computational capacity. This infrastructure isn’t just about scaling AI (although that is the driving force); it’s about ensuring resilience and sustainability.
Here are Sam Altman’s words:
Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Legal and Ethical Challenges:
1. Energy and Climate Law: AI is an energy hog. Data centers powering generative models consume vast amounts of electricity. Legal frameworks must incentivize sustainable practices, such as renewable energy requirements and carbon taxation.
2. Digital Inclusion Laws: AI infrastructure must be equitable. Governments should fund rural and underserved communities to ensure they benefit from AI advancements, much like the Rural Electrification Act brought electricity to remote areas during the 1930s.
3. Public-Private Partnerships: Massive AI infrastructure projects will require collaboration between governments and tech companies. Contracts must include provisions for data privacy, security standards, and ethical use.
3. Human Capital – Building a New Workforce
A democratic AI future depends not just on technology, but on people—scientists, engineers, policymakers, and educators—who can develop, govern, and use AI responsibly.
Here are Sam Altman’s words:
Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.
We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Legal and Policy Recommendations:
1. AI Literacy Education: Mandate AI education at all levels, emphasizing not just coding, but critical thinking, ethics, and socio-technical literacy. Schools of law, business, and public policy must train AI-literate leaders.
2. STEM Immigration Policies: The U.S. must remain a magnet for global AI talent. Modernizing H-1B visas and creating AI-specific immigration pathways will be critical.
3. Ethics Certifications for AI Professionals: Just as doctors take the Hippocratic Oath, AI developers should adhere to ethical guidelines. Professional certifications could enforce standards for fairness, transparency, and accountability. There must also be specialized tutoring and certificates of general AI competence in various fields, including legal, accounting and medical. Prompt engineering instruction and certifications will continue to grow in importance as the pace of exponential change accelerates.
4. Global Strategy – AI Diplomacy and Governance
Altman’s final pillar acknowledges that AI is not just a national issue—it’s a global one. The United States must lead in shaping international norms for AI development and deployment.
Here are Altman’s words:
We must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global build out of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.
I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computer capacities.
Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.
While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

Geopolitical and Legal Implications:
1. International AI Treaties: Modeled after the Geneva Conventions or Paris Agreement, nations must agree on global standards for AI safety, ethics, and governance. This includes bans on autonomous weapons and commitments to prevent AI-fueled misinformation campaigns.
2. Create an AI Governance Body: Like the IAEA for nuclear energy, a neutral international body could monitor AI safety, resolve disputes, and ensure equitable access to AI benefits.
3. Engage with Adversaries: Altman suggested in his July 25, 2024 Washington Post editorial that dialogue with countries like China is critical, even when values diverge. He indicated that digital diplomacy could establish guardrails to prevent an AI arms race.
It is uncertain how all of this will pan out under the new Trump Administration, but for interesting speculation see: Brianna Rosen, The AI Presidency: What “America First” Means for Global AI Governance (Just Diplomacy, 12/16/24) (first installment in series, Tech Policy under Trump 2.0.). Also, note how Sam Altman reportedly said in a statement last week: “President Trump will lead our country into the age of A.I., and I am eager to support his efforts to ensure America stays ahead.” In Display of Fealty, Tech Industry Curries Favor With Trump (NY Times, 12/14/24).
Conclusion: Lawyers and Technologists as Guardians of the Future
Altman’s vision—and the broader insights it provokes—is a plea for action from everyone. Whether Sam realizes it or not, that includes the legal profession. We are essential to these key elements of his vision:
1. Constructing and enforcing laws that protect AI from misuse while fostering innovation.
2. Championing transparency and accountability in AI systems.
3. Advocating for equitable access to AI’s benefits, ensuring no one is left behind.
Like any transformative technology, AI brings both promise and peril. The fork in the road is before us. Will we choose the democratic path less travelled, where AI empowers humanity to solve its greatest challenges? Or will we succumb to authoritarian control, where AI becomes a tool of oppression?

In Altman’s words:
We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.
The answer lies not in the hands of software developers alone but in the collective will of society, including lawyers, lawmakers, judges, educators, and concerned citizens. Legal professionals cannot just be swords wielded by kings and would-be kings. We must be independent guardians and architects of AI’s future. The rules must be drafted with great skill and with justice in mind, not power trips. Now is the time for us to begin hands-on action to guide the advent of superintelligent AI.
As Sam Altman warns, the stakes couldn’t be higher: “The future of AI is the future of humanity.”

Ralph Losey Copyright 2024. All Rights Reserved.