The Future of AI: Sam Altman’s Vision and the Crossroads of Humanity

December 18, 2024

by Ralph Losey

To close out the year 2024, I bring to your attention an important article by Sam Altman, CEO of OpenAI, published in the Washington Post on July 25, 2024: Who will control the future of AI? Here Altman opines that control of AI is the most urgent question of our time. He states, I think correctly, that we are at a crossroads:

about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?

Altman advocates for a “democratic” approach to AI, one that prioritizes transparency, openness, and broad accessibility. He contrasts this with an “authoritarian” vision of AI, characterized by centralized control, secrecy, and the potential for misuse.

In Altman’s words, “We need the benefits of this technology to accrue to all of humanity, not just a select few.” This means ensuring that AI is developed and deployed in a way that is inclusive, equitable, and respects fundamental human rights.

Who Will Control the Future of AI? A Legal, Ethical, and Technological Call to Action

In Altman’s editorial, “Who Will Control the Future of AI?,” he gets serious about the dark side of AI and challenges humanity to decide what kind of world we want to inhabit.

Fake Video of Sam Altman using Kling by Losey.

The choice, Altman argues, is stark and existential: Will AI evolve under democratic ideals—decentralized, equitable, and empowering—or fall into the grip of authoritarian control, shaped by concentrated power, surveillance, and cyber warfare? Like the poet Robert Frost’s image in The Road Not Taken, we are faced with two paths forward:

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

In this opinion article Sam Altman warns about the dangers of AI falling into the wrong hands. In his words:

There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us. Russian dictator Vladimir Putin has darkly warned that the country that wins the AI race will “become the ruler of the world,” and the People’s Republic of China has said that it aims to become the global leader in AI by 2030.

Due to our current situation, Altman urges action and legal regulation in four areas: security, infrastructure, human capital, and global strategy. This is where legal professionals are urgently needed, especially those who understand the power and potential of AI and are willing to take the path less travelled and fight for freedom, not fame and fortune.

The Crossroads: Two Futures, One Choice

Altman envisions two potential AI futures:

1. Democratic AI: A world where AI systems are transparent, aligned with human values, and distribute benefits equitably. This will require both industry and government regulation. In this scenario, AI empowers individuals, fuels economic growth, and fosters breakthroughs in healthcare, education, and beyond.

2. Authoritarian AI: A dystopian alternative, where AI becomes a tool for repression and control. Dictatorships will in Altman’s words:

[F]orce U.S. companies and those of other nations to share user data, leveraging the technology to develop new ways of spying on their own citizens or creating next-generation cyberweapons to use against other countries. . . . (they) will keep a close hold on the technology’s scientific, health, educational and other societal benefits to cement their own power.

The historical echoes are chilling. Will we have the moral fortitude and ethical alignment to make America truly great again? Will we stand up again as we did in WWII to fight against ethnic oppression, hatred and dictators? Will we preserve the liberties and privacy of all individuals? Or will our political and industrial leaders turn us into a dual-class surveillance state? Without decisive action now, AI may quickly push the world either way.

This is the challenge before us: how do we ensure AI remains a tool for liberation, not oppression? How can legal and social systems rise to meet this moment? Again, Altman opines we must focus on four things: security, infrastructure, human capital, and global strategy.

1. AI Security – Protecting the Keys to the Kingdom

Altman begins with security, and for good reason: if AI’s core systems—model weights and training data—fall into the wrong hands, the results could be catastrophic. Imagine a scenario where rogue actors or authoritarian regimes gain access to the “brains” of cutting-edge AI systems. Unlike traditional data theft, this isn’t just about stealing files—it’s about stealing intelligence. Teams of AI-enhanced cybersecurity experts, including lawyers, are needed to protect our country from enemy states and criminal gangs, both foreign and domestic. Trade-secret laws must be strengthened and enforced globally.

Here are Sam’s words:

First, American AI firms and industry need to craft robust security measures to ensure that our coalition maintains the lead in current and future models and enables our private sector to innovate. These measures would include cyberdefense and data center security innovations to prevent hackers from stealing key intellectual property such as model weights and AI training data. Many of these defenses will benefit from the power of artificial intelligence, which makes it easier and faster for human analysts to identify risks and respond to attacks. The U.S. government and the private sector can partner together to develop these security measures as quickly as possible.

Legal and Practical Imperatives:

1. Strengthen Cybersecurity Laws: Current frameworks, such as the Computer Fraud and Abuse Act (CFAA), were not built to handle the unique challenges posed by AI. We need laws that specifically address AI model theft and misuse. See: Bruce Schneier: ‘A Hacker’s Mind’ and His Thesis on How AI May Change Democracy (Hacker Way) (“Flexible regulatory frameworks are essential to adapt to technological advancements without stifling innovation.”)

2. Establish AI Export Controls: Just as nuclear technology is heavily controlled, AI systems must be subject to rigorous export regulations. The U.S. Department of Commerce restricted chip exports to China in 2024, but this is only the beginning. See: Understanding the Biden Administration’s Updated Export Controls (Center for Strategic and International Studies, 12/11/24).

3. Use AI to Defend AI: Ironically, the best defense against AI misuse may be AI itself. AI-powered cybersecurity systems—capable of adaptive learning and rapid threat detection—could serve as a digital immune system against cyberattacks. See: Chirag Shah, The Role Of Artificial Intelligence In Cyber Security (Forbes, 12/17/24).
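The adaptive threat detection described in item 3 can be illustrated with a toy statistical baseline. This is only a hypothetical sketch, not a description of any real security product: the requests-per-minute metric, the baseline window, and the 3-sigma threshold are all illustrative assumptions.

```python
import statistics

def is_anomalous(baseline, value, sigma=3.0):
    """Flag a reading more than `sigma` standard deviations from the baseline mean.

    A toy stand-in for adaptive threat detection; real systems learn far
    richer baselines than a single mean and standard deviation.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) > sigma * stdev

# Hypothetical requests-per-minute samples under normal conditions:
baseline = [102, 98, 105, 99, 101, 97, 103, 100]
print(is_anomalous(baseline, 480))  # True: a sudden spike is flagged
print(is_anomalous(baseline, 104))  # False: within normal variation
```

Real "digital immune systems" replace this single statistic with learned models of network behavior, but the core idea is the same: establish what normal looks like, then respond fast when traffic deviates from it.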

Historical Parallel: In the Cold War, nuclear non-proliferation treaties prevented global catastrophe. Today, we face an AI arms race where the stakes are equally high. Just as the IAEA monitors nuclear technology, an International AI Security Agency could oversee the safe development and deployment of AI systems. See: Akash Wasil, Do We Want an “IAEA for AI”? (Lawfare, 11/20/24).

2. Infrastructure – The Digital Industrial Revolution

Altman calls for massive investments in AI infrastructure—data centers, energy grids, and computational capacity. This infrastructure isn’t just about scaling AI (although that is the driving force); it’s about ensuring resilience and sustainability.

Here are Sam Altman’s words:

Second, infrastructure is destiny when it comes to AI. The early installation of fiber-optic cables, coaxial lines and other pieces of broadband infrastructure is what allowed the United States to spend decades at the center of the digital revolution and to build its current lead in artificial intelligence. U.S. policymakers must work with the private sector to build significantly larger quantities of the physical infrastructure — from data centers to power plants — that run the AI systems themselves. Public-private partnerships to build this needed infrastructure will equip U.S. firms with the computing power to expand access to AI and better distribute its societal benefits.

Legal and Ethical Challenges:

1. Energy and Climate Law: AI is an energy hog. Data centers powering generative models consume vast amounts of electricity. Legal frameworks must incentivize sustainable practices, such as renewable energy requirements and carbon taxation.

2. Digital Inclusion Laws: AI infrastructure must be equitable. Governments should fund rural and underserved communities to ensure they benefit from AI advancements, much like the Rural Electrification Act brought electricity to remote areas during the 1930s.

3. Public-Private Partnerships: Massive AI infrastructure projects will require collaboration between governments and tech companies. Contracts must include provisions for data privacy, security standards, and ethical use.

3. Human Capital – Building a New Workforce

A democratic AI future depends not just on technology, but on people—scientists, engineers, policymakers, and educators—who can develop, govern, and use AI responsibly.

Here are Sam Altman’s words:

Building this infrastructure will also create new jobs nationwide. We are witnessing the birth and evolution of a technology I believe to be as momentous as electricity or the internet. AI can be the foundation of a new industrial base it would be wise for our country to embrace.

We need to complement the proverbial “bricks and mortar” with substantial investment in human capital. As a nation, we need to nurture and develop the next generation of AI innovators, researchers and engineers. They are our true superpower.

Image of an extremely large server and energy building complex under construction, by Ralph Losey using Visual Muse.

Legal and Policy Recommendations:

1. AI Literacy Education: Mandate AI education at all levels, emphasizing not just coding, but critical thinking, ethics, and socio-technical literacy. Schools of law, business, and public policy must train AI-literate leaders.

2. STEM Immigration Policies: The U.S. must remain a magnet for global AI talent. Modernizing H-1B visas and creating AI-specific immigration pathways will be critical.

3. Ethics Certifications for AI Professionals: Just as doctors take the Hippocratic Oath, AI developers should adhere to ethical guidelines. Professional certifications could enforce standards for fairness, transparency, and accountability. There must also be specialized tutoring and certificates of general AI competence in various fields, including legal, accounting and medical. Prompt engineering instruction and certifications will continue to grow in importance as the pace of exponential change accelerates.

4. Global Strategy – AI Diplomacy and Governance

Altman’s final pillar acknowledges that AI is not just a national issue—it’s a global one. The United States must lead in shaping international norms for AI development and deployment.

Here are Altman’s words:

We must develop a coherent commercial diplomacy policy for AI, including clarity around how the United States intends to implement export controls and foreign investment rules for the global build out of AI systems. That will also mean setting out rules of the road for what sorts of chips, AI training data and other code — some of which is so sensitive that it may need to remain in the United States — can be housed in the data centers that countries around the world are racing to build to localize AI information.

I’ve spoken in the past about creating something akin to the International Atomic Energy Agency for AI, but that is just one potential model. One option could knit together the network of AI safety institutes being built in countries such as Japan and Britain and create an investment fund that countries committed to abiding by democratic AI protocols could draw from to expand their domestic computer capacities.

Another potential model is the Internet Corporation for Assigned Names and Numbers, which was established by the U.S. government in 1998, less than a decade after the creation of the World Wide Web, to standardize how we navigate the digital world. ICANN is now an independent nonprofit with representatives from around the world dedicated to its core mission of maximizing access to the internet in support of an open, connected, democratic global community.

While identifying the right decision-making body is important, the bottom line is that democratic AI has a lead over authoritarian AI because our political system has empowered U.S. companies, entrepreneurs and academics to research, innovate and build.

Geopolitical and Legal Implications:

1. International AI Treaties: Modeled after the Geneva Conventions or Paris Agreement, nations must agree on global standards for AI safety, ethics, and governance. This includes bans on autonomous weapons and commitments to prevent AI-fueled misinformation campaigns.

2. Create an AI Governance Body: Like the IAEA for nuclear energy, a neutral international body could monitor AI safety, resolve disputes, and ensure equitable access to AI benefits.

3. Engage with Adversaries: Altman suggested in his July 25, 2024 Washington Post editorial that dialogue with countries like China is critical, even when values diverge. He indicated digital diplomacy could establish guardrails to prevent an AI arms race.

It is uncertain how all of this will pan out under the new Trump Administration, but for interesting speculation see: Brianna Rosen, The AI Presidency: What “America First” Means for Global AI Governance (Just Diplomacy, 12/16/24) (first installment in series, Tech Policy under Trump 2.0.). Also, note how Sam Altman reportedly said in a statement last week: “President Trump will lead our country into the age of A.I., and I am eager to support his efforts to ensure America stays ahead.” In Display of Fealty, Tech Industry Curries Favor With Trump (NY Times, 12/14/24).

Conclusion: Lawyers and Technologists as Guardians of the Future

Altman’s vision—and the broader insights it provokes—is a plea for action from everyone. Whether Sam realizes it or not, that includes the legal profession. We are essential to these key elements of his vision:

1. Construct and enforce laws that protect AI from misuse while fostering innovation.

2. Champion transparency and accountability in AI systems.

3. Advocate for equitable access to AI’s benefits, ensuring no one is left behind.

Like any transformative technology, AI brings both promise and peril. The fork in the road is before us. Will we choose the democratic path less travelled, where AI empowers humanity to solve its greatest challenges? Or will we succumb to authoritarian control, where AI becomes a tool of oppression?

In Altman’s words:

We won’t be able to have AI that is built to maximize the technology’s benefits while minimizing its risks unless we work to make sure the democratic vision for AI prevails. If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.

The answer lies not in the hands of software developers alone but in the collective will of society, including lawyers, lawmakers, judges, educators, and concerned citizens. Legal professionals cannot just be swords wielded by kings and would be kings. We must be independent guardians and architects of AI’s future. The rules must be drafted with great skill and with justice in mind, not power trips. Now is the time for us to begin hands-on action to guide the advent of superintelligent AI.

As Sam Altman warns, the stakes couldn’t be higher: “The future of AI is the future of humanity.”

Ralph Losey Copyright 2024. All Rights Reserved.


Two New Echoes of AI Podcasts on AI’s 11-Step Plan to Unite America

December 1, 2024

Ralph has directed and verified the AI Podcasters’ creation of two new podcasts, both on the Eleven Step Plan to Unite America. The AIs write and speak these podcasts, not Ralph. The first podcast shown here is found on the EDRM Global Podcast Network and is 17 minutes long. In the second, the AIs created a 25-minute podcast that goes into greater detail and has a slightly different take.

17 Minute Version
25 Minute Version of the Podcast

Pick one, or many, of the thirty-three projects outlined in the Plan to Unite America and let Ralph know at epluribusunum.ai. See here for more details on each of the 33 projects. Be part of the solution.

Ralph Losey Creative Commons Copyright 2024. Distribution of this document is encouraged with attribution, but do not modify without Losey’s permission.


The Future of AI Is Here—But Are You Ready? Learn the OECD’s Blueprint for Ethical AI

October 25, 2024

by Ralph Losey

The future of Artificial Intelligence isn’t just on the horizon—it’s already transforming industries and reshaping how businesses operate. But with this rapid evolution comes new challenges. Ethical concerns, privacy risks, and potential regulatory pitfalls are just a few of the issues that organizations must navigate. That’s where the Organisation for Economic Co-operation and Development (OECD) comes in. To help groups embrace AI responsibly, the OECD has developed a set of guiding principles designed to ensure AI is implemented ethically and effectively. Are you prepared to harness the power of AI while safeguarding your company against the risks? Discover how the OECD’s blueprint can help guide you through this complex landscape.

Introduction

The Organisation for Economic Co-operation and Development (OECD) plays a vital role in shaping policies across the world to foster prosperity, equality, and sustainable development. In recent years, the OECD has shifted its focus toward the responsible development of AI, recognizing its potential to transform industries and economies. For businesses and other organizations considering adopting AI into their workflows, the OECD’s AI Principles (as slightly amended 2/5/24) provide a good starting point for developing internal policies. They can help guide your board to make decisions that ensure AI technology is deployed ethically and responsibly. This can help protect the organization from liability, and its employees, customers, and the world from harm.

What is the OECD?

The Organisation for Economic Co-operation and Development (OECD) is an independent, international organization dedicated to shaping global economic policies that are based on individual freedoms and democratic values. The U.S. was one of the twenty founding members in 1960 when the Articles of the Convention were signed, establishing the OECD. It now has 38 member countries, mainly advanced economies. Though the OECD initially focused on economic growth, international trade, and education, it has become increasingly concerned with the ethical and responsible development of artificial intelligence.

In 2019, the OECD introduced its AI Principles, the first intergovernmental standard for AI use. These principles reflect a growing recognition that AI will play an important role in global economies, societies, and governance structures. The OECD’s mission is clear: AI technologies must not only drive innovation but also be applied in ways that respect human rights, democracy, and ethical principles. These AI guidelines are vital in a world where AI could be both a powerful tool for good and a source of significant risks if misused. The Five AI Principles and Recommendations were slightly amended on February 5, 2024.

The OECD is a highly respected group that collaborates with many international organizations, such as the United Nations (UN), World Bank, International Monetary Fund (IMF), and World Trade Organization (WTO). The OECD helps these groups align and coordinate efforts in global governance and policymaking. The OECD also engages in regional initiatives, providing tailored advice and support to specific regions such as Latin America, Southeast Asia, and Africa. Bottom line, the OECD has long played a crucial role in shaping global policy, promoting international cooperation, and providing data-driven, evidence-based recommendations to governments around the world.

Five Key OECD AI Principles

Before starting an AI program, businesses should consider the potential risks that AI poses to their operations, employees, and customers. By taking proactive steps to mitigate these risks, organizations can safeguard themselves from unforeseen consequences while reaping the benefits of AI. The OECD’s AI Principles (amended 2/5/24) represent one of many frameworks businesses should evaluate when integrating AI technologies into their operations. It is well respected around the world and should be a part of any organization’s due diligence.

These principles are built around five core guidelines:

Principle 1. Inclusive Growth, Sustainable Development, and Well-being

The first OECD AI principle stresses that AI should promote inclusive growth, sustainable development, and well-being for individuals and society. AI should benefit people and the planet. This core value reflects the potential of AI to contribute to human flourishing through better healthcare, education, and environmental sustainability.

Companies should be aware of the many challenges ahead. While AI-driven solutions, such as climate modeling or precision agriculture, can help tackle environmental crises, there is concern that rapid technological advancements may lead to widening inequality. For instance, the automation of jobs could disproportionately affect lower-income workers, potentially exacerbating inequality. Thus, this principle necessitates a strategy that ensures AI’s benefits are distributed equitably.


For businesses considering AI, three key actions should always be top-of-mind for board members:

  • Engage Relevant Stakeholders: Before implementing AI, include a diverse group of stakeholders in the decision-making. This should involve executives, legal and data privacy experts, subject matter experts, human resources, and marketing/customer support teams. Each group brings unique perspectives that can help ensure the AI program is equitable and aligned with the company’s values.
  • Evaluate Positive and Negative Outcomes: Consider both the potential benefits and risks to AI users and individuals whose data may be processed. AI should enhance productivity, but it must also respect the well-being of all involved parties.
  • Consider Environmental Impact: AI systems require substantial computational resources, which contribute to a large carbon footprint. Sustainable AI practices should be considered to reduce energy consumption and minimize environmental impact.

Principle 2. Respect for the rule of law, human rights and democratic values, including fairness and privacy.

The wording of the second principle was revised somewhat in 2024. The full explanation for revised Principle Two is set out in the amendment recommendation of February 5, 2024.

a) AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.

Respecting human rights means ensuring that Generative AI systems do not reinforce biases or violate individuals’ rights. For example, there is growing concern over the use of AI in facial recognition technology, where misidentification disproportionately affects marginalized groups. AI must be designed to avoid such outcomes by integrating fairness into algorithms and maintaining democratic values like transparency and fairness.

Businesses integrating AI into their operations should address several legal issues, including intellectual property, data protection, and human rights laws. To do this there are four things a board of directors should consider:

  • Ensure Compliance with Laws: Verify that Generative AI (GAI) adheres to copyright laws and data protection regulations such as GDPR or CCPA. Implement safeguards to ensure the system does not infringe upon users’ privacy or autonomy.
  • Prevent Discrimination: Conduct thorough audits to ensure that GAI outputs are fair and free from discrimination. Discriminatory outcomes can damage reputations and result in legal challenges.
  • Monitor for Misinformation: GAI systems must be designed to resist distortion by misinformation or disinformation. Mechanisms should be in place to quickly halt GAI operations if harmful behaviors are detected.
  • Develop Policies and Oversight: Establish clear policies and procedures that govern the use of GAI within your business. This includes implementing human oversight to ensure AI actions align with ethical and legal standards.
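The bias audits recommended above can start simply. As a hypothetical sketch (the group labels, loan-approval scenario, and 80% threshold below are illustrative assumptions, not an OECD requirement), one widely used screen is the "four-fifths rule": compare favorable-outcome rates across groups and flag any group whose rate falls below 80% of the highest group's rate.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate for each group.

    decisions: list of (group_label, outcome) pairs, outcome 1 (favorable) or 0.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` of the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical loan-approval outcomes by demographic group:
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% approved
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% approved
print(four_fifths_check(sample))  # group B is flagged: 0.25 / 0.75 < 0.8
```

A failed screen like this does not prove illegal discrimination, but it is exactly the kind of documented, repeatable check that supports the audit and oversight duties described above.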

Principle 3. Transparency and Explainability

Transparency and explainability are fundamental to user trust in AI systems. This principle calls for AI systems to be transparent so that users can understand how decisions are made. With complex AI algorithms, it is often difficult to decipher how certain outcomes are generated—a problem referred to as the “black box” issue in AI.


While transparency enables users to scrutinize AI decisions, the challenge lies in making these highly technical systems comprehensible to non-experts. This requires a good education program by experts. Moreover, explainability must strike a balance between safeguarding intellectual property and providing adequate insight into AI operations, especially when used in public sector decision-making.

Businesses and other organizations must ensure that employees and other users of their computer systems understand when and how AI is used, along with some understanding of how AI decisions are made, and what mistakes to look out for. See e.g. Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots (e-Discovery Team, October 2024). For businesses, ensuring transparency starts with a critical step:

  • Inform Users: Be transparent with employees, consumers, and stakeholders that GAI is being used. Where required by law, obtain explicit consent from users before collecting or processing their data.

Principle 4. Robustness, Security, and Safety

This principle demands that AI systems be resilient, secure, and reliable. As AI systems are increasingly integrated into sectors like healthcare, transportation, and critical infrastructure, their reliability is essential. A malfunctioning AI in these areas could result in dire consequences, from life-threatening medical errors to catastrophic failures in critical systems.


Cybersecurity is a significant concern, as more advanced AI systems become attractive targets for hackers. The OECD recognizes the importance of safeguarding AI systems and other systems from security breaches. All organizations today must guard against malicious attacks to protect their data and public safety. Organizations using AI must adopt a comprehensive set of IT security policies. Two key action points that the Board should start with are:

  • Plan for Contingencies: Implement a Cybersecurity Incident Response Plan that outlines steps to take if the AI or other technology system malfunctions or behaves in an undesirable manner. This plan should detail how to quickly halt operations, troubleshoot issues, and safely decommission the system if necessary. You should probably have legal specialists on call in case your systems are hacked.
  • Ensure Security and Safety: Businesses should continuously monitor their technology and AI systems to ensure they operate securely and safely under various conditions. Regular audits, including red team testing, can help detect vulnerabilities before they become significant problems.
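The "quickly halt operations" step in a contingency plan can be approximated in code by a simple circuit breaker: a hypothetical monitor (the window size and failure-rate threshold here are illustrative tuning assumptions, not a standard) that trips and refuses further AI requests once recent failures exceed a limit.

```python
from collections import deque

class CircuitBreaker:
    """Halt calls to an AI system when the recent failure rate is too high.

    Illustrative only: window size and threshold are hypothetical choices
    an operations team would tune for its own systems.
    """
    def __init__(self, window=20, max_failure_rate=0.5):
        self.results = deque(maxlen=window)   # recent True/False outcomes
        self.max_failure_rate = max_failure_rate
        self.tripped = False

    def record(self, success: bool) -> None:
        self.results.append(success)
        failures = self.results.count(False)
        if failures / len(self.results) > self.max_failure_rate:
            self.tripped = True   # stop routing traffic to the AI system

    def allow_request(self) -> bool:
        return not self.tripped

breaker = CircuitBreaker(window=4, max_failure_rate=0.5)
for ok in [True, False, False, False]:   # three failures out of four
    breaker.record(ok)
print(breaker.allow_request())  # False: the breaker has tripped
```

In a real incident response plan, tripping the breaker would also page the on-call team and trigger the troubleshooting and decommissioning steps the plan spells out.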

Principle 5. Accountability

Accountability in AI development and use is paramount. This principle asserts that those involved in creating, deploying, and managing AI systems must be held accountable for their impacts. Human oversight is critical to safeguard against mistakes, biases, or unintended consequences. This is another application of “trust but verify” on a management level. This is particularly relevant in scenarios where AI systems are set up to help make decisions affecting people’s lives, such as loan approvals, hiring decisions, or judicial sentencing. These systems should never be fully autonomous; they should make recommendations with a human in charge. This is especially true for physical security systems.

A clear accountability framework is critical. The accountability principle ensures that even in highly automated systems, human oversight is necessary to safeguard against mistakes, biases, or unintended consequences. The Board of Directors should, as a starting point:

  • Designate Responsible Parties: Assign specific individuals or departments to oversee the AI system’s operations. These stakeholders must maintain comprehensive documentation, including data sets used for training, decisions made throughout the AI lifecycle, and records of how the system performs over time.
  • Conduct Risk Assessments: Periodically evaluate the risks associated with AI, particularly in relation to the system’s outputs and decision-making processes. Regular assessments help ensure the system continues to function as intended and complies with ethical standards.
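The documentation duty above lends itself to structured, append-only records. A minimal sketch, assuming a JSON-lines log file and hypothetical field names (no mandated schema exists in the OECD Principles), might look like:

```python
import json
import time

def log_ai_decision(path, system, inputs_summary, output, reviewer):
    """Append one AI decision record to a JSON-lines audit log.

    Field names are illustrative assumptions, not a required schema.
    """
    record = {
        "timestamp": time.time(),      # when the decision was made
        "system": system,              # which AI system produced it
        "inputs_summary": inputs_summary,
        "output": output,
        "human_reviewer": reviewer,    # the accountable person
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_decision("audit.jsonl", "loan-model-v2",
                      "applicant #1042 (redacted)", "recommend approve", "j.doe")
print(rec["human_reviewer"])  # j.doe
```

Append-only records like these make the periodic risk assessments concrete: reviewers can replay what the system recommended, when, and who was responsible for the final call.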

Strengths and Weaknesses of the OECD AI Principles

The OECD AI principles are ambitious and reflect a comprehensive effort to create a global framework for responsible AI. However, while these guidelines are strong, they are not without their weaknesses.

Strengths

  • Comprehensive Ethical Guidelines: The principles cover a broad spectrum of ethical concerns, making them a strong foundation for policy guidance.
  • Global Influence: As an international standard, the OECD AI Principles provide a respected baseline for countries worldwide, not just the U.S. This allows for a coordinated approach to AI governance.
  • Commitment to Human Rights: By centering AI development on human dignity and rights, the OECD ensures that ethical concerns remain at the forefront of AI advancements.

Weaknesses

  • Lack of Enforcement: One of the significant drawbacks is the absence of enforcement mechanisms. The principles serve as guidelines, but without penalties for non-compliance, their effectiveness could be limited. A Board should add appropriate enforcement procedures that align with its existing policies.
  • Ambiguity in Accountability: While the principle of accountability is emphasized, the specifics of assigning responsibility in complex AI systems remain unclear.

In addition to the OECD international Principles, businesses should consult other frameworks to strengthen their AI governance strategies. For example, the NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (7/26/24) provides much more detailed, technical guidance into managing the risks associated with AI technologies. Organizations may also want to consider the U.S. Department of State Risk Management Profile for Artificial Intelligence and Human Rights. It states that it is intended as a practical guide for organizations to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.

Conclusion

Implementation of the OECD's five AI Principles is an essential step toward the responsible development of AI technologies. While the principles address key concerns such as human rights, transparency, and accountability, they also highlight the need for ongoing international collaboration and governance. Many countries outside the U.S. have, for instance, much stronger laws and regulations governing user privacy. Following the OECD Principles can help with regulatory compliance and demonstrate an organization's good-faith effort to navigate complex regulatory systems.


By relying on multiple AI frameworks, not just the OECD's, businesses and their Boards can ensure a comprehensive approach to AI implementation. In the rapidly evolving field of AI, where state and foreign laws change quickly, it is prudent for any CEO or Board of Directors to base its policies on stable, well-respected principles. That can help establish good-faith efforts to handle AI responsibly. Consultation with knowledgeable outside legal counsel is, of course, an important part of all corporate governance, including AI implementation.

Documenting Board decisions and tying them back to internationally accepted AI standards is a good practice for any organization, local or global. It may not protect all of a company's decisions from outside attack based on unfair 20/20 hindsight, but it should provide a solid foundation for good-faith defenses. This is especially true if these principles are adopted proactively and implemented with advice from respected third-party advisors. We are facing rapidly changing times, with both great opportunities and dangers. We all need to make our best efforts to act responsibly, and the OECD principles can help us do that.

Click here to listen to an AI-generated podcast discussing the material in this article.

Ralph Losey Copyright 2024 — All Rights Reserved


DefCon Chronicles: Where Tech Elites, Aliens and Dogs Collide – Series Opener

August 21, 2023

by Ralph Losey

From Boris to Bots: Our First Dive into the DefCon Universe. This begins a series of blogs chronicling the infamous DefCon event in Las Vegas. The next installment will cover President Biden's unprecedented request for hackers to attend DefCon to hack AI, and the hackers' enthusiastic response, including that of reporter and AI hacker Ralph Losey, to compete to break existing AI software in an open contest. In addition, nearly all of the top cybersecurity leadership of the White House and Department of Homeland Security personally attended DefCon, including the Secretary of Homeland Security himself, Alejandro Mayorkas. They came to help officially open the conference and stayed to give multiple policy statements and answer hacker questions. It was a true breakthrough moment in cyber history.

Boris seems unimpressed by his official DefCon Dog award

I attended DefCon 31, on August 10-15, 2023, as independent Press, accompanied by my co-reporter daughter, a former lobbyist with an English Lit background, and her dog, Boris. Our press status with special green badge had a high price tag, but it gave us priority access to everything. It also facilitated our interaction with notable figures, from the White House Science Advisor, Arati Prabhakar, to DefCon’s enigmatic founder, Dark Tangent.

DefCon is the world's largest tech hacker "conference" – more like an inter-dimensional portal at the Caesars Forum. When we first checked in, we happened to meet the leader of DefCon Press and P.R. She fell for little Boris in a handbag and declared him the official DefCon 31 dog! What an honor. Way to go, Boris, who everyone thinks is a Chihuahua but is really a Russian Terrier. Nothing is as it seems at DefCon. The guy you see walking around in shorts, who looks like a bearded punk rocker, may actually be a senior NSA fed. We will tell you why the NSA was there later in this series.

At DefCon, we immersed ourselves in a diverse crowd of over 24,000 elite tech experts from across the globe. This included renowned names in Cybersecurity, notably the formidable red team professionals. Most of these hackers are law-abiding entrepreneurs, as well as members of top corporate and federal red and blue teams. Several thousand were there just to answer President Biden’s call for hackers everywhere to come to DefCon to compete to break AI. Such a request had never been made before. Much more on this later, including my joining in the AI competition.

The tech experts, hackers all, came together for the thirty-first year of DefCon. We were drawn to participate in, and in our case also report on, the hundreds of large and small lectures and other educational events, demonstrations and vendor exhibitions. In addition, the really big draw was, as usual, the dazzling array of hacker challenges and competitions. Some of these were quite serious, with major prizes and reputations at stake, and required pre-qualification and success in entry rounds. But most were open to all who showed up.

Picture walking into a football stadium, but in place of athletes, you’re surrounded by the world’s tech elite, each donning distinctive hacker attire. As we flooded in by the thousands, it was a blend of seasoned pros and enthusiastic fans. I counted myself among the fans, yet I eagerly took on several challenges, such as the AI red team event. The sheer diversity and expertise of all participants was impressive.

The entrance boasted a towering, thirty-foot neon sparkling mural that caught my eye immediately. I’ve refined the photo to focus on the mural, removing the surrounding crowds. And, just for fun, there’s an alien addition.

Ralph entering Defcon 31

The open competitions came in all shapes and sizes: hacker vs. computers and machines of all types, including voting machines, satellites and cars; hacker vs. hacker contests; and hacker teams against hacker teams in capture the flag type contests. An article will be devoted to these many competitions, not just the hacker vs. AI contest that I entered.

There was even a pre-event writing contest for the best hacker-themed short story, with the winner announced at DefCon. I did not win, but had fun trying. My story followed the designated theme, was set in part at DefCon, and was a kind of sci-fi cyber dystopia involving mass shootings, with AI and gun control to the rescue. The DefCon rules allowed no illustrations, just text, but, of course, I later had to add pictures, one of which is shown below. I'll write another article on that fiction-writing contest too. There were many submissions, most farther-out and better than my humble effort. After submission, I was told that most seemed to involve AI in some manner. It's in the air.

Operation Veritas - short story by R. Losey
Illustration by Ralph for his first attempt at writing fiction, submitted for judging in the DefCon 31 writing competition.

So many ideas and writing projects are now in our heads from these four days in Vegas. One of my favorite lectures, which I will certainly write about, was by a French hacker who shared that he is in charge of cybersecurity for a nuclear power plant. In a heavy French accent, he presented to a large crowd a study he led on science fiction, including a statistical analysis of genres and how often sci-fi predictions come true. All of DefCon seemed like a living sci-fi novel to us, and I am pretty sure there were multiple aliens safely mingling with the crowd.

We offer this first DefCon 31 chronicle as an appetizer for many more blogs to come. This opening provides just a glimpse of the total mind-blowing experience. The official DefCon 31 welcome trailer does a good job of setting the tone for the event. Enlarge to full screen and turn up the volume for best effect!

DefCon 31 official welcome video

Next is a brief teaser description and image of our encounter with the White House Science Advisor, Dr. Arati Prabhakar. She and her government cyber and AI experts convinced President Biden to issue a call for hackers to come to DefCon to try to break (hack) the new AI products. This kind of red-team effort is needed to help keep us all safe. The response from tech experts worldwide was incredible; over a thousand hackers, myself included, waited in a long line every day for a chance to hack the AI.

We signed a release form and were then led to one of fifty or more restricted computers. There we read the secret contest instructions, started the timer, and tried to jailbreak the AI in multiple scenarios. In quiet solo efforts, with no outside tools allowed and constant monitoring to prevent cheating, we tried to prompt ChatGPT-4 and other software to say or do something wrong, to make errors and hallucinate. I had one success. This testing of AI vulnerabilities is very helpful to AI companies, including OpenAI. I will write about it in much greater detail in a later article, as AI and Policy were my favorite of the dozens of tracks at DefCon.

A lot of walking was required to attend the event, and a large chill-out room provided a welcome reprieve. DJs played music there, usually as a quiet background. A hundred decorated tables invited you to sit down, relax, and, if you felt like it, chat, eat and drink. The company was good; everyone was courteous to me, even though I was press. The food was pretty good too. I also had the joy of someone "paying it forward" in the food line, which was a first for me. Here is a glimpse of the chill-out scene from the official video by DefCon Arts and Entertainment. Feel it. As the song says, "no one wants laws on their body." Again, go full screen with volume up for this great production.

Defcon 31 Chill Out room, open all day, with video by Defcon Arts and Entertainment, DefConMusic.org

As a final teaser for our DefCon chronicles, check out my AI-enhanced photo of Arati Prabhakar, whose official title is Director of the Office of Science and Technology Policy. She is a close advisor of the President and a member of the Cabinet. Yes, that means she has seen all of the still top-secret UFO files. In her position, and with her long DOD history, she knows as much as anyone in the world about the very real dangers posed by ongoing cyber-attacks and the seemingly MAD race to weaponize AI. Yet, somehow, she keeps smiling and projects an aura of restrained confidence, albeit she did seem somewhat skeptical at times of her bizarre surroundings at DefCon, and who knows what other sights she has been privy to. Some of the questions she was asked about AI did seem strange and alien to me.

Arati Prabhakar speaking on artificial intelligence, its benefits and dangers, Photoshop, beta version, enhancements by Ralph Losey

Stay tuned for more chronicles. Our heads are exploding with new visuals, feelings, intuitions and ideas. They are starting to come together as new connections are made in our brains' neural networks. Even a GPT-5 could not predict exactly what we will write and illustrate next. All we know for certain is that these ongoing chronicles will include videotapes of our interviews and of presentations attended, including two mock trials of hackers, as well as our transcripts, notes, impressions and many more AI-enhanced photos. All videos and photos will, of course, have full privacy protection for other participants who do not consent, as the strict rules of DefCon require. If you are a human, AI or alien, and feel that your privacy rights have been violated by any of this content, please let us know and we will fuzz you out fast.

DefCon 31 entrance photo by Def Con taken before event started

Ralph Losey Copyright 2023 (excluding the two videos, photo and mural art, which are Def Con productions).