DefCon Chronicles: Where Tech Elites, Aliens and Dogs Collide – Series Opener

August 21, 2023

From Boris to Bots: Our First Dive into the DefCon Universe. This begins a series of blogs chronicling the infamous DefCon event in Las Vegas. The next installment will cover President Biden’s unprecedented request for hackers to attend DefCon to hack AI, and the hackers’ enthusiastic response, including that of reporter and AI hacker Ralph Losey, who joined the open contest to break existing AI software. In addition, nearly all of the top cybersecurity leadership of the White House and Department of Homeland Security personally attended DefCon, including the Secretary of Homeland Security himself, Alejandro Mayorkas. They came to help officially open the conference and stayed to give multiple policy statements and answer all hacker questions. It was a true breakthrough moment in cyber history.

Boris seems unimpressed by his official DefCon Dog award

I attended DefCon 31, held August 10-13, 2023, as independent Press, accompanied by my co-reporter daughter, a former lobbyist with an English Lit background, and her dog, Boris. Our press status, with its special green badge, had a high price tag, but it gave us priority access to everything. It also facilitated our interaction with notable figures, from the White House Science Advisor, Arati Prabhakar, to DefCon’s enigmatic founder, Dark Tangent.

DefCon is the world’s largest tech hacker “conference” – more like an inter-dimensional portal at the Caesars Forum. When we first checked in, we happened to meet the leader of DefCon Press and P.R. She fell for little Boris in a handbag and declared him the official DefCon 31 dog! What an honor. Way to go, Boris, who everyone thinks is a Chihuahua but is really a Russian Terrier. Nothing is as it seems at DefCon. The guy you see walking around in shorts, who looks like a bearded punk rocker, may actually be a senior NSA fed. We will tell you why the NSA was there later in this series.

At DefCon, we immersed ourselves in a diverse crowd of over 24,000 elite tech experts from across the globe. This included renowned names in Cybersecurity, notably the formidable red team professionals. Most of these hackers are law-abiding entrepreneurs, as well as members of top corporate and federal red and blue teams. Several thousand were there just to answer President Biden’s call for hackers everywhere to come to DefCon to compete to break AI. Such a request had never been made before. Much more on this later, including my joining in the AI competition.

The tech experts, hackers all, came together for the thirty-first year of DefCon. We were drawn to participate in, and in our case also report on, the hundreds of large and small lectures and other educational events, demonstrations and vendor exhibitions. In addition, the really big draw was, as usual, the dazzling array of hacker challenges and competitions. Some of these are quite serious, with major prizes and rep at stake, and require pre-qualification and success in entry rounds. But most were open to all who showed up.

Picture walking into a football stadium, but in place of athletes, you’re surrounded by the world’s tech elite, each donning distinctive hacker attire. As we flooded in by the thousands, it was a blend of seasoned pros and enthusiastic fans. I counted myself among the fans, yet I eagerly took on several challenges, such as the AI red team event. The sheer diversity and expertise of all participants was impressive.

The entrance boasted a towering, thirty-foot neon sparkling mural that caught my eye immediately. I’ve refined the photo to focus on the mural, removing the surrounding crowds. And, just for fun, there’s an alien addition.

Ralph entering Defcon 31

The open competitions came in all shapes and sizes: hacker vs. computers and machines of all types, including voting machines, satellites and cars; hacker vs. hacker contests; and hacker teams against hacker teams in capture the flag type contests. An article will be devoted to these many competitions, not just the hacker vs. AI contest that I entered.

There was even a writing contest before the event to compete for the best hacker-themed short story, with the winner announced at DefCon. I did not win, but had fun trying. My story followed the designated theme, was set in part at DefCon, and was a kind of sci-fi, cyber dystopia involving mass shootings, with AI and gun control to the rescue. The DefCon rules did not allow illustrations, just text, but, of course, I later had to add pictures, one of which is shown below. I’ll write another article on that fiction writing contest too. There were many submissions, most farther-out and better than my humble effort. After submission, I was told that most seemed to involve AI in some manner. It’s in the air.

Operation Veritas - short story by R. Losey
Illustration by Ralph for his first attempt at writing fiction, submitted for judging in the DefCon 31 writing competition.

So many ideas and writing projects are now in our heads from these four days in Vegas. One of my favorite lectures, which I will certainly write about, was by a French hacker, who shared that he is in charge of cybersecurity for a nuclear power plant. He presented, in a heavy French accent, to a large crowd on a study he led of science fiction. It included statistical analysis of genres, and how often sci-fi predictions come true. All of DefCon seemed like a living sci-fi novel to us, and I am pretty sure there were multiple aliens safely mingling with the crowd.

We provide this first DefCon 31 chronicle as an appetizer for the many more blogs to come. This opening provides just a glimpse of the total mind-blowing experience. The official DefCon 31 welcome trailer does a good job of setting the tone for the event. Enlarge to full screen and turn up the volume for best effect!

DefCon 31 official welcome video

Next is a brief teaser description and image of our encounter with the White House Science Advisor, Dr. Arati Prabhakar. She and her government cyber and AI experts convinced President Biden to issue a call for hackers to come to DefCon to try to break (hack) the new AI products. This kind of red team effort is needed to help keep us all safe. The response from tech experts worldwide was incredible: over a thousand hackers, myself included, waited in a long line every day for a chance to hack the AI.

We signed a release form and were then led to one of fifty or more restricted computers. There we read the secret contest instructions, started the timer, and tried to jailbreak the AI in multiple scenarios. In quiet solo efforts, with no outside tools allowed and constant monitoring to prevent cheating, we tried to prompt ChatGPT-4 and other software to say or do something wrong, to make errors and hallucinate. I had one success. The testing of AI vulnerabilities is very helpful to AI companies, including OpenAI. I will write about this in much greater detail in a later article, as AI and Policy were my favorite of the dozens of tracks at DefCon.

A lot of walking was required to attend the event, and a large chill-out room provided a welcome reprieve. DJs played music there, usually as a quiet background. There were a hundred decorated tables to sit down, relax, and, if you felt like it, chat, eat and drink. The company was good; everyone was courteous to me, even though I was press. The food was pretty good too. I also had the joy of someone “paying it forward” in the food line, which was a first for me. Here is a glimpse of the chill-out scene from the official video by DefCon Arts and Entertainment. Feel it. As the song says, “no one wants laws on their body.” Again, go full screen with volume up for this great production.

DefCon 31 Chill Out room, open all day, with video by DefCon Arts and Entertainment.

As a final teaser for our DefCon chronicles, check out my AI-enhanced photo of Arati Prabhakar, whose official title is Director of the Office of Science and Technology Policy. She is a close advisor of the President and a member of the Cabinet. Yes, that means she has seen all of the still top secret UFO files. In her position, and with her long DOD history, she knows as much as anyone in the world about the very real dangers posed by ongoing cyber-attacks and the seemingly MAD race to weaponize AI. Yet, somehow, she keeps smiling and portrays an aura of restrained confidence, although she did seem somewhat skeptical at times of her bizarre surroundings at DefCon, and who knows what other sights she has been privy to. Some of the questions she was asked about AI did seem strange and alien to me.

Arati Prabhakar speaking on artificial intelligence, its benefits and dangers, Photoshop, beta version, enhancements by Ralph Losey

Stay tuned for more chronicles. Our heads are exploding with new visuals, feelings, intuitions and ideas. They are starting to come together as new connections are made in our brains’ neural networks. Even a GPT-5 could not predict exactly what we will write and illustrate next. All we know for certain is that these ongoing chronicles will include videos of our interviews and of presentations attended, including two mock trials of hackers, as well as our transcripts, notes, impressions and many more AI-enhanced photos. All videos and photos will, of course, have full privacy protection for other participants who do not consent, as the strict rules of DefCon require. If you are a human, AI or alien, and feel that your privacy rights have been violated by any of this content, please let us know and we will fuzz you out fast.

DefCon 31 entrance photo by Def Con taken before event started

Ralph Losey Copyright 2023 (excluding the two videos, photo and mural art, which are Def Con productions).

Rule for All Congressional Staff on Use of Chatbots: Only ChatGPT Plus is Allowed

June 27, 2023
Images generated by Losey using Midjourney and Photoshop

On June 26, 2023, all of the staff of Congress received a confidential memo on the use of chatbots. It was leaked by a staffer the same day. Below is a copy, now freely available everywhere on the Internet. The memo restricts the use of chatbots to OpenAI’s ChatGPT Plus with privacy settings on. Other use restrictions are established, including that it may only be used for test purposes, not as part of any workflow.

If you are an employer, you should have some kind of employee use restriction too, especially if any of your employees work with confidential information. That includes most every organization I can think of. Restrictions should also apply to co-owners and anyone else handling your confidential information.

By copying and sharing these use restrictions for Congressional employees, I am not in any way recommending or endorsing these particular restrictions or this policy language. In fact, I would grade this as a C+ effort, better than nothing. Note the restrictions do not apply to Representatives and Senators, just their employees. My suggestion is that you consult with your own attorney about this right away.

Privacy is important. Confidentiality of government and business information is important. You probably do not want your organization to leak as badly as Congress, or the White House for that matter. Take care if you use chatbots or other artificial intelligence.

Fake Midjourney image. This is not really happening, yet.

Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED

OpenAI’s Best Practices For Using GPT Software

June 16, 2023

OpenAI’s new guide, GPT Best Practices, provides six strategies, with supporting tactics, to maximize the effectiveness of Generative Pre-trained Transformers (GPTs) like ChatGPT-4. The information provided is very detailed, with many technical suggestions. The key message of OpenAI’s best practices guide is that while GPTs are capable of generating intricate, human-like text, user input and guidance are vital to attain the best outcomes. The better the prompts you make, the better the answers GPT will provide. Success in large part depends on you.

Success of Your AI Output Depends on You

This blog will summarize the six main strategies outlined by OpenAI and include my original Midjourney images, finished in Photoshop, for right-brain impact. ChatGPT-4, the web-browsing Pro version, helped me to write this, as did my WordPress software. It is all one big hybrid, multimodal effort.

Best Practices to Add an AI to Your Team

Here is a synopsis of the six fundamental strategies provided by OpenAI for obtaining optimal results from GPTs:

  1. Write clear instructions: The AI cannot infer user intent, hence the need for clarity. If users require shorter responses, they should ask for brevity. For more technical outputs, they should request expert-level writing. If a particular format is desired, the user should demonstrate that format. Essentially, the clearer the instructions, the more accurate the GPT’s output.
  2. Provide reference text: GPTs can sometimes fabricate answers, especially when dealing with complex or unfamiliar topics. Providing reference texts can help guide the GPT to produce more accurate and reliable responses.
  3. Split complex tasks into simpler subtasks: GPTs are better at handling simpler tasks, which have lower error rates. A complex task can be broken down into a series of simpler tasks, with the output of earlier tasks used to construct the inputs for subsequent ones.
  4. Give GPTs time to “think”: GPTs can make errors when required to provide instant responses. Asking for a chain of reasoning before an answer can help GPTs reason their way to more accurate conclusions.
  5. Use external tools: To compensate for GPTs’ limitations, the outputs of other tools can be utilized. If a task can be done more reliably or efficiently by another tool, it should be offloaded to that tool.
  6. Test changes systematically: To improve GPT performance, any changes made to a prompt should be tested systematically. A modification might improve performance in some instances but worsen it in others, so it’s crucial to test these changes across a range of examples.
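The first strategy is easiest to see in code. Below is a minimal sketch of “write clear instructions” in practice: a helper that builds a chat request stating a persona, delimiting the input text, and specifying the output format. The message-list shape follows the common OpenAI chat-completion format, but the helper function, persona and example values are my own illustrative assumptions, not from OpenAI’s guide.

```python
def build_messages(task, text,
                   persona="You are a veteran e-discovery attorney.",
                   output_format="Answer in exactly three bullet points."):
    """Build a chat message list that follows 'write clear instructions':
    adopt a persona, delimit the source text, and state the output format."""
    system = f"{persona} {output_format}"
    user = (f"{task}\n\n"
            f'The text is delimited by triple quotes:\n"""{text}"""')
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# The resulting list can be passed to any chat-completion endpoint.
msgs = build_messages("Summarize the key legal risks.",
                      "Vendor shall indemnify Client for all data breaches...")
```

The point of the sketch is that every element of the instruction (who the model should be, where the input starts and ends, what shape the answer must take) is explicit, leaving the model nothing to infer.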
Give Your GPT External Tools

These strategies can be implemented through specific tactics, each tailored to the particular strategy:

  1. For writing clear instructions, tactics include providing important details in your query, asking the model to adopt a persona, using delimiters to distinguish different parts of the input, specifying the steps required to complete a task, providing examples (known as “few-shot” prompting), and specifying the desired output length.
  2. When providing reference text, include sufficient additional context or source material for the GPT to understand the reference.
  3. To split complex tasks into simpler subtasks, the approach is to break down the task into a workflow of smaller, more manageable tasks.
  4. For giving GPTs time to “think”, the article suggests asking the AI for a chain of reasoning before providing an answer, allowing it to work out a more accurate response.
  5. In using external tools, the idea is to use the outputs of other tools to complement the abilities of the GPT. For instance, a text retrieval system or a code execution engine can be used to augment the GPT’s abilities.
  6. For testing changes systematically, the suggestion is to develop a comprehensive test suite, also known as an “eval”, to measure the impact of modifications made to prompts.
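The last tactic, the “eval” test suite, can also be sketched in a few lines. The harness below is my own illustrative example, not OpenAI’s code: the model is stubbed out as a plain callable so the scoring logic is visible without any API call, and a real run would simply swap the stub for a chat-completion request.

```python
def run_eval(model, prompt_template, cases):
    """Score one prompt template against a fixed suite of (input, expected)
    cases, so two prompt variants can be compared on identical data."""
    hits = 0
    for text, expected in cases:
        answer = model(prompt_template.format(text=text))
        hits += int(answer.strip().lower() == expected.lower())
    return hits / len(cases)

# Stub standing in for a real chat-completion call: echoes the last word.
stub_model = lambda prompt: prompt.split()[-1]

cases = [("the sky is blue", "blue"), ("grass is green", "green")]
score = run_eval(stub_model, "Answer with one word: {text}", cases)  # 1.0 here
```

Because the suite is fixed, any change to the prompt template produces a directly comparable score, which is exactly the systematic testing OpenAI recommends.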
Give your AI time to think things through

The bottom line of the information provided here is that although GPTs are capable tools, the quality of their output depends on: the clarity of the instructions they receive; sufficient context to any reference text provided; the ability to decompose complex tasks into simpler ones; having the time to “think”; the use of external tools when necessary; and, the systematic testing of changes. If you learn to use these strategies and tactics, you can significantly enhance the effectiveness of your interactions with all of OpenAI’s GPT models.

Make Sure Your Instructions Are Clear


Implementing the strategies and tactics suggested by OpenAI can, indeed, help users get the most out of ChatGPT. It seems to me OpenAI should have provided these instructions at launch. Maybe then so many newbies (and really, we are all newbies with this new software) would not have complained so much about the accuracy, relevance and quality of its outputs. Basically, OpenAI is invoking the old saying, “Garbage In, Garbage Out.”


I hope we see many more instructions like this from OpenAI in the coming months. In the meantime, there are hundreds of software hackers who have attained some level of prompt engineering skill and are already sharing their prompting tips. I have even ventured into this territory by sharing some of my more interesting prompt experiments, such as: Prompting a GPT-4 “Hive Mind” to Dialogue with Itself on the Future of Law, AI and Adjudications; and ChatGPT-4 Prompted To Talk With Itself About “The Singularity”.

“The Singularity” May Arrive Someday, But We Are Nowhere Near That. AI Still Needs Our Help For It To Better Help Us.

As OpenAI points out, the Six Strategies and Tactics given here can be used in a variety of ways. It all depends, as lawyers love to say, on the particular use case. It also depends on the capabilities of the particular GPT model you use. There are already many variations, with 3.5 being the first and weakest.

As always, I encourage everyone to go hands-on with this. Hack around with this new software yourself. If you are a lawyer or other professional handling client confidential information, be extremely careful in its use for all client work. Make sure you engage privacy settings and do not expose client confidential information. Legal ethics and common sense also require that you verify very carefully all of the output of GPT, especially in these early days. Your trust level should be low and your skepticism high.

Emerge and Be “Hands” On, But Remain Vigilant

So go ahead, experiment and adapt these six strategies to suit your needs and requirements. Just remember, it may seem like you are dealing with a great savant here, but never forget ChatGPT is an idiot savant. Just a child really, but with a big vocabulary. It is prone to forgetfulness, memory limitations, hallucinations, outright errors, ethics jailbreaks, and many, many other humanlike foibles. It may seem like a genius in a box, but it is not. It is more like a bottom-of-the-class law student who somehow sounds smarter than he is, especially to non-experts. Still, he did somehow get into law school and might be able to pass your state’s Bar Exam.

See my blog for many articles about ChatGPT’s many unique foibles. Finally, note my e-Discovery Team blog now has a handy new, easy to remember HTML address – EDISCOVERY.TEAM. Yup, team is a domain name and you don’t have to remember to put a hyphen between the e and d. Yes, we humans are prone to forgetfulness too.

Best Efforts Are Diverse Team Efforts, Including an AI Team Member

Ralph Losey 2023 Copyright — ALL RIGHTS RESERVED

Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act

June 7, 2023
All Images by Ralph Losey Using Midjourney and Photoshop. (All images here of Senator Schumer are totally fake.)

In a speech on June 21, 2023, to the Center for Strategic and International Studies (CSIS), Senate Majority Leader Charles Schumer explained the plan that his technical advisors have formulated for regulation of AI. The speech writers used well-crafted language to make many important regulatory suggestions.

Favorite Quotes from Schumer’s Speech

I have studied the speech carefully and begin this article by sharing a few of my personal favorite quotes.

Change is the law of life, more so now than ever. Because of AI, change is happening to our world as we speak in ways both wondrous and startling.

It was America that revolutionized the automobile. We were the first to split the atom, to land on the moon, to unleash the internet, and create the microchip that made AI possible. AI could be our most spectacular innovation yet, a force that could ignite a new era of technological advancement, scientific discovery, and industrial might. So we must come up with a plan that encourages, not stifles, innovation in this new world of AI. And that means asking some very important questions.

AI promises to transform life on earth for the better. It will shape how we fight disease, how we tackle hunger, manage our lives, enrich our minds, and ensure peace. But there are real dangers too – job displacement, misinformation, a new age of weaponry, the risk of being unable to manage this technology altogether.

Even if many developers have good intentions there will always be rogue actors, unscrupulous companies, foreign adversaries, that will seek to harm us. Companies may not be willing to insert guardrails on their own, certainly not if their competitors won’t be forced to do so.

If we don’t program these algorithms to align with our values, they could be used to undermine our democratic foundations, especially our electoral processes.

Senator Schumer, 6/21/23

Proposed Legislative Initiative

Senator Schumer’s speech was based on a five-point outline for proposed legislation, called the SAFE Innovation Framework, with SAFE an acronym for “Security, Accountability, Foundations, Explainability.” This is further set out in Senator Schumer’s press release, which summarizes the points as follows:

1. Security: Safeguard our national security with AI and determine how adversaries use it, and ensure economic security for workers by mitigating and responding to job loss;

2. Accountability: Support the deployment of responsible systems to address concerns around misinformation and bias, support our creators by addressing copyright concerns, protect intellectual property, and address liability;

3. Foundations: Require that AI systems align with our democratic values at their core, protect our elections, promote AI’s societal benefits while avoiding the potential harms, and stop the Chinese Government from writing the rules of the road on AI;

4. Explain: Determine what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data, or content.

5. Innovation: Support US-led innovation in AI technologies – including innovation in security, transparency and accountability – that focuses on unlocking the immense potential of AI and maintaining U.S. leadership in the technology.

In elaborating on Security, a key issue for any government to focus on, Schumer said in his speech:

First comes security – for our country, for American leadership, and for our workforce. We do not know what artificial intelligence will be capable of two years from now, 50 years from now, 100 years from now, in the hands of foreign adversaries, especially autocracies, or domestic rebel groups interested in extortionist financial gain or political upheaval. The dangers of AI could be extreme. We need to do everything we can to instill guardrails that make sure these groups cannot use our advances in AI for illicit and bad purpose. But we also need security for America’s workforce, because AI, particularly generative AI, is already disrupting the ways tens of millions of people make a living.

Schumer 6/21/23

Summary of Senator Schumer’s Speech

Here is a short summary, made by ChatGPT-4, of the key points of Senator Schumer’s long speech. I checked the GPT output for accuracy and found no mistakes, but, sorry to say, baby chatbot, I did have to make several edits to bring the writing quality up to an acceptable level.

Senator Chuck Schumer’s speech at the CSIS focused on the significance and impact of artificial intelligence (AI) in contemporary society. He drew parallels between the ongoing AI revolution and the historical industrial revolution, emphasizing the potential for transformative effects on various aspects of life, such as healthcare, lifestyle management, and cognitive enhancement. However, he also highlighted the associated risks, including job displacement, misinformation, and the development of advanced weaponry.

To address these challenges, Senator Schumer advocated for proactive involvement by the US government and Congress in regulating AI. He proposed the SAFE Innovation Framework for AI Policy, which aims to balance the benefits and risks of AI while prioritizing innovation. The framework consists of two main components: a structured action plan and a collaborative policy formulation process involving AI experts.

This is the sole image created here by Losey using Adobe’s Firefly

The proposed framework seeks to address crucial questions related to collaboration and competition among AI developers, the necessary level of federal intervention, the balance between private and open AI systems, and ensuring accessibility and fair competition for innovation. Schumer outlined the SAFE (Security, Accountability, Foundations, Explainability) Innovation Framework as a means to ensure national and workforce security, accountability for the impact of AI on jobs and income distribution, and explainability of AI systems. He warned against potential disruptions similar to those caused by globalization, emphasizing the need for proper management to prevent job losses.

Schumer stressed the importance of shaping AI development and deployment in a manner that upholds democracy and individual rights. He cautioned against the misuse of AI technology, such as tracking individuals, exploiting vulnerable populations, and interfering with electoral processes through fabricated content. The senator emphasized the necessity of establishing accountability in AI practices and protecting intellectual property rights. Unregulated AI development, he warned, could jeopardize the foundations of liberty, civil rights, and justice in the United States.

Transparency and user understanding of AI decisions were identified as key factors in maintaining accountability. Schumer called on companies to develop mechanisms that allow users to comprehend how AI algorithms arrive at specific answers while respecting intellectual property. To facilitate discussions and consensus-building on AI challenges, he proposed organizing ‘AI insight forums’ with top AI developers, executives, scientists, advocates, community leaders, workers, and national-security experts. The insights gained from these forums would inform legislative action and lay the groundwork for AI policy.

Again, all of the Schumer photos are fake, created by Midjourney with prompts by Losey.

In conclusion, Schumer urged Congress, the federal government, and AI experts to adopt a proactive and inclusive approach in shaping the future of AI in the United States. He emphasized the necessity of embracing AI and ensuring its safe development for the benefit of society as a whole. This will require bipartisan cooperation, that sets aside ideological differences and self-interest, to tackle the complex, rapidly evolving field of AI. This collective effort, he asserted, would ensure that AI innovation serves humanity’s best interests while upholding the nation’s democratic principles.

Personal Analysis

This proposal is a good start for AI regulation. I especially like the linkage between innovation and regulation. I only hope enough politicians will put partisan bickering aside to unite on this key issue. For the sake of coming generations, we need to get this right the first time.

AI will soon make the Internet look like small potatoes. We screwed up the development and regulation of the early Internet, big time. It was completely unregulated. Few could see the potential. Our blinders are off now. We all see the potential of AI, and we must not get fooled again.

As a long time BBS user, including the big, pre-Internet online services like CompuServe and The Source, I was one of the first lawyers on the Internet. I even had my website challenged by the Florida Bar because they thought it was an unapproved television advertisement. I was able to get the Florida Bar to change its rules, then was invited to lecture all around Florida where I encouraged lawyers and judges to get into computers and try the Internet.

Ralph was really into computers in 90s. Created by Midjourney with help of an old photo.

The World Wide Web then was still a wonderful, interlinked place of learning, academic resources and friendly discussions, with just a few flames (rude, angry comments) that online communities quickly put out. Then the commercial exploitation began and it exploded in size. It went from a technical, community BBS mentality to big business. Then we allowed our privacy to become the product. The end result is the mess you see today. If AI regulation is ignored, there are far greater dangers ahead.

When the Internet was still young, in 1996, Macmillan found me on the Internet and asked me to write a chapter on the law of the Internet. It was for a new edition of a then best-selling book explaining everything about the Internet. The very thick book even came with a CD. Your Cyber Rights and Responsibilities: The Law and Etiquette of the Internet, Chapter 2 of Que’s Special Edition Using the Internet (Macmillan 3rd Ed., 1996). The subheadings of my lengthy chapter, which included numerous case links, should be familiar: “free speech and association on the Internet; the libel and slander limitation; the important distinction between Internet publisher and distributor; obscenity limitations, privacy, copyright, and fair trade on the Internet; and, protecting yourself from crime on the Internet.” Id.

I kept with my script and encouraged readers in 1996 to try the Internet, just like I am encouraging readers today to try generative AI. I assured readers then that it was safe, that: “You have important legal rights and responsibilities in cyberspace, just like anywhere else.” My big warning concerned the dangers of computer viruses. Like most “computer lawyers” back then (that’s what we were called), I expected the Internet to continue to be a reasonable place of intellectual discourse. In retrospect, I realize my naïveté and unrealistic optimism. Today my AI tech encouragement comes with warnings and calls for regulation.

Most online lawyers in the mid-nineties thought that the pre-cyberspace laws on the books would be adequate; individual citizens could self-regulate the Internet and prevent its exploitation. Lawyers would help. We did not want the help of Big Brother government. We were wrong. The Internet without regulation quickly became a dangerous, crass, commercial mess where billions of people were tricked into trading their personal privacy for cheap thrills.

Older now, I am still optimistic. That part is hard-wired in. But I am no longer naive. We must regulate AI, and do it now. If the U.S. abdicates its legal leadership role, the E.U. will step in, or worse, the People’s Republic of China. The E.U., which I greatly admire in many respects, seems likely to over-regulate, make everything a bureaucratic mess and stifle innovation. We do not want that.

If no government does anything to regulate, which is essentially what happened when the Internet was born with the WWW in the early 90s, the hustlers will take over again. So will the dictators of the world. Only this time it will be worse, far worse, because now the tyrannical foreign powers, and the criminals and terrorists everywhere, know and understand the power of AI. Few in the 90s realized the impact of the Internet. The 21st Century evil-doers have already started to use AI for their self-serving greed and attempts at world domination. I agree with this quote from the Schumer speech.

What if foreign adversaries embrace this technology to interfere in our elections? This is not about imposing one viewpoint, but it’s about ensuring people can engage in democracy without outside interference. This is one of the reasons we must move quickly. We should develop the guardrails that align with democracy and encourage the nations of the world to use them. Without taking steps to make sure AI preserves our country’s foundations, we risk the survival of our democracy.

Senator Schumer, 6/21/23

Fear the people who misuse Ai – the terrorists, criminals and foreign agents – and not the Ai itself. That is why the U.S. needs to prepare good Ai regs now and follow up with vigorous enforcement. Our democratic way of life hangs in the balance. We should not fall into the “paralysis by analysis” trap. We should not put off taking action based on the excuse that things are moving too fast now to regulate. This is Congressman Ted Lieu’s current approach, a politician with a background in computer science whom I otherwise admire.

Fake image of Congressman Lieu and killer robot by Losey using Midjourney

Congressman Lieu on June 20, 2023 said in an interview on MSNBC’s “Morning Joe”:

“I’m not even sure we would know what we’re regulating at this point because it’s moving so quickly. . . . And so, some of these harms may in fact happen, but maybe they don’t happen. Or maybe we see some new harm.”

This sounds like dangerous procrastination to me. Ai is not going to slow down and stop changing so that Congress can leisurely study it some more. The danger is real and it is happening now. Congress needs to start actually doing something. If need be, we can always revise or enact more laws later. Remember, perfect is the enemy of good. Senator Schumer’s technical advisors and speech writers have it right. We need to convene the expert Forums now and get down to the details of legislation that implements the SAFE ideas.

Still, I do have a couple of criticisms. All of the goals of the SAFE policy are good, but, in my view, one goal not emphasized enough by SAFE is the need for the government to ensure the availability of free, unbiased education for all. Retraining and quality GPT-based tutoring must be open-sourced and freely available.

Another point that should be emphasized is fairness in the distribution of the new wealth that will arise from Ai. The recent McKinsey Report predicts a $4.4 Trillion increase in the economy from generative Ai. See: McKinsey Predicts Generative AI Will Create More Employment and Add 4.4 Trillion Dollars to the Economy. This new wealth must be more fairly distributed than in the last economic boom triggered by the Internet and globalism.


Senator Schumer’s next step to advance the proposed regulation is to refine the SAFE Innovation plan and build consensus. He is asking for help from “creators, innovators, and experts in the field.” That means the politically well-connected or famous. The Senator said that he will soon “invite top AI experts to come to Congress and convene a series of first ever AI insight forums for a new and unique approach to developing AI legislation.” Senator Schumer Speech, 6/21/23. If you have friends in high places and get an invite to a forum, I hope you will go and be heard.

Although I like to be in the arena, I have no political contacts or fame; I have never been one to cultivate contacts and play politics. I am far too outspoken and idealistic for that. I am just a Florida native living in the dangerous backwoods of the country, far from the D.C., N.Y. and Silicon Valley arenas. Still, I will keep reporting on the government activities. I hope to persuade as many decision makers as possible, by these writings, right-brain graphics, and occasional talks, to take action now.

We need government to protect us from the abusers, those who would exploit, and already are exploiting, Ai for their personal goals and not the greater good. We need an intelligent blueprint for regulation, one that still encourages innovation and distribution of these powerful new tools. The SAFE Innovation proposal looks like a good start.

Copyright Ralph Losey 2023 – ALL RIGHTS RESERVED – (May also be Published on and with permission.)