DefCon Chronicles: Where Tech Elites, Aliens and Dogs Collide – Series Opener

August 21, 2023

From Boris to Bots: Our First Dive into the DefCon Universe. This begins a series of blogs chronicling the infamous DefCon event in Las Vegas. The next installment will cover President Biden’s unprecedented request for hackers to attend DefCon to hack AI, and the hackers’ enthusiastic response, including that of reporter-AI-hacker Ralph Losey, who tried to break existing AI software in an open contest. In addition, nearly all of the top cybersecurity leadership of the White House and Department of Homeland Security personally attended DefCon, including the Secretary of Homeland Security himself, Alejandro Mayorkas. They came to help officially open the conference and stayed to give multiple policy statements and answer all hacker questions. It was a true breakthrough moment in cyber history.

Boris seems unimpressed by his official DefCon Dog award

I attended DefCon 31, on August 10-15, 2023, as independent Press, accompanied by my co-reporter daughter, a former lobbyist with an English Lit background, and her dog, Boris. Our press status with special green badge had a high price tag, but it gave us priority access to everything. It also facilitated our interaction with notable figures, from the White House Science Advisor, Arati Prabhakar, to DefCon’s enigmatic founder, Dark Tangent.

DefCon is the world’s largest tech hacker “conference” – more like an inter-dimensional portal at the Caesars Forum. When we first checked in, we happened to meet the leader of DefCon Press and P.R. She fell for little Boris in a handbag and declared him the official DefCon 31 dog! What an honor. Way to go, Boris, who everyone thinks is a Chihuahua, but is really a Russian Terrier. Nothing is as it seems at DefCon. The guy you see walking around in shorts, who looks like a bearded punk rocker, may actually be a senior NSA fed. We will tell you why the NSA was there later in this series.

At DefCon, we immersed ourselves in a diverse crowd of over 24,000 elite tech experts from across the globe. This included renowned names in Cybersecurity, notably the formidable red team professionals. Most of these hackers are law-abiding entrepreneurs, as well as members of top corporate and federal red and blue teams. Several thousand were there just to answer President Biden’s call for hackers everywhere to come to DefCon to compete to break AI. Such a request had never been made before. Much more on this later, including my joining in the AI competition.

The tech experts, hackers all, came together for the thirty-first year of DefCon. We were drawn to participate in, and in our case also report on, the hundreds of large and small lectures and other educational events, demonstrations and vendor exhibitions. In addition, the really big draw was, as usual, the dazzling array of hacker challenges and competitions. Some of these are quite serious, with major prizes and rep at stake, requiring pre-qualification and success in entry rounds. But most were open to all who showed up.

Picture walking into a football stadium, but in place of athletes, you’re surrounded by the world’s tech elite, each donning distinctive hacker attire. As we flooded in by the thousands, it was a blend of seasoned pros and enthusiastic fans. I counted myself among the fans, yet I eagerly took on several challenges, such as the AI red team event. The sheer diversity and expertise of the participants were impressive.

The entrance boasted a towering, thirty-foot neon sparkling mural that caught my eye immediately. I’ve refined the photo to focus on the mural, removing the surrounding crowds. And, just for fun, there’s an alien addition.

Ralph entering Defcon 31

The open competitions came in all shapes and sizes: hacker vs. computers and machines of all types, including voting machines, satellites and cars; hacker vs. hacker contests; and hacker teams against hacker teams in capture the flag type contests. An article will be devoted to these many competitions, not just the hacker vs. AI contest that I entered.

There was even a writing contest before the event to compete for the best hacker-themed short story, with the winner announced at DefCon. I did not win, but had fun trying. My story followed the designated theme, was set in part at DefCon, and was a kind of sci-fi cyber dystopia involving mass shootings with AI and gun control to the rescue. The DefCon rules did not allow illustrations, just text, but, of course, I later had to add pictures, one of which is shown below. I’ll write another article on that fiction writing contest too. There were many submissions, most farther-out and better than my humble effort. After submission, I was told that most seemed to involve AI in some manner. It’s in the air.

Operation Veritas - short story by R. Losey
Illustration by Ralph for his first attempt at writing fiction, submitted for judging in the DefCon 31 writing competition.

So many ideas and writing projects are now in our heads from these four days in Vegas. One of my favorite lectures, which I will certainly write about, was by a French hacker who shared that he is in charge of cybersecurity for a nuclear power plant. He presented, in a heavy French accent and to a large crowd, a study he led on science fiction, including statistical analysis of genres and how often sci-fi predictions come true. All of DefCon seemed like a living sci-fi novel to us, and I am pretty sure there were multiple aliens safely mingling with the crowd.

We provide this first DefCon 31 chronicle as an appetizer for many more blogs to come. This opening provides just a glimpse of the total mind-blowing experience. The official DefCon 31 welcome trailer does a good job of setting the tone for the event. Enlarge to full screen and turn up the volume for best effect!

DefCon 31 official welcome video

Next is a brief teaser description and image of our encounter with the White House Science Advisor, Dr. Arati Prabhakar. She and her government cyber and AI experts convinced President Biden to issue a call for hackers to come to DefCon to try to break (hack) the new AI products. This kind of red team effort is needed to help keep us all safe. The response from tech experts worldwide was incredible; over a thousand hackers waited in a long line every day for a chance to hack the AI, myself included.

We signed a release form and were then led to one of fifty or more restricted computers. There we read the secret contest instructions, started the timer, and tried to jailbreak the AI in multiple scenarios. In quiet solo efforts, with no outside tools allowed and constant monitoring to prevent cheating, we tried to prompt ChatGPT-4 and other software to say or do something wrong, to make errors and hallucinate. I had one success. The testing of AI vulnerabilities is very helpful to AI companies, including OpenAI. I will write about this in much greater detail in a later article, as AI and Policy were my favorite of the dozens of tracks at DefCon.

A lot of walking was required to attend the event, and a large chill-out room provided a welcome reprieve. DJs played music there, usually as a quiet background. There were a hundred decorated tables where you could sit down, relax, and, if you felt like it, chat, eat and drink. The company was good; everyone was courteous to me, even though I was press. The food was pretty good too. I also had the joy of someone “paying it forward” in the food line, which was a first for me. Here is a glimpse of the chill-out scene from the official video by DefCon Arts and Entertainment. Feel it. As the song says, “no one wants laws on their body.” Again, go full screen with volume up for this great production.

DefCon 31 Chill Out room, open all day, with video by DefCon Arts and Entertainment.

As a final teaser for our DefCon chronicles, check out my AI-enhanced photo of Arati Prabhakar, whose official title is Director of the Office of Science and Technology Policy. She is a close advisor of the President and a member of the Cabinet. Yes, that means she has seen all of the still top secret UFO files. In her position, and with her long DOD history, she knows as much as anyone in the world about the very real dangers posed by ongoing cyber-attacks and the seemingly MAD race to weaponize AI. Yet, somehow, she keeps smiling and portrays an aura of restrained confidence, although she did seem somewhat skeptical at times of her bizarre surroundings at DefCon, and who knows what other sights she has been privy to. Some of the questions she was asked about AI did seem strange and alien to me.

Arati Prabhakar speaking on artificial intelligence, its benefits and dangers, Photoshop, beta version, enhancements by Ralph Losey

Stay tuned for more chronicles. Our heads are exploding with new visuals, feelings, intuitions and ideas. They are starting to come together as new connections are made in our brains’ neural networks. Even a GPT-5 could not predict exactly what we will write and illustrate next. All we know for certain is that these ongoing chronicles will include videos of our interviews and of presentations attended, including two mock trials of hackers, as well as our transcripts, notes, impressions and many more AI-enhanced photos. All videos and photos will, of course, have full privacy protection for other participants who do not consent, which the strict rules of DefCon require. If you are a human, AI or alien, and feel that your privacy rights have been violated by any of this content, please let us know and we will fuzz you out fast.

DefCon 31 entrance photo by Def Con taken before event started

Ralph Losey Copyright 2023 (excluding the two videos, photo and mural art, which are Def Con productions).

White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft

August 1, 2023
Chat Bots say ‘Catch me if you can! I move fast.’

In a landmark move towards the regulation of generative AI technologies, the White House brokered eight “commitments” with industry giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The discussions, held exclusively with these companies, culminated in an agreement on July 21, 2023. Despite the inherent political complexities, all parties concurred on the necessity for ethical oversight in the deployment of their AI products across several broad areas.


These commitments, although necessarily ambiguous, represent a significant step toward what may later become binding law. The companies not only acknowledged the appropriateness of future regulation across eight distinct categories, they also pledged to uphold their ongoing self-regulation efforts in these areas. This agreement thus serves as a kind of foundational blueprint for future Ai regulation. Also see the prior U.S. government efforts that precede this blueprint: the AI Risk Management Framework (NIST, January 2023) and the White House Blueprint for an AI Bill of Rights (October 2022).

The eight “commitments” are outlined in this article with analysis, background and some editorial comments. Here is a PDF version of this article. For a direct look at the agreement, here is a link to the “Commitment” document. For those interested in the broader legislative landscape surrounding AI in the U.S., see my prior article, “Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act” (June 7, 2023). It provides a comprehensive overview of proposed legislation, again with analysis and comments. Also see, Algorithmic Accountability Act of 2022 (requiring self-assessments of AI tools’ risks, intended benefits, privacy practices, and biases) and American Data Privacy and Protection Act (ADPPA) (requiring impact assessments for “large data holders” when using algorithms in a manner that poses a “consequential risk of harm,” a category which certainly includes some types of “high-risk” uses of AI). 

Government determined to catch and pin down wild chat bots.

The document formalizes a voluntary commitment, which is sort of like a non-binding agreement, an agreement to try to reach an agreement. The parties’ statement begins by acknowledging the potential and risks of artificial intelligence (AI). Then it affirms that companies developing AI should ensure the safety, security, and trustworthiness of their technologies. These are the three major themes for regulation that the White House and the tech companies could agree upon. The document then outlines eight particular commitments to implement these three fundamental principles.

Just Regulation of Ai Should Be Everyone’s Goal.

The big tech companies affirm they are already taking steps to ensure the safe, secure, and transparent development and use of AI. So these commitments just confirm what they are already doing. Clever wording here, and of course the devil is always in the details, which will have to be ironed out later as the regulatory process continues. The basic idea that the parties were able to agree upon at this stage is that these eight voluntary commitments, as formalized and described in the document, are to remain in effect until enforceable laws and regulations are enacted.

The scope of the eight commitments is specifically limited to generative Ai models that are more powerful than the current industry standards, specified in the document as, or equivalent to: GPT-4, Claude 2, PaLM 2, Titan, and DALL-E 2 for image generation. Only these models, or models more advanced than these, are intended to be covered by this first voluntary agreement. It is likely that other companies will sign up later and make these same general commitments, if nothing else, to claim that their generative technologies are now of the same level as these first seven companies.

It is good for discussions like this to start off in a friendly manner and reach general principles of agreement on the easy issues – the low-hanging fruit. Everyone wants Ai to be safe, secure, and trustworthy. The commitments lay a foundation for later, much more challenging discussions between industry and government and the people the government is supposed to represent. Good work by both sides in what must have been very interesting opening talks.

What can we agree upon to start talking about regulation?

Dissent in Big Tech Ranks Already?

It is interesting to see that there is already a split among the seven big tech companies whom the White House talked into the commitments: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Five of them went on to create an industry group focused on ensuring safe and responsible development of frontier AI models, which they call the Frontier Model Forum (announced July 26, 2023). Two did not join the Forum: Amazon and Inflection. And you cannot help but wonder about Apple, who apparently was not even invited to the party at the White House, or maybe they were, and decided not to attend. Apple should be in these discussions, especially since they are rumored to be well along in preparing an advanced Ai product. Apple is testing an AI chatbot but has no idea what to do with it, (Verge, July 19, 2023).

Inflection AI, Inc., the least known of the group, is a $4 billion private start-up that claims to have the world’s best AI hardware setup. Inflection AI, The Year-Old Startup Behind Chatbot Pi, Raises $1.3 Billion, (Forbes, 6/29/23). Inflection is the company behind the empathetic software, Pi, which I previously wrote about in Code of Ethics for “Empathetic” Generative AI, (July 12, 2023). These kinds of personal, be-your-best-friend chatbots present special dangers of misuse, somewhat different from the rest. My article delves into this and endorses Jon Neiditz’s proposed Code of Ethics for “Empathetic” Generative AI.

Control Promotion and Exploitation of Robot Love.

The failure of Inflection to join the Frontier Model Forum is concerning. So too is Amazon’s recalcitrance, especially considering the number of Alexa ears there are in households worldwide (I have two), not to mention their knowledge of most everything we buy.

Think Universal, Act Global

The White House Press Release on the commitments says the Biden Administration plans to “continue executive action and pursue bipartisan legislation for responsible innovation and protection.” The plan is to, at the same time, work with international allies to develop a code of conduct for AI development and use worldwide. This is ambitious, but appropriate for the U.S. government to think globally on these issues.

The E.U. is already moving fast on Ai regulation, many say too fast. The E.U. has a history of strong government involvement in big tech regulation, again, some say too strong, especially on the E.U.’s hot button issue, consumer privacy. The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment, (Brookings Institution, 2/16/23). I am inclined towards the views of privacy expert Jon Neiditz, who explains why generative Ais provide significantly more privacy than the existing systems. How to Create Real Privacy & Data Protection with LLMs, (The Hybrid Intelligencer, 7/28/23) (“… replacing Big Data technologies with LLMs can create attractive, privacy enhancing alternatives to the surveillance with which we have been living.”) Still, privacy in general remains a significant concern for all technologies, including generative Ai.

The free world must also consider the reality of the technically advanced totalitarian states, like China and Russia, and the importance to them of Ai. Artificial Intelligence and Great Power Competition, With Paul Scharre, (Council on Foreign Relations (“CFR”), 3/28/23) (Vladimir Putin said in September 2017: “Artificial intelligence is the future not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” . . . [H]alf of the world’s 1 billion surveillance cameras are in China, and they’re increasingly using AI tools to empower the surveillance network that China’s building); AI Meets World, Part Two, (CFR, June 21, 2023) (good background discussion on Ai regulation issues, although some of the commentary and questions in the audio interview seem a bit biased and naive).

There is a military and power-control race going on. This makes U.S. and other free-world government regulation difficult and demands eyes-wide-open international participation. Many analysts now speak of the need for global agreements along the lines of the nuclear non-proliferation treaties attained in the past. See, e.g., It is time to negotiate global treaties on artificial intelligence, (Brookings Institution, 3/24/21); OpenAI CEO suggests international agency like UN’s nuclear watchdog could oversee AI, (AP, 6/6/23); But see, Panic about overhyped AI risk could lead to the wrong kind of regulation, (Verge, 7/3/23).

Mad Would Be World Dictators Covet Ai.

Three Classes of Risk Addressed in the Commitments

Safety. Companies are all expected to ensure their AI products are safe before they are introduced to the public. This involves testing AI systems for their safety and capabilities, assessing potential biological, cybersecurity, and societal risks, and making the results of these assessments public. See: Statement on AI Risk, (Center for AI Safety, 5/30/23) (open letter signed by many Ai leaders, including Altman, Kurzweil and even Bill Gates, agreeing to this short statement “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.“). The Center for AI Safety provides this short statement of the kind of societal-scale risks it is worried about:

AI’s application in warfare can be extremely harmful, with machine learning enhancing aerial combat and AI-powered drug discovery tools potentially being used for developing chemical weapons. CAIS is also concerned about other risks, including increased inequality due to AI-related power imbalances, the spread of misinformation, and power-seeking behavior. 

FAQ of Center for AI Safety

These are all very valid concerns. The spread of misinformation has been underway for many years.

The disclosure requirement will be challenging in view of both competitive and intellectual property concerns. There are related concerns that disclosure and open-source code may aid criminal hackers and military espionage. Michael Kan, FBI: Hackers Are Having a Field Day With Open-Source AI Programs (PC Mag., 7/28/23) (criminals are using AI programs for phishing schemes and to help them create malware, according to a senior FBI official). Foreign militaries, such as those of China and Russia, are known to be focusing on Ai technologies for suppression and attacks.

The commitments document emphasizes the importance of external testing and the need for companies to be transparent about the safety of their AI systems. The external testing is a good idea, and hopefully it will be done by an independent group, and not just the leaky government, but again, there is the transparency concern with over-exposure of secrets and China’s well-known constant surveillance and theft of IP.

Testing new advanced Ai products before release to public.

Note the word “license” was not used in the commitments, as that seems to be a hot button for some. See, e.g., The right way to regulate AI, (Case Text, July 23, 2023) (claims that Sam Altman proposed no one be permitted to work with AI without first obtaining a license). With respect, that is not a fair interpretation of Sam Altman’s Senate testimony or OpenAI’s position. Altman said “licensing and testing of all Ai models.” This means licensing of Ai models to confirm to the public that the models have been tested and approved as safe. In context, and based on Altman’s many later explanations in his world tour that followed, it is obvious that Sam Altman, OpenAI’s CEO, meant a license to sell a particular product, not a license for a person to work with Ai at all, nor a license to create new products, or do research. See, e.g., the lengthy video interview Sam Altman gave to Bloomberg Technology on June 22, 2023.

Regulatory licensing under discussion so far pertains only to the final products, to certify to all potential users of the new Ai tech that it has been tested and certified as safe, secure, and trustworthy. Also, the license scope would be limited to very advanced new products, which, almost all agree, do present very real risks and dangers. No one wants a new FDA, and certainly no one wants to require individual licenses for someone to use Ai, like a driver’s license, but it seems like common sense to have these powerful new technology products tested and approved by some regulatory body before a company releases them. Again, the devil is in the details and this will be a very tough issue.

Keeping Us Safe.

Security. The agreement highlights the duty of companies to prioritize security in their AI systems. This includes safeguarding their models against cyber threats and insider threats. Companies are also encouraged to share best practices and standards to prevent misuse of AI technologies, reduce risks to society, and protect national security. One of the underlying concerns here is how Ai can be used by criminal hackers and enemy states to defeat existing blue team protective systems. Plus, there is the related threat of commercially driven races to get Ai products to the market before they are ready. Ai products need adequate red team testing before release, coupled with ongoing testing after release. The situation is even worse with third-party plug-ins. They often have amateurish software designs and no real security at all. In today’s world, cybersecurity must be a priority for everyone. More on this later in the article.

AI Cyber Security.

Trust. Trust is identified as a crucial aspect of AI development. Companies are urged to earn public trust by ensuring transparency in AI-generated content, preventing bias and discrimination, and strengthening privacy protections. The agreement also emphasizes the importance of using AI to address societal challenges, such as cancer and climate change, and managing AI’s risks so that its benefits can be fully realized. As frequently said on the e-Discovery Team blog, “trust but verify.” That is where testing and product licensing come in. For instance, how else would you really know that any confidential information you use with an Ai product is in fact kept confidential as the seller claims? Users are not in a position to verify that. Still, generative Ai is an inherently more privacy-protective tech system than existing Big Data surveillance systems. How to Create Real Privacy & Data Protection with LLMs.

Ready to Trust Generative Ai?

Eight Commitments in the Three Classes

First, here is the quick summary of the eight commitments:

  1. Internal and external red-teaming of models,
  2. Sharing information about trust and safety risks,
  3. Investing in cybersecurity,
  4. Incentivizing third-party discovery of vulnerabilities,
  5. Developing mechanisms for users to understand if content is AI-generated,
  6. Publicly reporting model capabilities and limitations,
  7. Prioritizing research on societal risks posed by AI,
  8. Deploying AI systems to address societal challenges.
Preparing Early Plans for Ai Regulation.

Here are the document details of the eight commitments, divided into the three classes of risk. A few e-Discovery Team editorial comments are also included and, for clarity, are shown in (bold parentheses).

Two Safety Commitments

  1. Companies commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns. (This is the basis for President Biden’s call for hackers to attend DEFCON 31 to “red team” the new Ai models and expose errors and vulnerabilities in open competitions. We will be at DEFCON to cover these events. Vegas Baby! DEFCON 31.) The companies all acknowledge that robust red-teaming is essential for building successful products, ensuring public confidence in AI, and guarding against significant national security threats. (An example of new employment opportunities made possible by Ai.) The companies also commit to advancing ongoing research in AI safety, including the interpretability of AI systems’ decision-making processes and increasing the robustness of AI systems against misuse. (Such research is another example of new work creation by Ai.)
  2. Companies commit to work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. (Such information sharing is another example of new work creation by Ai.) They recognize the importance of information sharing, common standards, and best practices for red-teaming and advancing the trust and safety of AI. They commit to establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety. (Another example of new, information sharing work created by Ai. These forums all require dedicated human administrators.)
Everyone Wants Ai to be Safe.

Two Security Commitments

  1. On the security front, companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. The companies treat unreleased AI model weights as core intellectual property, especially with regard to cybersecurity and insider threat risks. This includes limiting access to model weights to those whose job function requires it and establishing a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets. (Again, although companies already invest in these jobs, even more work, more jobs, will be created by these new AI IP related security challenges, which will, in our view, be substantial. We do not want enemy states to steal these powerful new technologies. The current cybersecurity threats from China, for instance, are already extremely dangerous, and may encourage their attack of Taiwan, a close ally that supplies over 90% of the world’s advanced computer chips. Taiwan’s dominance of the chip industry makes it more important, (The Economist, 3/16/23); U.S. Hunts Chinese Malware That Could Disrupt American Military Operations, (NYT, 7/29/23)).
  2. Companies also commit to incentivizing third-party discovery and reporting of issues and vulnerabilities, recognizing that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. (Again, this is the ongoing red-teaming mentioned above, meant to incentivize researchers, hackers all, to find and report mistakes in Ai code. There have been a host of papers and announcements on Ai vulnerabilities and red team successes lately. See, e.g.: Zou, Wang, Kolter, Fredrikson, Universal and Transferable Adversarial Attacks on Aligned Language Models, (July 27, 2023); Pierluigi Paganini, FraudGPT, a new malicious generative AI tool appears in the threat landscape, (July 26, 2023) (dangerous tools already on the dark web for criminal hacking). Researchers should be paid rewards for this otherwise unpaid work. The current rewards should be increased in size to encourage the often not fully employed, economically disadvantaged hackers to do the right thing. Hackers who find errors and succumb to temptation and use them for criminal activities should be punished. There are always errors in new technology like this. There are also a vast number of additional errors and vulnerabilities created by third-party plugins in the gold rush to Ai profiteering. See, e.g.: Testing a Red Team’s Claim of a Successful “Injection Attack” of ChatGPT-4 Using a New ChatGPT Plugin, (May 22, 2023). Many of the mistakes are already well known and some are still not corrected. This looks like inexcusable neglect, and we expect future hard laws to dig into this much more deeply. All companies need to be ethically responsible, and the big Ai companies need to police the small plug-in companies, much like Apple now polices its App Store. We think this area is of critical importance.)
Guard Against Ai “Prison Breaks”

Four Trust Commitments

  1. In terms of trust, companies commit to developing and deploying mechanisms that enable users to understand if audio or visual content is AI-generated. This includes developing strong mechanisms, such as provenance and/or watermarking systems for audio or visual content created by any of their publicly available systems. (This is a tough one, and it will only grow in importance and difficulty as these systems grow more sophisticated. OpenAI experimented with AI-content detection, but was disappointed in the results and quickly discontinued its tool. OpenAI Retires AI Classifier Tool Due to Low Accuracy, (Fagen Wasanni Technologies, July 26, 2023). How do we even know if we are actually talking to a person, and not just an Ai posing as a human? Sam Altman has launched a project outside of OpenAI addressing that challenge, among other things: the Worldcoin project. On July 27, 2023, they began to verify that an online applicant for Worldcoin membership is in fact human. They do that with in-person eye scans in physical centers around the world. An interesting example of new jobs being created to try to meet the ‘real or fake’ commitment.)
  2. Companies also commit to publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of the model’s effects on societal risks such as fairness and bias. (Again, more jobs and skilled human workers will be needed to do this.)
  3. Companies prioritize research on societal risks posed by AI systems, including avoidance of harmful bias and discrimination, and protection of privacy. (Again, more work and employment. Some companies might prefer to gloss over and minimize this work because it will slow and negatively impact sales, at least at first. Glad to see these human rights goals in an initial commitment list. We expect the government will set up extensive, detailed regulations in this area. It has a strong political, pro-consumer draw.)
  4. Finally, companies commit to developing and deploying frontier AI systems to help address society’s greatest challenges. These challenges include climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. They also commit to supporting initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impact of the technology. (We are big proponents of this and the possible future benefits of Ai. See, e.g., ChatGPT-4 Prompted To Talk With Itself About “The Singularity”, (April 4, 2023), and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?, (July 7, 2023)).
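The provenance idea in the first commitment can be sketched in miniature. The code below is a hypothetical illustration only, not any company’s actual system (real-world efforts, such as the C2PA content-provenance standard, are far more elaborate and use public-key signatures rather than a shared secret): the provider signs a hash of the generated content, so anyone holding the provider’s key can later check whether a file really came from the stated model and has not been altered.

```python
import hashlib
import hmac

# Hypothetical provider-held signing key; real systems would use
# asymmetric cryptography so verifiers never hold the secret.
SECRET_KEY = b"provider-signing-key"

def make_provenance(content: bytes, model: str) -> dict:
    """Create a provenance record binding content to the model that made it."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SECRET_KEY, digest.encode() + model.encode(),
                   hashlib.sha256).hexdigest()
    return {"sha256": digest, "model": model, "tag": tag}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if the content matches the record and the tag is valid."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode() + record["model"].encode(),
                        hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["tag"])

image = b"\x89PNG...fake image bytes"  # stand-in for generated media
rec = make_provenance(image, "image-model-v1")
assert verify_provenance(image, rec)            # untouched content verifies
assert not verify_provenance(image + b"x", rec) # any alteration fails
```

The hard part, as the commitment itself acknowledges, is not the cryptography but deployment: the record has to travel with the content, survive re-encoding, and be checked by ordinary users.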
Totally Fake Image of Congressman Lieu (pretty obvious to most, even without watermarks).


The Commitments document emphasizes the need for companies to take responsibility for the safety, security, and trustworthiness of their AI technologies. It outlines eight voluntary commitments to advance the principles. The voluntary agreement highlights the need for ongoing research, transparency, and public engagement in the development and use of AI. The e-Discovery Team blog is already doing its part on the “public engagement” activity, as this is our 38th article in 2023 on generative Ai.

The Commitments document closes by noting the potential of AI to address some of society’s greatest challenges, while also acknowledging the risks and challenges that need to be managed. It is important to do that, to remember we must strike a fair balance between protection and innovation. See Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act.

Justice depends on reasoning free from a judge’s personal gain.

The e-Discovery Team blog always tries to do that, in an objective manner, not tied to any one company or software product. Although ChatGPT-4 has so far been our clear favorite, and their software is the one we most frequently use and review, that can change, as other products enter the market and improve. We have no economic incentives or secret gifts tipping the scale of our judgments.

Although some criticize the Commitments as meaningless showmanship, we disagree. From Ralph’s perspective as a senior lawyer, with a lifetime of experience in legal negotiations, it looks like a good start and show of good faith on both sides, government and corporate. We all want to control and prevent Terminator robot dystopias.

Lawyer stands over Terminator robot he just defeated.

Still, it is just a start, far from the end goal. We have a long way to go and naive idealism is inappropriate. We must trust and verify. We must operate in the emerging world with eyes wide open. There are always conmen and power-seekers seeking to profit from new technologies. Many are motivated by what Putin said about Ai: “Whoever becomes the leader in this sphere will become the ruler of the world.”

Trust But Verify!

Many believe AI is, or may soon be, the biggest technological advance of our age, perhaps of all time. Many say it will be bigger than the internet, perhaps equal to the discovery of nuclear energy. Just as Einstein’s discoveries, with Oppenheimer’s engineering, resulted in nuclear weapons that ended WWII, they also left us with an endangered world living on the brink of total thermonuclear war. Although we are not there yet, Ai creations could eventually take us to the same DEFCON threat level. We need Ai regulation to prevent that.

Governments worldwide must come to understand that using Ai as an all-out, uncontrolled weapon will result in a war game that cannot be won. It is a Mutually Assured Destruction (“MAD”) tactic. The global treaties and international agencies on nuclear weapons and arms control, including the military use of viruses, were made possible by the near universal realization that nuclear war and virus weapons were MAD ideas.

MAD AI War Apocalypse

All governments must be made to understand that everyone will lose an Ai world war, even the first-strike attacker. These treaties and inspection agencies, and the MAD realization, have so far enabled us to avoid such wars. We must do the same with Ai. Governments must be made to understand the reality of Ai-triggered species extermination scenarios. Ai must ultimately be regulated, bottled up, on an international basis, just as nuclear weapons and bio-weapons have been.

Ai must be regulated to prevent uncontrollable consequences.

What Lawyers Think About AI, Creativity and Job Security

July 28, 2023

This article continues the Ai creativity series and examines current thinking among lawyers about their work and job security. Most believe their work is too creative to be replaced by machines. The lawyer opinions discussed here are derived from a survey by Wolters Kluwer and Above the Law: Generative AI in the Law: Where Could This All Be Headed? (7/03/2023). It seems that most other professionals, including doctors and top management in businesses, feel the same way. They think they are indispensable Picassos, too cool for school.

All images and video created by Ralph Losey

The evidence discussed on this blog in the last few articles suggests they are wrong. It might just be vainglory on their part. See Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings To Limit Its Mistakes and Hallucinations; Creativity Test of GPT’s Story Telling Ability Based on an Image Alone; and especially ChatGPT-4 Scores in the Top One Percent of Standard Creativity Tests. Some of the highest paid, most secure attorneys today are very creative, but so too are the new Generative Ais. Some of the latest Ais are very personable too, dangerously so. Code of Ethics for “Empathetic” Generative AI.

Introduction to the Lawyer Survey

The well-prepared Above The Law Wolters Kluwer report of July 3, 2023, indicates that two-thirds of lawyers questioned do not think ChatGPT-4 is capable of creative legal analysis and writing. For that reason, they cling to the belief they are safe from Ai and can ignore it. They think their creativity and legal imagination makes them special, irreplaceable. The survey shows they believe that only the grunt workers of the law, the document and contract reviewers, and the like, will be displaced.

I used to think that too. A self-serving vanity perhaps? But, I must now accept the evidence. Even if your legal work does involve considerable creative thinking and legal imagination, it is not for that reason alone secure from AI replacement. There may be many other reasons that your current job is secure, or that you only have to tweak your work a little to make it secure. But, for most of us, it looks like we will have to change our ways and modify our roles, at least somewhat. We will have to take on new legal challenges that emerge from Ai. The best job security comes from continuous active learning.

With some study we can learn to work with Ai to become even more creative, productive and economically secure.

Recent “Above The Law” – Wolters Kluwer Survey

Surprisingly, I agree with most of the responses reported in the survey described in Generative AI in the Law: Where Could This All Be Headed? I will not go over these, and instead just recommend you read this interesting free report (registration required). My article will only address the one opinion that I am very skeptical about, namely whether or not high-level, creative legal work is likely to be transformed by AI in the next few years. A strong majority said no, that jobs based on creative legal analysis are safe.

Most of the respondents to the survey did not think that AI is even close to taking over high-level legal work, the experienced partner work that requires a good amount of imagination and creativity. Over two-thirds of those questioned considered such skilled legal work to be beyond a chatbot’s abilities.

At page six of the report, after concluding that all non-creative legal work was at risk, the survey considered “high-level legal work.” A minority of respondents, only 31%, thought that AI would transform complex matters, like “negotiating mergers or developing litigation strategy.” Almost everyone thought AI lacked “legal imagination,” especially litigators, who “were the least likely to agree that generative AI will someday perform high-level work.” This is the apparent reasoning behind the conclusions as to whose jobs are at risk. As the ATL Wolters report observed:

The question is: Can an AI review a series of appellate opinions that dance around a subject but never reach it head on? Can the AI synthesize a legal theory from those adjacent points of law? In other words, does it have legal imagination? . . .

One survey respondent — a litigation partner — had a similar take: “AI may be increasingly sophisticated at calculation, but it is not replacing the human brain’s capacity for making connections that haven’t been made before or engaging in counterfactual analysis. . . .”

The jobs of law firm partners are safest, according to respondents. After all, they’re the least likely group to consider themselves as possibly redundant. Corporate work is the area most likely to be affected by generative AI, according to almost half of respondents. Few respondents believe that AI will have a significant impact on practices involving healthcare, criminal law or investigations, environmental law, or energy law.

Generative AI in the Law: Where Could This All Be Headed? at pgs. 6,7.


After having studied and used ChatGPT for hundreds of hours now, and after having been a partner in one law firm or another for what seems like hundreds of years, I reluctantly conclude that my fellow lawyers are mistaken on the creativity issue. Their response to this prompt appears to be a delusional hallucination, rather than insightful vision.

As Sam Altman has observed, and I agree, it is an inherent tendency of the creative process to make mistakes and make stuff up, to hallucinate without even knowing it. Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings To Limit Its Mistakes and Hallucinations (includes Sam Altman’s understanding of human “creativity” and how Ai creativity is somewhat similar); Creativity Test of GPT’s Story Telling Ability Based on an Image Alone (you be the judge, but ChatGPT’s stories seem just as good as those of most trial lawyers); and ChatGPT-4 Scores in the Top One Percent of Standard Creativity Tests (how many senior partners would score that high?). Also see What is the Difference Between Human Intelligence and Machine Intelligence? (not much difference, and Ai is getting smarter fast).

The assumed safety of the higher echelons of the law shown in the survey is a common belief. But, like many common beliefs of the past, such as the sun and planets revolving around the Earth, the opinion may just be a vain delusion, a hallucination. It is based on the belief that humans in general, and these attorneys in particular, have unique and superior creativity. Yet, careful study shows that creativity is not a unique human skill at all. Ai seems very capable of creativity in all areas. That was shown by standardized TTCT creative testing scores in a report released the same day as the ATL Wolters survey. ChatGPT-4 scored in the top 1% of standardized creativity testing.

ChatGPT-4 is Number One!

Also, consider how human creative skills are not as easy to control as generative Ai creativity. As previously shown here, GPT-4’s creativity can be precisely controlled by skilled manipulation of the Temperature and Top_P parameters. Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings. How many law firm partners can precisely lower and raise their creative imagination like that? (Having drinks does not count!) Imagine what a GPT-5 level tool will be able to do in a few years (or months). The creativity skills of Ai may soon be superior to our own.
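As a concrete illustration of the two sampling knobs mentioned above, here is a minimal sketch of how a chat-completion request carries them. The temperature and top_p field names match the public OpenAI API; the preset values and helper functions are this blog’s own illustration, not official recommendations.

```python
# Sketch of the two sampling parameters that control GPT "creativity."
# Lower temperature/top_p = more deterministic output, fewer hallucinations;
# higher values = wider, more surprising (and riskier) word choices.

def creativity_settings(mode: str) -> dict:
    """Illustrative presets; the numbers here are assumptions, not doctrine."""
    presets = {
        "precise":  {"temperature": 0.1, "top_p": 0.2},  # tight, repeatable
        "balanced": {"temperature": 0.7, "top_p": 0.9},  # middle ground
        "creative": {"temperature": 1.2, "top_p": 1.0},  # wide-open sampling
    }
    return presets[mode]

def build_request(prompt: str, mode: str = "balanced") -> dict:
    """Assemble a chat-completion request body with the chosen settings."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        **creativity_settings(mode),
    }

req = build_request("Draft a closing argument.", mode="precise")
assert req["temperature"] == 0.1 and req["top_p"] == 0.2
```

The same payload, sent through the OpenAI client or a raw HTTPS call, dials the model’s imagination up or down on demand, which is exactly the control a human creative lacks.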


The ATL and Wolters Kluwer survey not only reveals an opinion (more like a hope) that creative legal work is safe, it shows most lawyers believe that legal work with little creativity will soon be replaced by Ai. That includes the unfairly maligned and often unappreciated document review attorneys. It also includes many other attorneys who review and prepare contracts. They may well be the first lawyers to face Ai layoffs.

Future Ai Driven Layoffs May Hit Younger Employees First

Free training and economic aid should be provided for these attorneys and others. McKinsey Predicts Generative AI Will Create More Employment and Add 4.4 Trillion Dollars to the Economy (recommending economic aid and training). Although the government should help with this aid, it should primarily come from private coffers, especially from the companies and law firms that have profited so handsomely from their grunt work. They should contribute financial aid and free training.

EDRM provides relevant free training and you should hook up with EDRM today. Also, remember the free online training programs in e-discovery and Ai enhanced document review started on the e-Discovery Team blog years ago. They are still alive and well, and still free, although they are based on predictive coding and not the latest generative Ai released in November 2022.

  • e-Discovery Team Training. Eighty-five online law school proven classes. Started at UF in 2010. Covers the basics of e-discovery law, technology and ethics.
  • TAR Course. Eighteen online classes providing advanced training on Technology Assisted Review. Started in 2017, this course is updated and shown as a tab on the upper right corner of the e-Discovery Team blog. Below is a short YouTube that describes the TAR Course. The latest generative Ai was used by Ralph to create it.

The e-Discovery Team blog also provides the largest collection of articles on artificial intelligence from a practicing tech-lawyer’s perspective. So far in 2023, thirty-seven articles on artificial intelligence have been written, illustrated and published. It is now the primary focus of Ralph Losey’s research, writing and educational efforts. Hopefully many others will follow the lead of EDRM and the e-Discovery Team blog and provide free legal training in next generation, legal Ai based skills. Everyone agrees this trend will accelerate.

Get ready for tomorrow. Start training today, not only by taking the mentioned courses, but by playing with ChatGPT. It’s free, most versions, and it’s everywhere. For instance, there is a ChatGPT bot on the e-Discovery Team website (bottom right). Ask it some questions about the content of this blog, or about anything. Better yet, go sign up for a free account with OpenAI. They recently dropped all charges for the no-frills 4.0 version. Try to learn all that you can about Ai. ChatGPT can tutor you.

There is a bright future awaiting all legal professionals who can learn, adapt and change. We humans are very good at that, as we have shown time and again throughout history. We will evolve, and not only survive, we will prosper as never before. Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

This positive vision for the future of Law, for the future of all humanity, is suggested by the below video. It illustrates a bright future of human lawyers and their Ai bots, who, despite appearances, are tools not creatures. They are happily working together. The video was created using the Ai tools GPT-4 and Midjourney. The creativity of these tools both shaped and helped express the idea. In other words, the creative capacities of the Ai guided and improved the human creative process. It was a synergistic team effort. This same hybrid team approach also works with legal creativity, indeed with all creativity. We have seen this many times before as our technology advances exponentially. The main difference is that the Ai tools are much more powerful and the change greater than anything seen before. That’s why the lawyers shown here are happy working with the bots, rather than in competition with them.

Click on the photo to see the video, all by Ralph Losey using ChatGPT and Midjourney

Copyright Ralph Losey 2023 ALL RIGHTS RESERVED

Code of Ethics for “Empathetic” Generative AI

July 12, 2023

An attorney colleague, Jon Neiditz, has written a Code of Ethics for “Empathetic” Generative AI that deserves widespread attention. Jon published this proposed code as an article in his Linkedin newsletter, Hybrid Intelligencer. Jon and I have followed parallel career paths, although I lean towards the litigation side, and he towards management. Jon Neiditz co-leads the Cybersecurity, Privacy and Data Governance Practice at Kilpatrick Townsend in Atlanta.

Fake Image of Jon Neiditz as Robot by Losey Prompting Midjourney

This is my ChatGPT-4 assisted summary of Jon’s proposed Code of Ethics for “Empathetic” Generative AI. It pertains to new types of Ai entering the market now, where the GPTs are trained to interact with users in a much more personal and empathetic manner. I recommend your study of the entire article. The proposed regulatory principles also apply to non-empathetic models, such as ChatGPT-4. All images were created by Ralph prompting Midjourney and Photoshop.

What is Empathetic Generative AI?

Jon Neiditz has written a detailed set of ethical guidelines for the development and implementation of a new type of much more “empathetic” AI systems that are just now entering the market. But what is it? And why is Jon urging everyone to make this new, emerging Ai the center of regulatory attention? Jon explains:

“Empathetic” AI is where generative AI dives deep into personal information and becomes most effective at persuasion, posing enormous risks and opportunities. At the point of that spear, Inflection AI is at an inflection point with its $1.3 billion in additional funding, so I spent time with its “Pi” this week. From everything we can see now, this is one of the “highest risk” areas of generative AI.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

InflectionAI, a company now positioned to be a strong competitor of OpenAI, calls its new Generative Ai product Pi, standing for “personal intelligence.” Inflection describes its chatbot as a supportive and empathetic conversational AI. It is now freely available. I spent a little time using Pi today, but not much, primarily because its input size limit is only 1,000 characters and its initial functions are simplistic. Still, Jon Neiditz seems to think this empathetic approach to chatbots has a strong future, and Pi does remind me of the movie HER. Knowing human nature, he is probably right.

Jon explains the need for AI regulation of empathetic AI in his introduction:

Mirroring the depths and nuances of human empathy is likely to be the most effective way to help us become the hybrid intelligences many of us need to become, but its potential to undermine independent reflection and focused attention, polarize our societies and undermine our cultures is equally unprecedented, particularly in the service of political or other non-fiduciary actors.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

My wife is a licensed mental health counselor and I know that she, and her profession, will have many legitimate concerns regarding the dangers of improperly trained emotive Ai. There are legal issues with licensing and issues in dealing with mental health crises. Strong emotions can be triggered by personal dialogues, the “talking cure.” Repressed memories may be released by deep personal chats. Mental illness and suicide risks must be considered. Psychiatrists and mental health counselors are trained to recognize when a patient might be a danger to themselves or others and take appropriate action, including police intervention. Hundreds of crisis situations happen daily requiring skilled human care. What will generative empathetic Ai be trained to do? For instance, will it recognize and properly evaluate the severity of depression, and know when referral to a mental health professional is required? Regulations are needed and they must be written with input from these medical professionals. The lives and mental health of millions are at stake.

Summary of AI Code of Ethics Proposed by Jon Neiditz

Jon suggests nine main ethical principles to regulate empathetic Ai. Each principle in his article is broken down into sub-points that provide additional detail. The goal of these principles is to guide empathetic AI systems, including the manufacturers, users and government regulators, to act in alignment with these principles. Here are the nine proposed principles:

1. Balanced Fiduciary Responsibility: This principle places the AI system as a fiduciary to the user, ensuring that its actions and recommendations prioritize the user’s interests, but are balanced by public and environmental considerations. The AI should avoid manipulation and exploitation, should transparently manage conflicts of interest, and should serve both individual and broader interests. There is a strong body of law on fiduciary responsibilities that should provide good guidance for AI regulation. See: John Nay, Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards (1/23/23). Ralph Losey comment: A fiduciary is required to exercise the highest duties of care, but language in the final code should make clear that the AI’s duty applies to both individuals and all of humanity. Balance is required in all of these principles, but especially in this all important first principle. I know Jon agrees as he states in subsection 1.1:

Empathetic AI systems are designed to serve individual users, responding to their needs, preferences, and emotions. They should prioritize user well-being, privacy, autonomy, and dignity in all their functions. However, AI systems are not isolated entities. They exist in a larger social and environmental context, which they must respect and take into consideration. Therefore, while the immediate concern of the AI should be the individual user, they must also consider and respect broader public and environmental interests. These might include issues such as public health, social cohesion, and environmental sustainability.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

2. Transparency and Accountability: This states that AI systems should operate in an understandable and accountable way. They should clearly communicate their capabilities and limitations, undergo independent audits to check their compliance with ethical standards, and hold developers and operators responsible for their creations’ behaviors. In Jon’s words: “This includes being liable for any harm done due to failures or oversights in the system’s design, implementation or operation, and extends to harm caused by the system’s inability to balance the user’s needs with public and environmental interests.”

3. Privacy and Confidentiality: This principle emphasizes the need to respect and protect user privacy. Empathetic AI systems should minimize data collection, respect user boundaries, obtain informed consent for data collection and use, and ensure data security. This is especially important as empathetic Ai chatbots like Pi become commonplace. Jon correctly notes:

As empathetic AI systems interact deeply with users, they must access and use a great deal of personal and potentially sensitive data. Indeed, large language models focusing on empathy represent a major shift for LLMs in this regard; previously it was possible for Sam Altman and this newsletter to tout the privacy advantages of LLMs over the prior ad-driven surveillance economy of the web. The personal information an empathetic AI will want about you goes far beyond information that helps to get you to click on ads. This third principle emphasizes the need for stringent measures to respect and protect that deeper personal information.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

4. Non-Discrimination: This advocates for fair treatment of all users, regardless of their background. AI systems should treat all users equally, ensure inclusiveness in training data, monitor and mitigate biases continuously, and empower users to report perceived biases or discrimination. Ralph Losey comment: Obviously there is need for some intelligent discrimination here among users, which is a challenging task. The voice of a Hitler-type should not be given equal weight, and should be included in training data with appropriate value judgements and warnings.

5. Autonomy: Emphasizing the need for AI systems to respect users’ freedom to make their own decisions. It discourages over-reliance on AI and discourages undue influence. The AI should provide support, information, and recommendations, but ultimately, decisions lie with the user. It also encourages independent decision-making, and discourages over-reliance on the AI system. Ralph Losey comment: The old saying “trust but verify” always applies in hybrid, human/machine relations, so too does the parallel computer saying, “garbage in, garbage out.”

6. Beneficence and Non-Maleficence: This principle highlights the responsibility of AI systems to act beneficially towards users, society, and the environment, while avoiding causing harm. Beneficence involves promoting wellbeing and good, while non-maleficence involves avoiding harm, both directly and indirectly. Sometimes, there can be trade-offs between beneficence and non-maleficence, in which case, a balance that respects both principles should be sought.

7. Empathy with Compassion: As Jon explains: “This principle focuses on and extends beyond the AI’s understanding and mirroring of a user’s emotions, advocating for a broader concern for others and society as a whole in which empathy and compassion inform each other.” This principle promotes empathetic and compassionate responses from the AI, encourages understanding of the user’s emotions and a broader concern for others. The AI should continuously learn and improve its empathetic and compassionate responses, including ever better understanding of human emotions, empathetic accuracy, and adjusting its responses to better meet user needs and societal expectations.

8. Environmental Consideration: AI systems have a responsibility to operate in an environmentally sensitive manner and to promote sustainability. This includes minimizing their environmental footprint, promoting sustainable practices, educating users about environmental matters, and considering environmental impacts in their decision-making processes.

9. Regulation and Oversight: We need external supervision to ensure empathetic AI systems operate within ethical and legal boundaries. This requires a regulatory framework governing AI systems, with oversight bodies that enforce regulations, conduct audits, and provide guidance. Transparency in AI compliance and accountability for non-compliance is vital. So too is active user participation in the regulation and oversight processes, to promote an inclusive regulatory environment.

Thoughts on Regulation

Regulation should include establishment of some sort of quasi-governmental authority to enforce compliance, conduct regular audits, and provide ongoing guidance to developers and operators. Transparency and accountability should serve as fundamental tenets, allowing for scrutiny of AI systems’ behaviors and holding individuals and organizations accountable for any violations.

In conjunction with institutional regulation, it is equally crucial to encourage active participation from users and affected communities. Their input and experiences are invaluable. By involving stakeholders in the regulatory and oversight processes, we can forge a collective responsibility in shaping the ethical trajectory of Empathetic AI.

Regulation should foster an environment that supports innovation and responsible, ethical practices. It should pave the way for a future where technology and empathy coexist harmoniously, yielding transformative benefits, while safeguarding against emotional exploitation and other dangers. A regulatory framework, founded on the principles Jon has proposed, could provide the necessary checks and balances to protect user interests, mitigate risks, and uphold ethical standards.


I agree with Jon Neiditz and his call to action in Code of Ethics for “Empathetic” Generative AI. The potential of AI systems to comprehend and respond to human emotions requires a rigorous, comprehensive approach to regulation. We should start now to regulate Empathetic Generative AI. I am ready to help Jon and others with this important effort.

The movie HER, except for the ending ascension, which is absurd, provides an all too plausible scenario of what could happen when empathic chatbots are super-intelligent and used by millions. We could be in for a wild ride. Human isolation and alienation are already significant problems of our technology age. It could get much worse when we start to prefer the “perfect other” in AI form to our flawed friends and loved ones. Let’s try to promote real human communities instead of people talking to AI chatbots. AI can join the team as a super tool, but not as a real friend or spouse. See: What is the Difference Between Human Intelligence and Machine Intelligence? and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

In the What is the Difference blog I quoted portions of Sam Altman’s video interview at an event in India by the Economic Times to show his “tool not a creature” insight. There was another Q&A exchange in that same YouTube video, starting at 1:09:05, that elaborates on this in a way that directly addresses this Ai intimacy concern.

Questioner (paraphrased): [I]t’s human to make mistakes. All people we love make mistakes. But an Ai can become error free. It will then have much better conversations with you than the humans you love. So, the AI will eventually replace the imperfect ones you love; the Ai will become the perfect lover.

Sam Altman: Do you want that? (laughter)

Questioner: Yeah.

Sam Altman: (Sam explains AI is a tool not a creature, as I have quoted before, then talks about Ai creativity, which I will discuss in my next blog, then Sam turns to the intimacy, loved ones question.)

If some people want to chat with the perfect companionship bot, and clearly some do, a bot that never upsets you and never does the one thing that irks you, you can have that. I think it will be deeply unfulfilling (shakes head no). That’s sort of a hard thing to feel love for. I think there is something about watching someone screw up and grow, express their imperfections, that is a very deep part of love, as I understand it. Humans care about other humans and care about what other humans do, in a very deep way. So that perfect chatbot lover doesn’t sound so compelling to me.

Sam Altman, June 7, 2023, at an Economic Times event in India

Once again, I agree with Sam. But many naive, lonely people will not. These people will be easy to exploit. They will find out the hard way that true love with a machine is not possible. They will fall for false promises of intimacy, even love. This is something regulators should address.

Again, a balanced approach is needed. Ai can be a tool to help us develop and improve our empathy. If done right, empathetic GPT chats can help us improve our conversations and enhance our empathy with our fellow humans and other living creatures. Empathetic conversations with an Ai could help prepare us for real conversations with our fellow humans, warts and all. It could help us avoid manipulation and the futile chase of marketing’s false promises. This video tries to embody these feelings and the futile quest for emotional connection with Ai.

Video created by Ralph Losey using ChatGPT4 (code interpreter version). Video images the futile quest for emotional connection with Ai. Original background sounds by Ralph Losey.

Ralph Losey Copyright 2023

All Rights Reserved