Surprising Admissions by OpenAI Leaders Made in Recent Interviews

August 8, 2023

OpenAI's chief scientist, Ilya Sutskever, revealed in an interview with fellow scientist Sven Strohband how the emergent intelligence of his neural-net AI was the surprising result of scaling, a drastic increase in the size of the compute and data. This admission of surprise and some bewilderment was echoed in another interview, this one with OpenAI's CEO and President, Sam Altman and Greg Brockman. They said no one really knows how or why this human-like intelligence and creativity suddenly emerged from their GPT after scaling. It is still a somewhat mysterious process, which they are attempting to understand with the help of their GPT-4. That interview was conducted by a former member of OpenAI's Board of Directors, who also prompted them to disclose the company's current attitude towards regulation. Both new interviews are on YouTube and are shared here. The interview of Ilya Sutskever is on video, and the interview of CEO Sam Altman and President Greg Brockman is an audio podcast.

Sam Altman and Greg Brockman, Midjourney image by Ralph

Introduction

In these two July 2023 interviews of OpenAI's leadership, they come close to admitting that they lucked into picking the right model: artificial neural networks on a very large scale. Their impromptu answers to questions from their peers can help you understand how and why their product is changing the world. One interview, a podcast, also provides an interesting glimpse into OpenAI's preferred approach towards Ai regulation. You really should digest these two important YouTubes yourself, and not ask a chatbot to summarize them for you: What AI is Making Possible (YouTube video of Sutskever) and Envisioning Our Future With AI (YouTube audio podcast).

In the 25-minute video interview of Ilya Sutskever, the chief scientist of OpenAI, you will get a good feel for the man himself, how he takes his time to think and picks his words carefully. He comes across as honest and sincere in all of his interviews. My only criticism is his almost scarily serious demeanor. This young man, a former student of, and successor to, the legendary neural-net and deep-learning pioneer Professor Geoffrey Hinton, could well be the Einstein of our day. Time will tell, but see for yourself.

Ralph's Midjourney image of Ilya Sutskever

I did manage to catch a brief moment of an inner smile by Ilya in this screenshot from the video. He provides a strong contrast with the nearly always upbeat executive leadership team, who did a podcast last month with Reid Hoffman, a past member of their Board of Directors.

Image of Ilya Sutskever from the video What AI is Making Possible, with a rare almost-smile

The interview of Ilya Sutskever was driven by excellent questions from Sven Strohband, who called the YouTube video of the interview What AI is Making Possible. Sven is a Stanford PhD computer scientist who is now the Managing Director of Khosla Ventures. His questions are based on extensive knowledge and experience. Moreover, his company is a competitor of OpenAI. Ilya Sutskever's deeply considered answers are somewhat surprising and crystal clear.

Ralph’s “Deep Thinking” Midjourney image of Ilya Sutskever

The second interview is of both the OpenAI CEO, Sam Altman, who has often been quoted here, and the President and co-founder of OpenAI, Greg Brockman. Both are very articulate in this late July 2023 audio interview by Reid Hoffman, former OpenAI Board member, and by Aria Finger. They call this podcast episode Envisioning Our Future With AI. The episode is part of Reid Hoffman's Possible podcast series, also found on Apple Podcasts. Reid was part of the initial group raising funds for OpenAI and, until recently, was on its Board of Directors. As they say, he knows where all the skeletons are buried, and he got them to open up.

Greg Brockman by Ralph using Midjourney

This one hour interview covers all the bases, even asking about their favorite movie (spoiler – both Sam and Greg said HER). The interview is not technical, but it is informative. Since this is an audio-only interview, it is a good one to listen to in the background, although this is made difficult by how similar Sam and Greg’s voices sound.

Ilya Sutskever Video Interview – What AI is Making Possible

I have watched several videos of Ilya Sutskever, and What AI is Making Possible is the best to date. It is short, only twenty-five minutes, but sweet. In all of Ilya's interviews you cannot help but be impressed by the man's sincerity and intellect. He is humble about his discovery and admits he was lucky, but he and his team are the ones who made it happen. They made AI real, and remarkably close to AGI, and they did it with a method that surprised most of the AI establishment and Big Tech: they modeled their computer design on the human brain's neural networks. Most experts thought that approach was a dead end in Ai research and would not go far. Surprise: the expert establishment was wrong, and Ilya and his team were right.

Everyone was surprised, except for Geoffrey Hinton, who pioneered deep-learning neural-net designs. But even he must have been astonished that his former student, Ilya, made the big breakthrough by simple size scaling. Moreover, Ilya did so well before Hinton's competing team at Google. In fact, Hinton was so surprised and alarmed by how fast and far Ilya had gone with Ai that he quit Google right after GPT-4 came out. Then he began warning the world that Ai like GPT-4 needed to be regulated, and fast. 'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead (NYT, 5/5/23). These actions by Professor Hinton constitute an incredible admission. His protégé, Ilya Sutskever, has to be smiling to himself from time to time after that kind of reaction to his unexpected win.

Image of Ilya Sutskever from video What AI is Making Possible

Ilya and his diverse team of scientists and engineers are the ones who made the breakthrough. They are the real heroes here, not the promoters, fundraisers and management. Sam Altman and Greg Brockman's key insight was to hire Ilya Sutskever and give him the space and equipment needed, hundreds of millions of dollars' worth. By listening to Ilya, you get a good sense of how surprised he was to discover that the neural network approach actually worked, that his teachers and his inner voice were right beyond his dreams. His significant engineering breakthrough came by "simply" scaling the size of the neural network's training data and computing power. Bigger was better and led to incredible intelligence. It is hard to believe, and yet here it is. ChatGPT-4 does amazing things. Watch this interview and you will see what this means.

Greg Brockman and Sam Altman Audio Interview – Envisioning Our Future With AI

The hour-long podcast interview of Brockman and Altman, Envisioning Our Future With AI, discusses the same surprising insight about scale, but from the entrepreneurs' perspective. Listen to the Possible podcast from 16:57 to 18:24. By hearing the same thing from Sam, you get a pretty good idea of the key insight of scale. They are not sure why it works; nobody really is, including Ilya Sutskever. But they know it works, and so Sam and Greg went with it, boldly going where no one has gone before.

Greg Brockman by Ralph using Midjourney

The information Sam Altman and Greg Brockman provide, in their consistently upbeat Silicon Valley voices, pertains to their unique insights as the visionary front men. Their discussion of Ai regulation is particularly interesting and starts at 18:34. It hints at many discussions the OpenAI Board has had over the years about Ai regulation, including two opposing views about product launch. Below is an excerpt, slightly edited for reading, of this portion of the podcast, starting at 19:07. (We recommend you listen to the full original podcast by Reid Hoffman.)

Midjourney image of a chatbot by Ralph who says: “Go ahead and try to regulate me. I’ll help.”

Question by Aria Finger. What would you call for in terms of either regulation or global governance for bringing people in?

Answer by Sam Altman. I think there's a lot of anxiety and fear right now . . . I think people feel afraid of the rate of change right now. A lot of the updates that people at OpenAI, who work at OpenAI, have been grappling with for many years, the rest of the world is going through in a few months. And it's very understandable to feel a lot of anxiety in that moment.

We think that moving with great caution is super important, and there’s a big regulatory role there.  I don’t think a pause in the naive sense is likely to help that much. You know, we spent . . . somewhat more than six months aligning GPT-4 and  safety testing it since we finished training. Taking the time on that stuff is important. But really, I think what we need to do is figure out what regulatory approach, what set  of rules, what safety standards, will actually work, in the messy context  of reality. And then figure out how to  get that to be the sort of regulatory posture of the world. (20:32)

Lengthy Talking Question Follow-up by Reid Hoffman (former OpenAI Board member). You know, when people always focus on their fears  a little bit, like Sam, you were saying earlier, they tend to say, “slow down, stop,”  et cetera. And that tends to, I think, make a bunch of mistakes. One mistake is we’re kind of supercharging a bunch of industries and, you know, you want that, you want the benefit of that supercharging industry. I think that another thing is that one of the things we’ve learned with larger scale models, is we get alignment benefits. So  the questions around safety and safety precautions are better in the future, in some very arguable sense, than now. So with care, with voices, with governance, with spending months in safety testing, the ultimate regulatory thing that I’ve been suggesting has been along the lines of being able to remediate the harms from your models. So if something shows up that’s particularly bad, or in close anticipation, you can change it. That’s something I’ve already seen you guys doing in a pre-regulatory framework, but obviously getting that into a more collective regulatory framework, so that preferably everywhere in the world can sign on with that, is the kind of thing that I think is a vision. Do you have anything you guys would add to that, for when people think about what should be the way the people are participating?

Answer by Sam Altman (22:04). You touched on this, but to really echo it, I think what we believe in   very strongly, is that keeping the rate of change in the world relatively constant, rather than, say, go build AGI in secret and then deploy it all at once when you’re done, is much better. This idea that people relatively  gradually have time to get used to this incredible new thing that is going to transform so much of the world, get a feel for it, have time to update. You know, institutions and people do not update very well overnight. They need to be part of its evolution, to provide critical  feedback, to tell us when we’re doing dumb mistakes, to find the areas of great benefit and  potential harm, to make our mistakes and learn our lessons when the stakes are lower than they will  be in the future. Although we still would like   to avoid them as much as we can, of course. And I  don’t just mean we, I mean the field as a whole, sort of understanding, as with any new technology, where the tricky parts are going to be. 

I give Greg a lot of credit for pushing on this, especially when it’s been hard. But I think it is The Way to make a new technology like this safe. It is messy, it is difficult, it means we have to say a lot of times,  “hey, we don’t know the answer,” or, “hey, we were wrong there,” but relative to any alternative, I think this is the best way for society. It is the best way not only to get the safest outcome, but for the voices of all of society to have a chance to shape us all, rather than just the  people that, you know, would work in a secret lab.

Answer by Greg Brockman (23:51). We’ve really grappled with this question over time. Like, when we started OpenAI, really thinking about how to get from where we  were starting, which was kind of nothing in a lot of ways, to a safe AGI that’s deployed, that actually benefits all of humanity. How do you connect those two? How do you actually  get there? I think that the plan that Sam alludes to, of you just build in secret, and then you deploy it one day, there’s  a lot of people who really advocate for it and it has some nice properties. That means that  – I think a lot of people look at it and say, “hey there’s a technical safety problem of making sure the AI can even be steered, and there’s a  society problem. And that second one sounds really hard, but, I know technology, so I’ll just focus on this first one.” And that original  plan has the property that you can do that.  

But that never really sat well with me because I think you need to solve both of these  problems for real, right? How do you even know that your safety process actually worked. You don’t want it to be that you get one shot, to get this thing right. I think that there’s still a lot to learn, we’re still very much in the early days here, but this process that we’ve gone through, over the past four or five years now of starting to deploy this technology and to learn, has taught us so much.

We really weren’t in a position three, four years ago, to patch issues. You know, when there was an issue with GPT-3, we would sort of patch it in the way that GPT-3 was deployed, with filters, with non-model level interventions. Now we’re  starting to mature from that, we’re actually able to do model level interventions. It is definitely the case that GPT-4 itself is really critical in all of our safety pipelines. Being   able to understand what’s coming out of the model in an automated fashion, GPT-4 does an excellent job at this kind of thing. There’s a lot that we are learning and this process of doing iterative deployment has been really critical to that. (25:48)

Excerpt from Envisioning Our Future With AI (slightly edited for clarity), 19:07 to 25:48.

"Possible" podcast interview of Sam Altman and Greg Brockman by Reid Hoffman and Aria Finger.

Conclusion

Scaling the size of the data in the LLM, and scaling the compute, the amount of processing power put into the neural network, is the surprising basis of OpenAI's breakthrough with ChatGPT-4. The scaling increase in size made the Ai work almost as well as the human brain. Size itself somehow led to the magic breakthrough in machine learning, a breakthrough that no one, as yet, quite understands, not even Ilya Sutskever. Bigger was better. The Ai network is still not as large as the human brain's neural net, not even close, but it is much faster and, like us, can learn on its own. It does so in its own way, taking advantage of its speed and iterative processes.
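The interviews stay non-technical, but the published scaling-law research gives a feel for why "bigger was better." Below is a toy sketch, in Python, of the kind of power-law relationship reported by OpenAI researchers in Kaplan et al., Scaling Laws for Neural Language Models (2020): test loss falls smoothly, and somewhat predictably, as model size grows. The constant and exponent are approximately the values that paper reports for parameter count; they come from the literature, not from these interviews, and are used here purely for illustration.

# Toy illustration of an LLM scaling law, in the spirit of Kaplan et al. (2020):
# loss(N) ~ (N_C / N) ** ALPHA_N, i.e., test loss falls as a smooth power law of
# model size N. The constant and exponent are approximate published values, not
# OpenAI's internal numbers, and emergent abilities are not captured by them.

N_C = 8.8e13     # reference scale for non-embedding parameters (approximate)
ALPHA_N = 0.076  # power-law exponent for parameter count (approximate)

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy test loss as a function of parameter count."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> predicted loss ~ {predicted_loss(n):.2f}")

The only point of the sketch is that, on this view, performance improves steadily with size. Why that steady improvement eventually produces the surprising, emergent abilities the OpenAI leaders describe is exactly the part that, by their own admission, no one yet fully understands.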

Human Brain Neurons

Large scale generative Ai now has every indication of intelligent thought and creativity, like a living human brain. Super-intelligence is not here yet, but hearing OpenAI and others talk, it could be coming soon. It may seem like a creature when it comes, but remember it is still just a tool, even though it is a tool with intelligence greater than our own. Don’t worship it, but don’t kill it either – Trust but Verify. It can bring us great good.

Verification requires reasonable regulations. The breakthrough in AI caused by scaling has impacted the attitude of OpenAI executives and others towards current Ai regulation. As these interviews revealed, they want input and feedback from the public, even messy critical input and complaints. This input from hundreds of millions of users provides information needed for revisions to the software. It allows the software to improve itself. The interviews revealed that GPT-4 is already doing that. Think about that.

OpenAI did not want to work in secret and then have super-intelligent, AGI-level software suddenly released, or worse, escape, and stun the world. It would cause the same level of public shock and disruption as flying saucers landing in Washington.

No one wants secret labs in a foreign dictatorship to do that either (except, of course, the actual and would-be despots). The world needs a few years of constant but manageable change to get ready for the Singularity. Humans and our institutions can adapt, but we need some time. People will eventually get used to super-intelligent Ais and adapt to them. The Ai tech companies also need a few years to make course corrections and to regulate without stopping innovation. For more on these competing goals, and ideas on how to balance them, see the newly restated Intro and Mission Statement of AI-Ethics.com and the related information on the AI Ethics website.

Balance is the way, and messy ad hoc processes, much like common law adjudication, seem to be the best method to walk that path, to find the right balance. At least, as this interview of Altman and Brockman revealed, that is the conclusion that OpenAI's management has reached. There may be a bias here, but this process seems like a good approach. This solution also means that the place and role of attorneys will remain important for many years to come. It is a trial-and-error, malleable, practice-based approach to regulation, a method that all litigation attorneys and judges in common law jurisdictions know very well. That is a pleasant surprise.

Ralph Losey Copyright 2023 (does not include the quoted excerpts, nor the YouTube videos and podcast content)


AI Ethics Website Updated

August 7, 2023

Our related website, AI-Ethics.com, was completely updated this weekend. This is the first full rewrite since the site was launched in late 2016. Things have changed significantly in the past nine months and the update was overdue. The Mission Statement, which lays out the purpose of the site, remains essentially the same, but has been clarified and restated, as you will see. Below is the header of the AI Ethics site. Its subtitle is Law, Technology and Social Values. Just FYI, I am trying to transition my legal practice and specialty expertise from e-Discovery to AI Policy.

Below is the first half of the AI Ethics Mission Statement page. Hopefully this will entice you to read the full Mission Statement and check out the entire website. Substantial new research is shared. You will see there is some overlap with the Ai regulatory articles appearing on the e-Discovery Team blog, but there are many additional articles and new information not found here.


Intro/Mission

Our mission is to help mankind navigate the great dilemma of our age, well stated by Stephen Hawking: "The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which." Our goal is to help make it the best thing ever to happen to humanity. We have a threefold plan to help humanity get there: dialogue, principles, education.

Our focus is to facilitate law and technology to work together to create reasonable policies and regulations. This includes the new LLM generative models that surprised the world in late 2022.

This and other images in Ai-Ethics by Ralph Losey using Ai software

Pros and Cons of the Arguments

Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extend life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities, making us all superhero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? See eg. Sam Altman's Favorite Unasked Question: What Will We Do in the Future After AI? People favoring this perspective tend to be opposed to regulation for a variety of reasons, including that it is too early yet to be concerned.

Or – Will AI lead to disasters? Will AI create powerful autonomous weapons that threaten to kill us all? Will it perpetuate human bias and prejudice? Will AI bots impersonate and fool people, secretly move public opinion and even impact the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring other great disruptions to our economy, including widespread unemployment? Will some AI eventually become smarter than we are and develop a will of its own, one that menaces and conflicts with humanity? Are Homo sapiens in danger of becoming biological load files for digital super-intelligence?

Not unexpectedly, this doomsday camp favors strong regulation, including an immediate halt to the development of new generative Ai, which took the world by surprise in late 2022. See: Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society' (NYT, 3/29/23); the Open Letter dated March 22, 2023 of the influential Future of Life Institute calling for a "pause in the development of A.I. systems more powerful than GPT-4. . . . and if such a pause cannot be enacted quickly, governments should step in and institute a moratorium." Also see: The problems with a moratorium on training large AI systems (Brookings Institution, 4/11/23) (noting multiple problems with the proposed moratorium, including possible First Amendment violations). Can research really be stopped entirely, as this side proposes? Can Ai be gagged?

One side thinks that we need government-imposed laws and detailed regulations to protect us from disaster scenarios. The other side thinks that industry self-regulation alone is adequate and that all of the fears are unjustified. At present there are strongly opposing views among experts concerning the future of AI. Let's bring in the mediators to help resolve this critical roadblock to reasonable AI Ethics.

Balanced Middle Path

We believe that a middle way is best, where both dangers and opportunities are balanced, and where government and industry work together, along with help and input from private citizens. We advocate for a global team approach to help maximize the odds of a positive outcome for humanity.

AI-Ethics.com suggests three ways to start this effort:

  1. Foster a mediated dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government, industry groups and the public.
  3. Inspire and educate everyone on the importance of artificial intelligence.

To read the rest, jump to the AI Ethics website Mission page.

Ralph Losey Copyright 2023. All Rights Reserved


White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft

August 1, 2023
Chat Bots say ‘Catch me if you can! I move fast.’

In a landmark move towards the regulation of generative AI technologies, the White House brokered eight “commitments” with industry giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The discussions, held exclusively with these companies, culminated in an agreement on July 21, 2023. Despite the inherent political complexities, all parties concurred on the necessity for ethical oversight in the deployment of their AI products across several broad areas.

Introduction

These commitments, although necessarily ambiguous, represent a significant step toward what may later become binding law. The companies not only acknowledged the appropriateness of future regulation across eight distinct categories, they also pledged to uphold their ongoing self-regulation efforts in these areas. This agreement thus serves as a kind of foundational blueprint for future Ai regulation. Also see prior efforts by the U.S. government that precede this blueprint: the AI Risk Management Framework (NIST, January 2023) and the White House Blueprint for an AI Bill of Rights (October 2022).

The eight “commitments” are outlined in this article with analysis, background and some editorial comments. Here is a PDF version of this article. For a direct look at the agreement, here is a link to the “Commitment” document. For those interested in the broader legislative landscape surrounding AI in the U.S., see my prior article, “Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act” (June 7, 2023). It provides a comprehensive overview of proposed legislation, again with analysis and comments. Also see, Algorithmic Accountability Act of 2022 (requiring self-assessments of AI tools’ risks, intended benefits, privacy practices, and biases) and American Data Privacy and Protection Act (ADPPA) (requiring impact assessments for “large data holders” when using algorithms in a manner that poses a “consequential risk of harm,” a category which certainly includes some types of “high-risk” uses of AI). 

Government determined to catch and pin down wild chat bots.

The document formalizes a voluntary commitment, which is sort of like a non-binding agreement, an agreement to try to reach an agreement. The parties' statement begins by acknowledging the potential and risks of artificial intelligence (AI). Then it affirms that companies developing AI should ensure the safety, security, and trustworthiness of their technologies. These are the three major themes for regulation that the White House and the tech companies could agree upon. The document then outlines eight particular commitments to implement these three fundamental principles.

Just Regulation of Ai Should Be Everyone’s Goal.

The big tech companies affirm they are already taking steps to ensure the safe, secure, and transparent development and use of AI. So these commitments just confirm what they are already doing. Clever wording here and of course, the devil is always in the details, which will have to be ironed out later as the regulatory process continues. The basic idea that the parties were able to agree upon at this stage is that these eight voluntary commitments, as formalized and described in the document, are to remain in effect until such time as enforceable laws and regulations are enacted.

The scope of the eight commitments is specifically limited to generative Ai models that are more powerful than the current industry standards, specified in the document as, or as equivalent to, GPT-4, Claude 2, PaLM 2, Titan, and DALL-E 2 for image generation. Only these models, or models more advanced than these, are intended to be covered by this first voluntary agreement. It is likely that other companies will sign up later and make these same general commitments, if for no other reason than to claim that their generative technologies are now on the same level as those of the first seven companies.

It is good for discussions like this to start off in a friendly manner and reach general principles of agreement on the easy issues – the low-hanging fruit. Everyone wants Ai to be safe, secure, and trustworthy. The commitments lay a foundation for later, much more challenging discussions between industry, government, and the people the government is supposed to represent. Good work by both sides in what must have been very interesting opening talks.

What can we agree upon to start talking about regulation?

Dissent in Big Tech Ranks Already?

It is interesting to see that there is already a split among the seven big tech companies that the White House talked into the commitments: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Four of them, Anthropic, Google, Microsoft, and OpenAI, went on to create an industry group focused on ensuring safe and responsible development of frontier AI models, which they call the Frontier Model Forum (announced July 26, 2023). The other three, Amazon, Inflection, and Meta, did not join the Forum. And you cannot help but wonder about Apple, who apparently was not even invited to the party at the White House, or maybe they were and decided not to attend. Apple should be in these discussions, especially since they are rumored to be well along in preparing an advanced Ai product. Apple is testing an AI chatbot but has no idea what to do with it, (Verge, July 19, 2023).

Inflection AI, Inc., the least known of the group, is a $4 billion private start-up that claims to have the world's best AI hardware setup. Inflection AI, The Year-Old Startup Behind Chatbot Pi, Raises $1.3 Billion, (Forbes, 6/29/23). Inflection is the company behind the empathetic software, Pi, which I previously wrote about in Code of Ethics for "Empathetic" Generative AI, (July 12, 2023). These kinds of personal, be-your-best-friend chatbots present special dangers of misuse, somewhat different from the rest. My article delves into this and endorses Jon Neiditz's proposed Code of Ethics for "Empathetic" Generative AI.

Control Promotion and Exploitation of Robot Love.

The failure of Inflection to join the Frontier Model Forum is concerning. So too is Amazon's recalcitrance, especially considering the number of Alexa ears there are in households worldwide (I have two), not to mention Amazon's knowledge of most everything we buy.

Think Universal, Act Global

The White House Press Release on the commitments says the Biden Administration plans to “continue executive action and pursue bipartisan legislation for responsible innovation and protection.” The plan is to, at the same time, work with international allies to develop a code of conduct for AI development and use worldwide. This is ambitious, but appropriate for the U.S. government to think globally on these issues.

The E.U. is already moving fast in Ai regulation, many say too fast. The E.U. has a history of strong government involvement with big tech regulation, again, some say too strong, especially on the E.U.’s hot button issue, consumer privacy. The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment, (Brookings Institution, 2/16/23). I am inclined towards the views of privacy expert, Jon Neiditz, who explains why generative Ais provide significantly more privacy than the existing systems. How to Create Real Privacy & Data Protection with LLMs, (The Hybrid Intelligencer, 7/28/23) (“… replacing Big Data technologies with LLMs can create attractive, privacy enhancing alternatives to the surveillance with which we have been living.“) Still, privacy in general remains a significant concern for all technologies, including generative Ai.

The free world must also consider the reality of the technically advanced totalitarian states, like China and Russia, and the importance to them of Ai. Artificial Intelligence and Great Power Competition, With Paul Scharre, (Council on Foreign Relations (“CFR”), 3/28/23) (Vladimir Putin said in September 2017: “Artificial intelligence is the future not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” . . . [H]alf of the world’s 1 billion surveillance cameras are in China, and they’re increasingly using AI tools to empower the surveillance network that China’s building); AI Meets World, Part Two, (CFR, June 21, 2023) (good background discussion on Ai regulation issues, although some of the commentary and questions in the audio interview seem a bit biased and naive).

There is a military and power-control race going on. This makes U.S. and other free-world government regulation difficult and demands eyes-wide-open international participation. Many analysts now speak of the need for global agreements along the lines of the nuclear non-proliferation treaties attained in the past. See eg., It is time to negotiate global treaties on artificial intelligence, (Brookings Institution, 3/24/21); OpenAI CEO suggests international agency like UN's nuclear watchdog could oversee AI, (AP, 6/6/23); But see, Panic about overhyped AI risk could lead to the wrong kind of regulation, (Verge, 7/3/23).

Mad Would-Be World Dictators Covet Ai.

Three Classes of Risk Addressed in the Commitments

Safety. Companies are all expected to ensure their AI products are safe before they are introduced to the public. This involves testing AI systems for their safety and capabilities, assessing potential biological, cybersecurity, and societal risks, and making the results of these assessments public. See: Statement on AI Risk, (Center for AI Safety, 5/30/23) (open letter signed by many Ai leaders, including Altman, Kurzweil and even Bill Gates, agreeing to this short statement “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.“). The Center for AI Safety provides this short statement of the kind of societal-scale risks it is worried about:

AI’s application in warfare can be extremely harmful, with machine learning enhancing aerial combat and AI-powered drug discovery tools potentially being used for developing chemical weapons. CAIS is also concerned about other risks, including increased inequality due to AI-related power imbalances, the spread of misinformation, and power-seeking behavior. 

FAQ of Center for AI Safety

These are all very valid concerns. The spread of misinformation has been underway for many years.

The disclosure requirement will be challenging in view of both competitive and intellectual property concerns. There is a related concern that disclosure and open-source code may aid criminal hackers and military espionage. Michael Kan, FBI: Hackers Are Having a Field Day With Open-Source AI Programs (PC Mag., 7/28/23) (Criminals are using AI programs for phishing schemes and to help them create malware, according to a senior FBI official). Foreign militaries, such as those of China and Russia, are known to be focusing on Ai technologies for suppression and attacks.

The commitments document emphasizes the importance of external testing and the need for companies to be transparent about the safety of their AI systems. External testing is a good idea, and hopefully it will be done by an independent group, not just the leaky government. But again, transparency carries the risk of over-exposing secrets, especially given China's well-known constant surveillance and theft of IP.

Testing new advanced Ai products before release to public.

Note the word "license" was not used in the commitments, as that seems to be a hot button for some. See eg. The right way to regulate AI, (Case Text, July 23, 2023) (claims that Sam Altman proposed no one be permitted to work with AI without first obtaining a license). With respect, that is not a fair interpretation of Sam Altman's Senate testimony or OpenAI's position. Altman spoke of "licensing and testing of all Ai models." This means licensing of Ai models to confirm to the public that the models have been tested and approved as safe. In context, and based on Altman's many later explanations in the world tour that followed, it is obvious that Sam Altman, OpenAI's CEO, meant a license to sell a particular product, not a license for a person to work with Ai at all, nor a license to create new products or do research. See eg. the lengthy video interview of Sam Altman given to Bloomberg Technology on June 22, 2023.

Regulatory licensing under discussion so far pertains only to the final products, to certify to all potential users of the new Ai tech that it has been tested and certified as safe, secure, and trustworthy. Also, the license scope would be limited to very advanced new products, which do, almost all agree, present very real risks and dangers. No one wants a new FDA, and certainly no one wants to require individual licenses for someone to use Ai, like a driver's license, but it seems like common sense to have these powerful new technology products tested and approved by some regulatory body before a company releases them. Again, the devil is in the details and this will be a very tough issue.

Keeping Us Safe.

Security. The agreement highlights the duty of companies to prioritize security in their AI systems. This includes safeguarding their models against cyber threats and insider threats. Companies are also encouraged to share best practices and standards to prevent misuse of AI technologies, reduce risks to society, and protect national security. One of the underlying concerns here is how Ai can be used by criminal hackers and enemy states to defeat existing blue-team protective systems. Plus, there is the related threat of commercially driven races to market with Ai products before they are ready. Ai products need adequate red-team testing before release, coupled with ongoing testing after release. The situation is even worse with third-party plug-ins. They often have amateurish software designs and no real security at all. In today's world, cybersecurity must be a priority for everyone. More on this later in the article.

AI Cyber Security.

Trust. Trust is identified as a crucial aspect of AI development. Companies are urged to earn public trust by ensuring transparency in AI-generated content, preventing bias and discrimination, and strengthening privacy protections. The agreement also emphasizes the importance of using AI to address societal challenges, such as cancer and climate change, and managing AI’s risks so that its benefits can be fully realized. As frequently said on the e-Discovery Team blog, “trust but verify.” That is where testing and product licensing come in. For instance, how else would you really know that any confidential information you use with an Ai product is in fact kept confidential as the seller claims? Users are not in a position to verify that. Still, generative Ai is an inherently more privacy protective tech system than existing Big Data surveillance systems. How to Create Real Privacy & Data Protection with LLMs.

Ready to Trust Generative Ai?

Eight Commitments in the Three Classes

First, here is the quick summary of the eight commitments:

  1. Internal and external red-teaming of models,
  2. Sharing information about trust and safety risks,
  3. Investing in cybersecurity,
  4. Incentivizing third-party discovery of vulnerabilities,
  5. Developing mechanisms for users to understand if content is AI-generated,
  6. Publicly reporting model capabilities and limitations,
  7. Prioritizing research on societal risks posed by AI,
  8. Deploying AI systems to address societal challenges.
Preparing Early Plans for Ai Regulation.

Here are the document details of the eight commitments, divided into the three classes of risk. A few e-Discovery Team editorial comments are also included and, for clarity, are shown in (bold parentheses).

Two Safety Commitments

  1. Companies commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns. (This is the basis for President Biden's call for hackers to attend DEFCON 31 to "red team" Ai models in open competitions and expose the errors and vulnerabilities that the experts discover. We will be at DEFCON to cover these events. Vegas Baby! DEFCON 31.) The companies all acknowledge that robust red-teaming is essential for building successful products, ensuring public confidence in AI, and guarding against significant national security threats. (An example of new employment opportunities made possible by Ai.) The companies also commit to advancing ongoing research in AI safety, including the interpretability of AI systems' decision-making processes and increasing the robustness of AI systems against misuse. (Such research is another example of new work creation by Ai.)
  2. Companies commit to work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. (Such information sharing is another example of new work creation by Ai.) They recognize the importance of information sharing, common standards, and best practices for red-teaming and advancing the trust and safety of AI. They commit to establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety. (Another example of new, information sharing work created by Ai. These forums all require dedicated human administrators.)
Everyone Wants Ai to be Safe.

Two Security Commitments

  1. On the security front, companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. The companies treat unreleased AI model weights as core intellectual property, especially with regard to cybersecurity and insider threat risks. This includes limiting access to model weights to those whose job function requires it and establishing a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets. (Again, although companies already invest in these jobs, even more work, more jobs, will be created by these new AI IP-related security challenges, which will, in our view, be substantial. We do not want enemy states to steal these powerful new technologies. The current cybersecurity threats from China, for instance, are already extremely dangerous, and may encourage an attack on Taiwan, a close ally that supplies over 90% of the world's advanced computer chips. Taiwan's dominance of the chip industry makes it more important, (The Economist, 3/16/23); U.S. Hunts Chinese Malware That Could Disrupt American Military Operations, (NYT, 7/29/23)).
  2. Companies also commit to incentivizing third-party discovery and reporting of issues and vulnerabilities, recognizing that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. (Again, this is the ongoing red teaming mentioned above, meant to incentivize researchers, hackers all, to find and report mistakes in Ai code. There have been a host of papers and announcements on Ai vulnerabilities and red team successes lately. See eg.: Zou, Wang, Kolter, Fredrikson, Universal and Transferable Adversarial Attacks on Aligned Language Models, (July 27, 2023); Pierluigi Paganini, FraudGPT, a new malicious generative AI tool appears in the threat landscape, (July 26, 2023) (dangerous tools already on dark web for criminal hacking). Researchers should be paid rewards for this otherwise unpaid work. The current rewards should be increased in size to encourage the often underemployed, economically disadvantaged hackers to do the right thing. Hackers who find errors, succumb to temptation, and use them for criminal activities should be punished. There are always errors in new technology like this. There are also a vast number of additional errors and vulnerabilities created by third-party plugins in the gold rush to Ai profiteering. See eg: Testing a Red Team's Claim of a Successful "Injection Attack" of ChatGPT-4 Using a New ChatGPT Plugin, (May 22, 2023). Many of the mistakes are already well known and some are still not corrected. This looks like inexcusable neglect, and we expect future hard laws to dig into this much more deeply. All companies need to be ethically responsible, and the big Ai companies need to police the small plug-in companies, much like Apple now polices its App Store. We think this area is of critical importance.)
Guard Against Ai “Prison Breaks”

Four Trust Commitments

  1. In terms of trust, companies commit to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. This includes developing strong mechanisms, such as provenance and/or watermarking systems for audio or visual content created by any of their publicly available systems. (This is a tough one, and it will only grow in importance and difficulty as these systems grow more sophisticated. OpenAI experimented with watermarking, but was disappointed with the results and quickly discontinued it. OpenAI Retires AI Classifier Tool Due to Low Accuracy, (Fagen Wasanni Technologies, July 26, 2023). How do we even know if we are actually talking to a person, and not just an Ai posing as a human? Sam Altman has launched a project outside of OpenAI addressing that challenge, among other things: the Worldcoin project. On July 27, 2023, Worldcoin began to verify that an online applicant for membership is in fact human. It does that with in-person eye scans in physical centers around the world. An interesting example of new jobs being created to try to meet the "real or fake" commitment. A toy sketch of the provenance idea appears after this list.)
  2. Companies also commit to publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of the model’s effects on societal risks such as fairness and bias. (Again, more jobs and skilled human workers will be needed to do this.)
  3. Companies prioritize research on societal risks posed by AI systems, including avoidance of harmful bias and discrimination, and protection of privacy. (Again, more work and employment. Some companies might prefer to gloss over and minimize this work because it will slow and negatively impact sales, at least at first. Glad to see these human rights goals in an initial commitment list. We expect the government will set up extensive, detailed regulations in this area. It has a strong political, pro-consumer draw.)
  4. Finally, companies commit to developing and deploying frontier AI systems to help address society's greatest challenges. These challenges include climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. They also commit to supporting initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impact of the technology. (We are big proponents of this and the possible future benefits of Ai. See eg, ChatGPT-4 Prompted To Talk With Itself About "The Singularity", (April 4, 2023), and Sam Altman's Favorite Unasked Question: What Will We Do in the Future After AI?, (July 7, 2023)).
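The provenance and watermarking commitment in the first trust item above is easier to picture with a toy example. Below is a minimal sketch, in Python, of the general idea behind a provenance system: the Ai provider publishes a signed manifest that binds a hash of the generated content to an "AI-generated" claim, which anyone can later verify. This is not OpenAI's or any signatory's actual scheme (real-world efforts center on standards such as C2PA and on watermarks embedded in the content itself); the key, model name, and file bytes here are all placeholders.

# Minimal sketch of a provenance manifest for AI-generated content (illustrative only).
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-ai-provider"  # placeholder secret key

def make_manifest(content: bytes, generator: str) -> dict:
    """Create a signed manifest binding a content hash to an 'AI-generated' claim."""
    claim = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature is genuine."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(content).hexdigest() == manifest["sha256"]
            and hmac.compare_digest(expected, manifest["signature"]))

image_bytes = b"...bytes of a generated image..."  # placeholder content
manifest = make_manifest(image_bytes, generator="example-image-model")
print(verify_manifest(image_bytes, manifest))        # True
print(verify_manifest(b"tampered bytes", manifest))  # False

The hard part, of course, is not the cryptography but the ecosystem: the manifest has to travel with the content, viewers have to check it, and watermarks have to survive editing, which is why the commitment speaks of developing these mechanisms rather than claiming they already work.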
Totally Fake Image of Congressman Lieu (pretty obvious to most, even without watermarks).

Conclusion

The Commitments document emphasizes the need for companies to take responsibility for the safety, security, and trustworthiness of their AI technologies. It outlines eight voluntary commitments to advance the principles. The voluntary agreement highlights the need for ongoing research, transparency, and public engagement in the development and use of AI. The e-Discovery Team blog is already doing its part on the “public engagement” activity, as this is our 38th article in 2023 on generative Ai.

The Commitments document closes by noting the potential of AI to address some of society’s greatest challenges, while also acknowledging the risks and challenges that need to be managed. It is important to do that, to remember we must strike a fair balance between protection and innovation. Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act.

Justice depends on reasoning free from a judge’s personal gain.

The e-Discovery Team blog always tries to do that, in an objective manner, not tied to any one company or software product. Although ChatGPT-4 has so far been our clear favorite, and their software is the one we most frequently use and review, that can change, as other products enter the market and improve. We have no economic incentives or secret gifts tipping the scale of our judgments.

Although some criticize the Commitments as meaningless showmanship, we disagree. From Ralph’s perspective as a senior lawyer, with a lifetime of experience in legal negotiations, it looks like a good start and show of good faith on both sides, government and corporate. We all want to control and prevent Terminator robot dystopias.

Lawyer stands over Terminator robot he just defeated.

Still, it is just a start, far from the end goal. We have a long way to go and naive idealism is inappropriate. We must trust and verify. We must operate in the emerging world with eyes wide open. There are always con men and power-seekers looking to profit from new technologies. Many are motivated by what Putin said about Ai: "Whoever becomes the leader in this sphere will become the ruler of the world."

Trust But Verify!

Many believe AI is, or may soon be, the biggest technological advance of our age, perhaps of all time. Many say it will be bigger than the internet, perhaps equal to the discovery of nuclear energy. Einstein's discovery, combined with Oppenheimer's engineering, created the nuclear weapons that ended WWII, but it also left us with an endangered world living on the brink of total thermonuclear war. Although we are not there yet, Ai creations could eventually take us to the same DEFCON threat level. We need Ai regulation to prevent that.

Governments worldwide must come to understand that using Ai as an all-out, uncontrolled weapon will result in a war game that cannot be won. It is a Mutually Assured Destruction ("MAD") tactic. The global treaties and international agencies on nuclear weapons and arms control, including the military use of viruses, were made possible by the near-universal realization that nuclear war and virus weapons were MAD ideas.

MAD AI War Apocalypse

All governments must be made to understand that everyone will lose an Ai world war, even the first-strike attacker. These treaties, inspection agencies, and the MAD realization have, so far, enabled us to avoid such wars. We must do the same with Ai. Governments must be made to understand the reality of Ai-triggered species-extermination scenarios. Ai must ultimately be regulated, bottled up, on an international basis, just as nuclear weapons and bio-weapons have been.

Ai must be regulated to prevent uncontrollable consequences.



What Lawyers Think About AI, Creativity and Job Security

July 28, 2023

This article continues the Ai creativity series and examines current thinking among lawyers about their work and job security. Most believe their work is too creative to be replaced by machines. The lawyer opinions discussed here are derived from a survey by Wolters Kluwer and Above the Law: Generative AI in the Law: Where Could This All Be Headed? (7/03/2023). It seems that most other professionals, including doctors and top management in businesses, feel the same way. They think they are indispensable Picassos, too cool for school.

All images and video created by Ralph Losey

The evidence discussed on this blog in the last few articles suggests they are wrong. It might just be vainglory on their part. See Creativity and How Anyone Can Adjust ChatGPT's Creativity Settings To Limit Its Mistakes and Hallucinations; Creativity Test of GPT's Story Telling Ability Based on an Image Alone; and especially ChatGPT-4 Scores in the Top One Percent of Standard Creativity Tests. Some of the highest-paid, most secure attorneys today are very creative, but so too are the new generative Ais. Some of the latest Ais are very personable too, dangerously so. Code of Ethics for "Empathetic" Generative AI.

Introduction to the Lawyer Survey

The well-prepared Above the Law and Wolters Kluwer report of July 3, 2023, indicates that two-thirds of the lawyers questioned do not think ChatGPT-4 is capable of creative legal analysis and writing. For that reason, they cling to the belief that they are safe from Ai and can ignore it. They think their creativity and legal imagination make them special, irreplaceable. The survey shows they believe that only the grunt workers of the law, the document and contract reviewers and the like, will be displaced.

I used to think that too. A self-serving vanity perhaps? But, I must now accept the evidence. Even if your legal work does involve considerable creative thinking and legal imagination, it is not for that reason alone secure from AI replacement. There may be many other reasons that your current job is secure, or that you only have to tweak your work a little to make it secure. But, for most of us, it looks like we will have to change our ways and modify our roles, at least somewhat. We will have to take on new legal challenges that emerge from Ai. The best job security comes from continuous active learning.

With some study we can learn to work with Ai to become even more creative, productive and economically secure.

Recent “Above The Law” – Wolters Kluwer Survey

Surprisingly, I agree with most of the responses reported in the survey described in Generative AI in the Law: Where Could This All Be Headed? I will not go over these, and instead just recommend you read this interesting free report (registration required). My article will only address the one opinion that I am very skeptical about, namely whether or not high-level, creative legal work is likely to be transformed by AI in the next few years. A strong majority said no, that jobs based on creative legal analysis are safe.

Most of the respondents to the survey did not think that AI is even close to taking over high-level legal work, the experienced partner work that requires a good amount of imagination and creativity. Over two-thirds of those questioned considered such skilled legal work to be beyond a chatbot’s abilities.

At page six of the report, after concluding that all non-creative legal work was at risk, the survey considered “high-level legal work.” A minority of respondents, only 31%, thought that AI would transform complex matters, like “negotiating mergers or developing litigation strategy.” Almost everyone thought AI lacked “legal imagination,” especially litigators, who “were the least likely to agree that generative AI will someday perform high-level work.” This is the apparent reasoning behind the conclusions as to whose jobs are at risk. As the ATL Wolters report observed:

The question is: Can an AI review a series of appellate opinions that dance around a subject but never reach it head on? Can the AI synthesize a legal theory from those adjacent points of law? In other words, does it have legal imagination? . . .

One survey respondent — a litigation partner — had a similar take: “AI may be increasingly sophisticated at calculation, but it is not replacing the human brain’s capacity for making connections that haven’t been made before or engaging in counterfactual analysis. . ..

The jobs of law firm partners are safest, according to respondents. After all, they’re the least likely group to consider themselves as possibly redundant. Corporate work is the area most likely to be affected by generative AI, according to almost half of respondents. Few respondents believe that AI will have a significant impact on practices involving healthcare, criminal law or investigations, environmental law, or energy law.

Generative AI in the Law: Where Could This All Be Headed? at pgs. 6,7.

Analysis

After having studied and used ChatGPT for hundreds of hours now, and after having been a partner in one law firm or another for what seems like hundreds of years, I reluctantly conclude that my fellow lawyers are mistaken on the creativity issue. Their response to this prompt appears to be a delusional hallucination, rather than insightful vision.

As Sam Altman has observed, and I agree, it is an inherent tendency of the creative process to make mistakes and make stuff up, to hallucinate without even knowing it. See Creativity and How Anyone Can Adjust ChatGPT's Creativity Settings To Limit Its Mistakes and Hallucinations (includes Sam Altman's understanding of human "creativity" and how Ai creativity is somewhat similar); Creativity Test of GPT's Story Telling Ability Based on an Image Alone (you be the judge, but ChatGPT's stories seem just as good as those of most trial lawyers); and ChatGPT-4 Scores in the Top One Percent of Standard Creativity Tests (how many senior partners would score that high?). Also see What is the Difference Between Human Intelligence and Machine Intelligence? (not much difference, and Ai is getting smarter fast).

The assumed safety of the higher echelons of the law shown in the survey is a common belief. But, like many common beliefs of the past, such as the sun and planets revolving around the Earth, the opinion may just be a vain delusion, a hallucination. It is based on the belief that humans in general, and these attorneys in particular, have unique and superior creativity. Yet careful study shows that creativity is not a unique human skill at all. Ai seems very capable of creativity in all areas. That was shown by standardized TTCT creativity testing scores in a report released the same day as the ATL Wolters survey. ChatGPT-4 scored in the top 1% of standardized creativity testing.

ChatGPT-4 is Number One!

Also, consider how human creative skills are not as easy to control as generative Ai creativity. As previously shown here, GPT-4's creativity can be precisely controlled by skilled manipulation of the Temperature and Top_P parameters. Creativity and How Anyone Can Adjust ChatGPT's Creativity Settings. How many law firm partners can precisely lower and raise their creative imagination like that? (Having drinks does not count!) Imagine what a GPT-5 level tool will be able to do in a few years (or months). The creativity skills of Ai may soon be superior to our own.
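For readers who want to see what adjusting these settings actually looks like, here is a minimal sketch using the openai Python package as it existed in mid-2023 (the pre-1.0 ChatCompletion interface). The model name, prompts, and parameter values are placeholders chosen for illustration; the earlier Creativity Settings article covers the parameters in more depth.

# Minimal sketch: steering GPT-4's creativity with the temperature and top_p
# parameters. Assumes the openai Python package (pre-1.0, mid-2023 interface)
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask(prompt: str, temperature: float = 1.0, top_p: float = 1.0) -> str:
    """Send one prompt; lower values give more conservative, less inventive output."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0 is near-deterministic; 2 is maximally varied
        top_p=top_p,              # nucleus sampling cutoff on cumulative probability
    )
    return response["choices"][0]["message"]["content"]

# A cautious, low-hallucination setting versus a deliberately imaginative one.
print(ask("Summarize the holding of Marbury v. Madison in two sentences.",
          temperature=0.2, top_p=0.5))
print(ask("Write a whimsical opening line for a legal thriller.",
          temperature=1.2, top_p=1.0))

OpenAI's own documentation suggests adjusting temperature or top_p, but usually not both at once; the point here is simply that the "creativity dial" is an explicit, repeatable setting, something no law firm partner can claim.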

Conclusion

The ATL and Wolters Kluwer survey not only reveals an opinion (more like a hope) that creative legal work is safe, it shows most lawyers believe that legal work with little creativity will soon be replaced by Ai. That includes the unfairly maligned and often unappreciated document review attorneys. It also includes many other attorneys who review and prepare contracts. They may well be the first lawyers to face Ai layoffs.

Future Ai Driven Layoffs May Hit Younger Employees First

Free training and economic aid should be provided for these attorneys and others. McKinsey Predicts Generative AI Will Create More Employment and Add 4.4 Trillion Dollars to the Economy (recommending economic aid and training). Although the government should help with this aid, it should primarily come from private coffers, especially from the companies and law firms that have profited so handsomely from their grunt work. They should contribute financial aid and free training.

EDRM provides relevant free training and you should hook up with EDRM today. Also, remember the free online training programs in e-discovery and Ai-enhanced document review started on the e-Discovery Team blog years ago. They are still alive and well, and still free, although they are based on predictive coding and not the latest generative Ai released in November 2022.

  • e-Discovery Team Training. Eighty-five online law school proven classes. Started at UF in 2010. Covers the basics of e-discovery law, technology and ethics.
  • TAR Course. Eighteen online classes providing advanced training on Technology Assisted Review. Started in 2017, this course is updated and shown as a tab on the upper right corner of the e-Discovery Team blog. Below is a short YouTube that describes the TAR Course. The latest generative Ai was used by Ralph to create it.

The e-Discovery Team blog also provides the largest collection of articles on artificial intelligence from a practicing tech-lawyer’s perspective. So far in 2023, thirty-seven articles on artificial intelligence have been written, illustrated and published. It is now the primary focus of Ralph Losey’s research, writing and educational efforts. Hopefully many others will follow the lead of EDRM and the e-Discovery Team blog and provide free legal training in next generation, legal Ai based skills. Everyone agrees this trend will accelerate.

Get ready for tomorrow. Start training today, not only with the mentioned courses, but by playing with ChatGPT. It's free, most versions, and it's everywhere. For instance, there is a ChatGPT bot on the e-Discovery Team website (bottom right). Ask it some questions about the content of this blog, or about anything. Better yet, go sign up for a free account with OpenAI. They recently dropped all charges for the no-frills version. Try to learn all that you can about Ai. ChatGPT can tutor you.

There is a bright future awaiting all legal professionals who can learn, adapt and change. We humans are very good at that, as we have shown time and again throughout history. We will evolve, and not only survive, we will prosper as never before. Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

This positive vision for the future of law, for the future of all humanity, is suggested by the video below. It illustrates a bright future of human lawyers and their Ai bots, who, despite appearances, are tools, not creatures. They are happily working together. The video was created using the Ai tools GPT-4 and Midjourney. The creativity of these tools both shaped and helped express the idea. In other words, the creative capacities of the Ai guided and improved the human creative process. It was a synergistic team effort. This same hybrid team approach also works with legal creativity, indeed with all creativity. We have seen this many times before as our technology advances exponentially. The main difference is that the Ai tools are much more powerful and the change greater than anything seen before. That is why the lawyers shown here are happy working with the bots, rather than in competition with them.

Click on the photo to see the video, all by Ralph Losey using ChatGPT and Midjourney

Copyright Ralph Losey 2023 ALL RIGHTS RESERVED

