Our related website, AI-Ethics.com, was completely updated this weekend. This is the first full rewrite since the site was launched in late 2016. Things have changed significantly in the past nine months and the update was overdue. The Mission Statement, which lays out the purpose of the site, remains essentially the same, but has been clarified and restated, as you will see. Below is the header of the AI-Ethics site. Its subtitle is Law, Technology and Social Values. Just FYI, I am trying to transition my legal practice and specialty expertise from e-Discovery to AI Policy.
Below is the first half of the AI Ethics Mission Statement page. Hopefully this will entice you to read the full Mission Statement and check out the entire website. Substantial new research is shared. You will see there is some overlap with the AI regulatory articles appearing on the e-Discovery Team blog, but the website contains many additional articles and new information not found here.
Our mission is to help mankind navigate the great dilemma of our age, well stated by Stephen Hawking: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Our goal is to help make it the best thing ever to happen to humanity. We have a three-fold plan to help humanity get there: dialogue, principles, education.
Our focus is on helping law and technology work together to create reasonable policies and regulations. This includes the new generative LLM models that surprised the world in late 2022.
Pros and Cons of the Arguments
Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extend life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities, making us all super-hero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? See, e.g., Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI? People favoring this perspective tend to oppose regulation for a variety of reasons, including that it is too early yet to be concerned.
Or – Will AI lead to disasters? Will AI create powerful autonomous weapons that threaten to kill us all? Will it continue human bias and prejudices? Will AI bots impersonate and fool people, secretly move public opinion and even impact the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring other great disruptions to our economy, including widespread unemployment? Will some AI eventually become smarter than we are, and develop a will of its own, one that menaces and conflicts with humanity? Are Homo sapiens in danger of becoming biological load files for digital super-intelligence?
Not unexpectedly, this doomsday camp favors strong regulation, including an immediate stop to the development of new generative AI, which took the world by surprise in late 2022. See: Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (NYT, 3/29/23); the Open Letter dated March 22, 2023 of the influential Future of Life Institute calling for a “pause in the development of A.I. systems more powerful than GPT-4. . . . and if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” Also see: The problems with a moratorium on training large AI systems (Brookings Institution, 4/11/23) (noting multiple problems with the proposed moratorium, including possible First Amendment violations). Can research really be stopped entirely, as this side proposes? Can AI be gagged?
One side thinks that we need government-imposed laws and detailed regulations to protect us from disaster scenarios. The other side thinks that industry self-regulation alone is adequate and all of the fears are unjustified. At present, experts hold strongly opposing views concerning the future of AI. Let’s bring in the mediators to help resolve this critical roadblock to reasonable AI Ethics.
Balanced Middle Path
We believe that a middle way is best, where both dangers and opportunities are balanced, and where government and industry work together, along with help and input from private citizens. We advocate for a global team approach to help maximize the odds of a positive outcome for humanity.
AI-Ethics.com suggests three ways to start this effort:
In a landmark move towards the regulation of generative AI technologies, the White House brokered eight “commitments” with industry giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The discussions, held exclusively with these companies, culminated in an agreement on July 21, 2023. Despite the inherent political complexities, all parties concurred on the necessity for ethical oversight in the deployment of their AI products across several broad areas.
These commitments, although necessarily ambiguous, represent a significant step toward what may later become binding law. The companies not only acknowledged the appropriateness of future regulation across eight distinct categories, they also pledged to uphold their ongoing self-regulation efforts in these areas. This agreement thus serves as a kind of foundational blueprint for future AI regulation. Also see prior efforts by the U.S. government that precede this blueprint: the AI Risk Management Framework (NIST, January 2023) and the White House Blueprint for an AI Bill of Rights (October 2022).
The document formalizes a voluntary commitment, which is sort of like a non-binding agreement, an agreement to try to reach an agreement. The parties’ statement begins by acknowledging the potential and risks of artificial intelligence (AI). It then affirms that companies developing AI should ensure the safety, security, and trustworthiness of their technologies. These are the three major themes for regulation that the White House and the tech companies could agree upon. The document then outlines eight particular commitments to implement these three fundamental principles.
The big tech companies affirm they are already taking steps to ensure the safe, secure, and transparent development and use of AI. So these commitments just confirm what they are already doing. Clever wording here and of course, the devil is always in the details, which will have to be ironed out later as the regulatory process continues. The basic idea that the parties were able to agree upon at this stage is that these eight voluntary commitments, as formalized and described in the document, are to remain in effect until such time as enforceable laws and regulations are enacted.
The scope of the eight commitments is specifically limited to generative AI models that are more powerful than the current industry standards, specified in the document as: GPT-4, Claude 2, PaLM 2, Titan, and, for image generation, DALL-E 2. Only these models, or models more advanced than these, are intended to be covered by this first voluntary agreement. It is likely that other companies will sign up later and make these same general commitments, if nothing else, to claim that their generative technologies are now at the same level as those of these first seven companies.
It is good for discussions like this to start off in a friendly manner and to reach general principles of agreement on the easy issues – the low-hanging fruit. Everyone wants AI to be safe, secure, and trustworthy. The commitments lay a foundation for later, much more challenging discussions between industry, government, and the people the government is supposed to represent. Good work by both sides in what must have been very interesting opening talks.
Dissent in Big Tech Ranks Already?
It is interesting to see that there is already a split among the seven big tech companies whom the White House talked into the commitments: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Five of them went on to create an industry group focused on ensuring safe and responsible development of frontier AI models, which they call the Frontier Model Forum (announced July 26, 2023). Two did not join the Forum: Amazon and Inflection. And you cannot help but wonder about Apple, who apparently was not even invited to the party at the White House, or maybe they were, and decided not to attend. Apple should be in these discussions, especially since they are rumored to be well along in preparing an advanced AI product. Apple is testing an AI chatbot but has no idea what to do with it (The Verge, July 19, 2023).
The failure of Inflection to join the Frontier Model Forum is concerning. So too is Amazon’s recalcitrance, especially considering the number of Alexa ears there are in households worldwide (I have two), not to mention Amazon’s knowledge of most everything we buy.
Think Universal, Act Global
The White House Press Release on the commitments says the Biden Administration plans to “continue executive action and pursue bipartisan legislation for responsible innovation and protection.” The plan is to, at the same time, work with international allies to develop a code of conduct for AI development and use worldwide. This is ambitious, but it is appropriate for the U.S. government to think globally on these issues.
The E.U. is already moving fast on AI regulation, many say too fast. The E.U. has a history of strong government involvement in big tech regulation, again, some say too strong, especially on the E.U.’s hot-button issue, consumer privacy. The EU and U.S. Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment (Brookings Institution, 2/16/23). I am inclined towards the views of privacy expert Jon Neiditz, who explains why generative AIs provide significantly more privacy than the existing systems. How to Create Real Privacy & Data Protection with LLMs (The Hybrid Intelligencer, 7/28/23) (“… replacing Big Data technologies with LLMs can create attractive, privacy enhancing alternatives to the surveillance with which we have been living.“) Still, privacy in general remains a significant concern for all technologies, including generative AI.
The free world must also consider the reality of the technically advanced totalitarian states, like China and Russia, and the importance of AI to them. Artificial Intelligence and Great Power Competition, With Paul Scharre (Council on Foreign Relations (“CFR”), 3/28/23) (Vladimir Putin said in September 2017: “Artificial intelligence is the future not only for Russia, but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.” . . . [H]alf of the world’s 1 billion surveillance cameras are in China, and they’re increasingly using AI tools to empower the surveillance network that China’s building); AI Meets World, Part Two (CFR, June 21, 2023) (good background discussion on AI regulation issues, although some of the commentary and questions in the audio interview seem a bit biased and naive).
Three Classes of Risk Addressed in the Commitments
Safety. Companies are all expected to ensure their AI products are safe before they are introduced to the public. This involves testing AI systems for their safety and capabilities, assessing potential biological, cybersecurity, and societal risks, and making the results of these assessments public. See: Statement on AI Risk, (Center for AI Safety, 5/30/23) (open letter signed by many Ai leaders, including Altman, Kurzweil and even Bill Gates, agreeing to this short statement “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.“). The Center for AI Safety provides this short statement of the kind of societal-scale risks it is worried about:
AI’s application in warfare can be extremely harmful, with machine learning enhancing aerial combat and AI-powered drug discovery tools potentially being used for developing chemical weapons. CAIS is also concerned about other risks, including increased inequality due to AI-related power imbalances, the spread of misinformation, and power-seeking behavior.
FAQ of Center for AI Safety
These are all very valid concerns. The spread of misinformation has been underway for many years.
The disclosure requirement will be challenging in view of both competitive and intellectual property concerns. There are related criminal hacking and military concerns: disclosure and open-source code may help criminal hackers and military espionage. Michael Kan, FBI: Hackers Are Having a Field Day With Open-Source AI Programs (PC Mag., 7/28/23) (criminals are using AI programs for phishing schemes and to help them create malware, according to a senior FBI official). Foreign militaries, such as those of China and Russia, are known to be focusing on AI technologies for suppression and attacks.
The commitments document emphasizes the importance of external testing and the need for companies to be transparent about the safety of their AI systems. The external testing is a good idea, and hopefully it will be done by an independent group, not just the leaky government. But again, there is the transparency concern of over-exposing secrets, given China’s well-known constant surveillance and theft of IP.
Note the word “license” was not used in the commitments, as that seems to be a hot button for some. See, e.g., The right way to regulate AI (Case Text, July 23, 2023) (claims that Sam Altman proposed no one be permitted to work with AI without first obtaining a license). With respect, that is not a fair interpretation of Sam Altman’s Senate testimony or OpenAI’s position. Altman spoke of “licensing and testing of all AI models.” This means licensing of AI models to confirm to the public that the models have been tested and approved as safe. In context, and based on Altman’s many later explanations in the world tour that followed, it is obvious that Sam Altman, OpenAI’s CEO, meant a license to sell a particular product, not a license for a person to work with AI at all, nor a license to create new products or do research. See, e.g., the lengthy video interview Sam Altman gave to Bloomberg Technology on June 22, 2023.
Regulatory licensing under discussion so far pertains only to the final products, to certify to all potential users of the new AI tech that it has been tested and certified as safe, secure, and trustworthy. Also, the license scope would be limited to very advanced new products, which, almost all agree, do present very real risks and dangers. No one wants a new FDA, and certainly no one wants to require individual licenses for someone to use AI, like a driver’s license, but it seems like common sense to have these powerful new technology products tested and approved by some regulatory body before a company releases them. Again, the devil is in the details and this will be a very tough issue.
Security. The agreement highlights the duty of companies to prioritize security in their AI systems. This includes safeguarding their models against cyber threats and insider threats. Companies are also encouraged to share best practices and standards to prevent misuse of AI technologies, reduce risks to society, and protect national security. One of the underlying concerns here is how AI can be used by criminal hackers and enemy states to defeat existing blue-team protective systems. Plus, there is the related threat of commercially driven races to rush AI products to market before they are ready. AI products need adequate red-team testing before release, coupled with ongoing testing after release. The situation is even worse with third-party plug-ins. They often have amateurish software designs and no real security at all. In today’s world, cybersecurity must be a priority for everyone. More on this later in the article.
Trust. Trust is identified as a crucial aspect of AI development. Companies are urged to earn public trust by ensuring transparency in AI-generated content, preventing bias and discrimination, and strengthening privacy protections. The agreement also emphasizes the importance of using AI to address societal challenges, such as cancer and climate change, and managing AI’s risks so that its benefits can be fully realized. As frequently said on the e-Discovery Team blog, “trust but verify.” That is where testing and product licensing come in. For instance, how else would you really know that any confidential information you use with an AI product is in fact kept confidential, as the seller claims? Users are not in a position to verify that. Still, generative AI is an inherently more privacy-protective tech system than existing Big Data surveillance systems. How to Create Real Privacy & Data Protection with LLMs.
Eight Commitments in the Three Classes
First, here is the quick summary of the eight commitments:
Internal and external red-teaming of models,
Sharing information about trust and safety risks,
Investing in cybersecurity,
Incentivizing third-party discovery of vulnerabilities,
Developing mechanisms for users to understand if content is AI-generated,
Publicly reporting model capabilities and limitations,
Prioritizing research on societal risks posed by AI,
Deploying AI systems to address societal challenges.
Here are the document details of the eight commitments, divided into the three classes of risk. A few e-Discovery Team editorial comments are also included and, for clarity, are shown in (bold parentheses).
Two Safety Commitments
Companies commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns. (This is the basis for President Biden’s call for hackers to attend DEFCON 31 to “red team” and expose errors and vulnerabilities that experts in AI discover in open competitions. We will be at DEFCON to cover these events. Vegas Baby! DEFCON 31.) The companies all acknowledge that robust red-teaming is essential for building successful products, ensuring public confidence in AI, and guarding against significant national security threats. (An example of new employment opportunities made possible by AI.) The companies also commit to advancing ongoing research in AI safety, including the interpretability of AI systems’ decision-making processes and increasing the robustness of AI systems against misuse. (Such research is another example of new work creation by AI.)
Companies commit to work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. (Such information sharing is another example of new work creation by AI.) They recognize the importance of information sharing, common standards, and best practices for red-teaming and advancing the trust and safety of AI. They commit to establish or join a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices for frontier AI safety. (Another example of new, information-sharing work created by AI. These forums all require dedicated human administrators.)
Two Security Commitments
On the security front, companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. The companies treat unreleased AI model weights as core intellectual property, especially with regard to cybersecurity and insider threat risks. This includes limiting access to model weights to those whose job function requires it and establishing a robust insider threat detection program consistent with protections provided for their most valuable intellectual property and trade secrets. (Again, although companies already invest in these jobs, even more work, more jobs, will be created by these new AI IP-related security challenges, which will, in our view, be substantial. We do not want enemy states to steal these powerful new technologies. The current cybersecurity threats from China, for instance, are already extremely dangerous, and may encourage an attack on Taiwan, a close ally who supplies over 90% of the world’s advanced computer chips. Taiwan’s dominance of the chip industry makes it more important, (The Economist, 3/16/23); U.S. Hunts Chinese Malware That Could Disrupt American Military Operations, (NYT, 7/29/23)).
Companies also commit to incentivizing third-party discovery and reporting of issues and vulnerabilities, recognizing that AI systems may continue to have weaknesses and vulnerabilities even after robust red-teaming. (Again, this is the ongoing red-teaming mentioned above, meant to incentivize researchers, hackers all, to find and report mistakes in AI code. There have been a host of papers and announcements on AI vulnerabilities and red-team successes lately. See, e.g.: Zou, Wang, Kolter, Fredrikson, Universal and Transferable Adversarial Attacks on Aligned Language Models, (July 27, 2023); Pierluigi Paganini, FraudGPT, a new malicious generative AI tool appears in the threat landscape, (July 26, 2023) (dangerous tools already on dark web for criminal hacking). Researchers should be paid rewards for this otherwise unpaid work. The current rewards should be increased in size to encourage the often not fully employed, economically disadvantaged hackers to do the right thing. Hackers who find errors and succumb to temptation and use them for criminal activities should be punished. There are always errors in new technology like this. There are also a vast number of additional errors and vulnerabilities created by third-party plugins in the gold rush to AI profiteering. See, e.g.: Testing a Red Team’s Claim of a Successful “Injection Attack” of ChatGPT-4 Using a New ChatGPT Plugin, (May 22, 2023). Many of the mistakes are already well known and some are still not corrected. This looks like inexcusable neglect and we expect future hard laws to dig into this much more deeply. All companies need to be ethically responsible and the big AI companies need to police the small plug-in companies, much like Apple now polices its App Store. We think this area is of critical importance.)
Four Trust Commitments
In terms of trust, companies commit to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated. This includes developing strong mechanisms, such as provenance and/or watermarking systems for audio or visual content created by any of their publicly available systems. (This is a tough one, and it will only grow in importance and difficulty as these systems grow more sophisticated. OpenAI experimented with watermarking, but was disappointed with the results and quickly discontinued it. OpenAI Retires AI Classifier Tool Due to Low Accuracy, (Fagen Wasanni Technologies, July 26, 2023). How do we even know if we are actually talking to a person, and not just an AI posing as a human? Sam Altman has launched a project outside of OpenAI addressing that challenge, among other things: the Worldcoin project. On July 27, 2023, it began to verify that an online applicant for Worldcoin membership is in fact human. It does that with in-person eye scans at physical centers around the world. An interesting example of new jobs being created to try to meet the ‘real or fake’ commitment.)
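To make the provenance idea concrete, here is a minimal, hypothetical sketch, not any of these companies' actual schemes: a generator records a cryptographic hash of each AI-generated artifact in a registry, and anyone can later check a piece of content against that registry. The class and model names are invented for illustration.

```python
import hashlib
import time

class ProvenanceRegistry:
    """Hypothetical registry: a generator records a hash of each
    AI-generated artifact; anyone can later check whether a given
    piece of content matches a recorded entry."""

    def __init__(self):
        self._entries = {}

    def record(self, content: bytes, model: str) -> str:
        # SHA-256 gives a fixed-length fingerprint of the exact bytes.
        digest = hashlib.sha256(content).hexdigest()
        self._entries[digest] = {"model": model, "recorded_at": time.time()}
        return digest

    def check(self, content: bytes):
        """Return provenance metadata if the content was registered, else None."""
        return self._entries.get(hashlib.sha256(content).hexdigest())

registry = ProvenanceRegistry()
registry.record(b"an AI-generated image, serialized", model="hypothetical-model-v1")
assert registry.check(b"an AI-generated image, serialized") is not None
assert registry.check(b"a human-made image") is None
```

The obvious weakness, and a reason the commitment is hard to meet, is that any edit to the content changes the hash and breaks the match. Robust watermarking aims to survive such edits, which is exactly where early classifier and watermark attempts have fallen short.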
Companies also commit to publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of the model’s effects on societal risks such as fairness and bias. (Again, more jobs and skilled human workers will be needed to do this.)
Companies prioritize research on societal risks posed by AI systems, including avoidance of harmful bias and discrimination, and protection of privacy. (Again, more work and employment. Some companies might prefer to gloss over and minimize this work because it will slow and negatively impact sales, at least at first. Glad to see these human rights goals in an initial commitment list. We expect the government will set up extensive, detailed regulations in this area. It has a strong political, pro-consumer draw.)
Finally, companies commit to developing and deploying frontier AI systems to help address society’s greatest challenges. These challenges include climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats. They also commit to supporting initiatives that foster the education and training of students and workers to prosper from the benefits of AI, and to helping citizens understand the nature, capabilities, limitations, and impact of the technology. (We are big proponents of this and the possible future benefits of AI. See, e.g., ChatGPT-4 Prompted To Talk With Itself About “The Singularity”, (April 4, 2023), and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?, (July 7, 2023)).
The Commitments document emphasizes the need for companies to take responsibility for the safety, security, and trustworthiness of their AI technologies. It outlines eight voluntary commitments to advance these principles. The voluntary agreement highlights the need for ongoing research, transparency, and public engagement in the development and use of AI. The e-Discovery Team blog is already doing its part on the “public engagement” activity, as this is our 38th article in 2023 on generative AI.
The e-Discovery Team blog always tries to do that, in an objective manner, not tied to any one company or software product. Although ChatGPT-4 has so far been our clear favorite, and their software is the one we most frequently use and review, that can change, as other products enter the market and improve. We have no economic incentives or secret gifts tipping the scale of our judgments.
Although some criticize the Commitments as meaningless showmanship, we disagree. From Ralph’s perspective as a senior lawyer, with a lifetime of experience in legal negotiations, it looks like a good start and show of good faith on both sides, government and corporate. We all want to control and prevent Terminator robot dystopias.
Still, it is just a start, far from the end goal. We have a long way to go and naive idealism is inappropriate. We must trust and verify. We must operate in the emerging world with eyes wide open. There are always con men and power-seekers looking to profit from new technologies. Many are motivated by what Putin said about AI: “Whoever becomes the leader in this sphere will become the ruler of the world.”
Many believe AI is, or may soon be, the biggest technological advance of our age, perhaps of all time. Many say it will be bigger than the internet, perhaps equal to the discovery of nuclear energy. Just as Einstein’s discovery, with Oppenheimer’s engineering, resulted in the creation of nuclear weapons that ended WWII, those discoveries also left us with an endangered world living on the brink of total thermonuclear war. Although we are not there yet, AI creations could eventually take us to the same DEFCON threat level. We need AI regulation to prevent that.
Governments worldwide must come to understand that using AI as an all-out, uncontrolled weapon would result in a war game that cannot be won. It is a Mutually Assured Destruction (“MAD”) tactic. The global treaties and international agencies on nuclear weapons and arms control, including the military use of viruses, were made possible by the near universal realization that nuclear war and virus weapons were MAD ideas.
All governments must be made to understand that everyone will lose an AI world war, even the first-strike attacker. These treaties, inspection agencies, and the MAD realization have, so far, enabled us to avoid such wars. We must do the same with AI. Governments must be made to understand the reality of AI-triggered species-extermination scenarios. AI must ultimately be regulated, bottled up, on an international basis, just as nuclear weapons and bio-weapons have been.
This article continues the AI creativity series and examines current thinking among lawyers about their work and job security. Most believe their work is too creative to be replaced by machines. The lawyer opinions discussed here are derived from a survey by Wolters Kluwer and Above the Law: Generative AI in the Law: Where Could This All Be Headed? (7/03/2023). It seems that most other professionals, including doctors and top management in businesses, feel the same way. They think they are indispensable Picassos, too cool for school.
The well-prepared Above the Law / Wolters Kluwer report of July 3, 2023, indicates that two-thirds of lawyers questioned do not think ChatGPT-4 is capable of creative legal analysis and writing. For that reason, they cling to the belief that they are safe from AI and can ignore it. They think their creativity and legal imagination make them special, irreplaceable. The survey shows they believe that only the grunt workers of the law, the document and contract reviewers, and the like, will be displaced.
I used to think that too. A self-serving vanity perhaps? But, I must now accept the evidence. Even if your legal work does involve considerable creative thinking and legal imagination, it is not for that reason alone secure from AI replacement. There may be many other reasons that your current job is secure, or that you only have to tweak your work a little to make it secure. But, for most of us, it looks like we will have to change our ways and modify our roles, at least somewhat. We will have to take on new legal challenges that emerge from Ai. The best job security comes from continuous active learning.
Recent “Above The Law” – Wolters Kluwer Survey
Surprisingly, I agree with most of the responses reported in the survey described in Generative AI in the Law: Where Could This All Be Headed? I will not go over these, and instead just recommend you read this interesting free report (registration required). My article will only address the one opinion that I am very skeptical about, namely whether or not high-level, creative legal work is likely to be transformed by AI in the next few years. A strong majority said no, that jobs based on creative legal analysis are safe.
Most of the respondents to the survey did not think that AI is even close to taking over high-level legal work, the experienced partner work that requires a good amount of imagination and creativity. Over two-thirds of those questioned considered such skilled legal work to be beyond a chatbot’s abilities.
At page six of the report, after concluding that all non-creative legal work was at risk, the survey considered “high-level legal work.” A minority of respondents, only 31%, thought that AI would transform complex matters, like “negotiating mergers or developing litigation strategy.” Almost everyone thought AI lacked “legal imagination,” especially litigators, who “were the least likely to agree that generative AI will someday perform high-level work.” This is the apparent reasoning behind the conclusions as to whose jobs are at risk. As the ATL Wolters report observed:
The question is: Can an AI review a series of appellate opinions that dance around a subject but never reach it head on? Can the AI synthesize a legal theory from those adjacent points of law? In other words, does it have legal imagination? . . .
One survey respondent — a litigation partner — had a similar take: “AI may be increasingly sophisticated at calculation, but it is not replacing the human brain’s capacity for making connections that haven’t been made before or engaging in counterfactual analysis. . . .”
The jobs of law firm partners are safest, according to respondents. After all, they’re the least likely group to consider themselves as possibly redundant. Corporate work is the area most likely to be affected by generative AI, according to almost half of respondents. Few respondents believe that AI will have a significant impact on practices involving healthcare, criminal law or investigations, environmental law, or energy law.
After having studied and used ChatGPT for hundreds of hours now, and after having been a partner in one law firm or another for what seems like hundreds of years, I reluctantly conclude that my fellow lawyers are mistaken on the creativity issue. Their response to this prompt appears to be a delusional hallucination, rather than insightful vision.
The assumed safety of the higher echelons of the law shown in the survey is a common belief. But, like many common beliefs of the past, such as the sun and planets revolving around the Earth, this opinion may just be a vain delusion, a hallucination. It rests on the belief that humans in general, and these attorneys in particular, possess unique and superior creativity. Yet careful study shows that creativity is not a uniquely human skill at all. AI seems very capable of creativity in all areas. That was shown by standardized TTCT creativity test scores in a report released the same day as the ATL Wolters Survey. ChatGPT-4 scored in the top 1% on standardized creativity testing.
Also, consider that human creative skills are not as easy to control as generative AI creativity. As previously shown here, GPT-4’s creativity can be precisely controlled by skilled manipulation of the Temperature and Top_P parameters. Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings. How many law firm partners can precisely lower and raise their creative imagination like that? (Having drinks does not count!) Imagine what a GPT-5 level tool will be able to do in a few years, or even months. The creativity skills of AI may soon be superior to our own.
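Those Temperature and Top_P controls are simply request parameters sent to the model. Here is a minimal sketch in Python of how the two settings can be attached to an OpenAI chat request; the helper function and the example values are my own illustration, not code from the article cited above:

```python
# Sketch only: building a chat request with explicit "creativity" settings.
# The request is merely constructed here; actually sending it requires the
# openai package and an API key (shown commented out at the bottom).

def creativity_request(prompt: str, temperature: float = 0.7,
                       top_p: float = 1.0) -> dict:
    """Build a chat-completion request with explicit creativity settings.

    temperature (0.0-2.0): higher values make output more varied/creative.
    top_p (0.0-1.0): nucleus sampling; lower values restrict the token pool.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be between 0.0 and 1.0")
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }

# Conservative, near-deterministic settings:
precise = creativity_request("Define creativity.", temperature=0.0)

# Looser, more imaginative settings:
creative = creativity_request("Define creativity.", temperature=1.2, top_p=0.95)

# To actually send one of these (requires an API key):
# import openai
# response = openai.ChatCompletion.create(**creative)
```

A Temperature near 0.0 makes the output nearly deterministic; raising it, or lowering Top_P’s cutoff, widens the range of words the model will sample, which is what reads to us as greater creativity.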
The ATL and Wolters Kluwer survey not only reveals an opinion (more like a hope) that creative legal work is safe, it also shows most lawyers believe that legal work requiring little creativity will soon be replaced by AI. That includes the unfairly maligned and often unappreciated document review attorneys. It also includes many other attorneys who review and prepare contracts. They may well be the first lawyers to face AI layoffs.
EDRM provides relevant free training and you should connect with EDRM today. Also remember the free online training programs in e-discovery and AI-enhanced document review started on the e-Discovery Team blog years ago. They are still alive and well, and still free, although they are based on predictive coding rather than the generative AI released in November 2022.
e-Discovery Team Training. Eighty-five online classes, proven in law school use. Started at UF in 2010. Covers the basics of e-discovery law, technology and ethics.
TAR Course. Eighteen online classes providing advanced training on Technology Assisted Review. Started in 2017, the course is regularly updated and appears as a tab in the upper right corner of the e-Discovery Team blog. Below is a short YouTube video describing the TAR Course. Ralph used the latest generative AI to create it.
The e-Discovery Team blog also provides the largest collection of articles on artificial intelligence from a practicing tech-lawyer’s perspective. So far in 2023, thirty-seven articles on artificial intelligence have been written, illustrated and published. It is now the primary focus of Ralph Losey’s research, writing and educational efforts. Hopefully many others will follow the lead of EDRM and the e-Discovery Team blog and provide free legal training in next-generation legal AI skills. This trend is widely expected to accelerate.
Get ready for tomorrow. Start training today, not only with the courses mentioned, but by playing with ChatGPT. Most versions are free, and it is everywhere. For instance, there is a ChatGPT bot on the e-Discovery Team website (bottom right). Ask it some questions about the content of this blog, or about anything. Better yet, sign up for a free account with OpenAI. They recently dropped all charges for the no-frills 4.0 version. Try to learn all that you can about AI. ChatGPT can tutor you.
This positive vision for the future of law, and of all humanity, is suggested by the video below. It illustrates a bright future of human lawyers and their AI bots, which, despite appearances, are tools, not creatures. They are happily working together. The video was created using the AI tools GPT-4 and Midjourney. The creativity of these tools both shaped and helped express the idea. In other words, the creative capacities of the AI guided and improved the human creative process. It was a synergistic team effort. This same hybrid team approach also works with legal creativity, indeed with all creativity. We have seen this many times before as our technology advances exponentially. The main difference is that the AI tools are much more powerful and the change greater than anything seen before. That is why the lawyers shown here are happy working with the bots, rather than competing with them.
I tried to poke holes in the standard test used, the Torrance Tests of Creative Thinking, and in the research of Professor Guzik, but ended up with respect for the TTCT, its creator, Professor E. Paul Torrance, and for Professor Guzik. My conclusion is that the standardized testing research to date strongly supports the conclusion many others have already reached: Generative AI has extraordinary abilities of creative thinking.
University of Montana Research on Humans and GPT-4 Using Standard Creativity Test
On July 5, 2023, the University of Montana announced research by Erik Guzik, PhD, a professor at its College of Business, and his colleagues, finding that ChatGPT scored in the top 1% of human thinkers on a standard creativity test. UM Research: AI Tests Into Top 1% For Original Creative Thinking. The scientific paper on the research has not yet been released, so I wrote Professor Guzik for more information. He confirmed that his group made a formal presentation of the details of their findings in May 2023 at the Creativity Conference 2023, Southern Oregon University, and that a paper should be published in August 2023. The conference presentation was entitled: The creative potential of ChatGPT: An exploratory study of the Torrance Tests of Creative Thinking and the fluency, flexibility and originality of ChatGPT (GPT-4). Professor Guzik graciously sent me the slide deck his group used and a link to the video recording of the presentation, including questions and answers. The presentation to this group of creativity experts is persuasive.
Guzik’s team used the Torrance Tests of Creative Thinking (TTCT), a widely recognized tool employed for decades to evaluate human creativity. ChatGPT generated eight test responses, each different. The short-essay answers were submitted to the Scholastic Testing Service for assessment, along with those of 24 students in Professor Guzik’s entrepreneurship and personal finance classes. The ChatGPT-4 responses to the creativity questions were compared with those of Guzik’s human students, as well as with those of 2,700 college students who took the TTCT in 2016. The Scholastic Testing Service was unaware of the AI’s involvement and independently scored all of the submissions.
ChatGPT ranked in the top one percentile for fluency, showcasing its ability to generate a large volume of ideas, and for originality, demonstrating the capacity to produce new ideas. On flexibility, the capacity to create different types and categories of ideas, the AI scored in the 97th percentile. The overall ranking by Scholastic put ChatGPT in the top 1%.
The test results are strong evidence of AI developing creative abilities comparable, or even superior, to human capabilities. Professor Guzik emphasized his surprise at how well ChatGPT performed in generating original ideas, typically considered a unique characteristic of human imagination. With the advanced GPT-4, ChatGPT now scores in the top 1% of all human responses, surpassing previous research on GPT-3, which did not score as well as humans on tasks involving original thinking.
Erik Guzik reported that he later told ChatGPT of the test results and asked for its comment. ChatGPT-4 responded by saying human creativity may not be fully understood and more sophisticated assessment tools may be needed “to differentiate between human and AI-generated ideas.” I agree with ChatGPT, but right now, all we have is the TTCT. In view of the way the test operates, which is explained in the conference presentation, it is not possible for GPT-4 to have been trained on the TTCT test results. The test itself is proprietary, and the students’ short-essay answers are scored by individual experts at the Scholastic Testing Service.
To fact-check and help prepare this article, I also asked ChatGPT-4 about the TTCT and, as usual, checked the answers with independent Google-assisted research. I learned the TTCT is a series of standardized tests developed by psychologist E. Paul Torrance in the mid-20th century. The TTCT includes both verbal and figural sections, but according to Professor Guzik, only the verbal tests were given to ChatGPT, because the version they used could not review the drawings used in the figural sections. The verbal section involves tasks such as asking the test-taker to think of problems that could arise from an unusual situation. Scoring considers not only the originality of responses, but also their viability, the practical value of the responses. The TTCT has been used and refined over decades to measure human creativity along what it calls four “dimensions” of creativity:
Fluency: This measures the quantity of ideas produced. It involves tasks like listing as many ideas as possible within a given timeframe. Again, the experts scoring the answers also evaluate the practical value of the responses.
Flexibility: This refers to the diversity of ideas and the ability to shift between different types of ideas or approaches to problems.
Originality: This assesses the uniqueness of the ideas generated. An original idea is one that is statistically infrequent among the responses of people of the same age and background. Again, the ideas have to be more than just original; gibberish, Mad Hatter-type responses will not score well.
Elaboration: This measures the amount of detail in the responses, or the ability to develop ideas and ‘build’ on them to create complex structures or plans.
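The four dimensions above can be pictured as a simple data structure. What follows is a toy illustration only: the Scholastic Testing Service’s actual scoring rubric is proprietary, and all of the names and numbers here are hypothetical, chosen just to show how a score along four dimensions could be compared against a cohort:

```python
# Toy sketch: a TTCT-style score along four dimensions, plus a simple
# percentile-rank comparison against a cohort of total scores.
# NOT the Scholastic Testing Service's real method; numbers are made up.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class TTCTScore:
    fluency: int        # quantity of ideas produced
    flexibility: int    # diversity of idea types and categories
    originality: int    # statistical infrequency of the ideas
    elaboration: int    # amount of detail developed

    def total(self) -> int:
        return self.fluency + self.flexibility + self.originality + self.elaboration

def percentile(score: int, cohort_totals: list[int]) -> float:
    """Percent of the cohort scoring strictly below `score`."""
    ranked = sorted(cohort_totals)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

# Hypothetical numbers for illustration only:
ai_taker = TTCTScore(fluency=99, flexibility=97, originality=99, elaboration=95)
cohort = [240, 250, 260, 270, 280, 300, 310, 320, 330, 350]
print(percentile(ai_taker.total(), cohort))  # total 390 beats all 10 -> 100.0
```

The point of the sketch is only that a single headline ranking, like “top 1%,” summarizes several distinct sub-scores, which is why ChatGPT could place in the top one percentile on fluency and originality yet the 97th percentile on flexibility.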
The Torrance Tests of Creative Thinking are commonly used in a variety of settings, from versions for children used in school vocational counseling to versions for adults used by businesses evaluating executives’ creative entrepreneurial capabilities. That is why the test is well known and commonly used in business schools, such as the one at the University of Montana where Professor Erik Guzik works.
The TTCT appears to be, as ChatGPT reports, one of the most reliable and valid measures of creativity and is used widely around the world. But ChatGPT also observes that some researchers question whether the TTCT, or any other test, can fully capture the complexity and multifaceted nature of creativity. ChatGPT reports debates among experts about the cultural bias of such tests and whether they measure creativity or some other related construct like divergent thinking, or in ChatGPT’s case, improbable thinking.
E. Paul Torrance and His Creativity Test
I was very skeptical about the creativity test results before I corresponded with Erik Guzik and studied the conference presentation. As part of my vetting of the test, I also dug deeper and researched the life and work of the man who created the TTCT, Ellis Paul Torrance. Torrance’s pre-graduate-school background in the 1930s and 1940s was as a counselor and high school teacher in rural south Georgia. (This is the same neck of the woods as former President Jimmy Carter, of whom he reminds me.) Paul Torrance later earned a Ph.D. from the University of Michigan and eventually became a professor at the University of Georgia, in Athens, Georgia, where he wrote 88 books!
This NIH-published article also explained that Torrance had a personal vision of the definition of creativity, which is reflected in the way he measured it. Professor Torrance defined creativity as:
A process of becoming sensitive to problems, deficiencies, gaps in knowledge, missing elements, disharmonies, and so on; identifying the difficulty; searching for solutions, making guesses, or formulating hypotheses about the deficiencies; testing and retesting these hypotheses and possibly modifying and retesting them; and finally communicating the results.
For comparison, here is how ChatGPT-4 itself defined creativity when I asked it:

Creativity is a mental spark that illuminates originality, crafting fresh ideas and unique solutions that hold value. It’s an intellectual dance, weaving thoughts together in ways unseen before. Not confined to art or literature, it flourishes in science, technology, and daily life, offering new perspectives and innovations. It’s our ability to venture beyond known boundaries, reimagining and reshaping our world. Creativity is the birthmark of human ingenuity and a key driver of evolution and progress.
ChatGPT-4 (Temperature setting 0.7)
I guess that is a top-1%-worthy answer, although, as a long-time student of creativity, I prefer ChatGPT’s conservative answer with a 0.0 Temperature setting:
Creativity is the human ability to generate unique and innovative ideas, products, or solutions that are both novel and valuable. It involves original thinking, imagination, and the capacity to transcend traditional ideas to create meaningful new ones. It’s present in arts, science, technology, and everyday problem-solving, reflecting our capability to envision, invent, and express in distinctive ways.
It is impressive to see that the tests first developed by E. Paul Torrance in mid-century rural Georgia have evolved and been successfully employed in 2023 to test the creativity of ChatGPT-4, a difficult wild colt of a student if ever there was one. Hats off to Professor Torrance, whose TTCT works even on robots. Can you imagine how thrilled he would have been to evaluate robot creativity in south Georgia?
It is important to remember that the last version of OpenAI’s chatbot, ChatGPT-3.5, flunked this same TTCT creativity test. It was only upgraded to version 4.0 in March 2023. Also remember there is no possibility that GPT-4 had seen and memorized the “best” creativity answers. Plus, the results were anonymously scored by the Scholastic Testing Service and compared with those of 2,700 college students who took the TTCT in 2016. Finally, recall this was not just a one-time fluke of creativity by GPT-4; it took the test eight times.
All this makes me think these first tests of Generative AI are accurate and will be confirmed over time. It does not get much better than the top 1%, but if AI does keep improving in this area, it might become impossible to test it anonymously. AI super-intelligence is likely to be pretty obvious when it arrives, especially to the human experts in creativity at the Scholastic Testing Service who evaluate the answers.
These first test results are consistent with my ad hoc studies and use of ChatGPT-4 over hundreds of hours. They are also consistent with the general reactions of most other users of GPT, who were surprised by its many creative abilities, including a very high level of creativity with visual images.
As a Photoshop user since its early days in the 90s and, to a lesser extent, an amateur videographer and Final Cut user since the early 2000s, I am blown away by the images that generative AI software like Midjourney can now be prompted to create. Plus, it keeps getting better every day. It is very hard to keep up with the new software features.
Generative AI is still far from the quality of the best human artists. Not yet. For instance, I recently had an opportunity to show Billy Collins a poem that I prompted GPT-4 to write in his style. Billy politely smiled at its amateur effort. But then he gave me a suggestion on how to improve the prompt in any subsequent efforts. Other forms of generative AI output are not world class either, including music and, especially, non-digital arts like sculpture and ceramics.
Still, the day may come when AI can compete with the greatest human creatives in all fields, including the creativity required for successful entrepreneurship, as Business School Professor Erik Guzik teaches. More likely, the top 1% in all fields will be humans and AI working together in a hybrid manner, each synergistically boosting the other’s abilities and productive output. That has been my experience in a small way, as reflected in the changes to my blog since ChatGPT arrived in November 2022.
Ralph Losey is a Friend of AI with over 740,000 LLM Tokens, Writer, Commentator, Journalist, Lawyer, Arbitrator, Special Master, and Practicing Attorney as a partner in LOSEY PLLC, a high-tech oriented law firm started by Ralph's son, Adam Losey. We handle major "bet the company" type litigation, special tech projects, deals, IP of all kinds all over the world, plus other tricky litigation problems all over the U.S. For more details of Ralph's background, Click Here
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children, Eva Losey Grossman, and Adam Losey, a lawyer with incredible litigation and cyber expertise (married to another cyber expert lawyer, Catherine Losey), and best of all, husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.