News Flash – First Suit Filed Against ChatGPT For Hallucinatory Libel

A libel suit was filed on June 5, 2023, in Georgia against OpenAI for libelous writings by ChatGPT. This appears to be the first lawsuit against an AI, or at least the first against ChatGPT for libel. See below for the full complaint.

In view of the spoofing and reliability issues prevalent now, I felt compelled to personally verify with plaintiff's counsel, John R. Monroe, that this was a real case. He verified that it is, and also provided me with a true and correct copy of the actual complaint he filed in Georgia State Court. This was by email a few minutes ago. He said nothing to me about the case (as is entirely proper, although I did ask for a comment). Mr. Monroe just sent me the complaint. I also personally verified that John R. Monroe is a member in good standing of the Georgia Bar. The libel complaint is a quick read and I suggest you check it out. See below and attached. I will try to follow this case as it progresses.

Basically, the allegations are that ChatGPT (version not identified) libeled the plaintiff by hallucinating and telling a journalist about a lawsuit that did not, and does not, exist. In this made-up lawsuit, the libeled plaintiff here, Mark Walters, had supposedly been sued for "breach of fiduciary duty, fraud, and other claims arising from Walters' misappropriation of SAF's funds and assets for his own benefit, and his manipulation of SAF's financial records and bank statements to conceal his activities." (SAF is the Second Amendment Foundation.) See Exhibit "A" to the libel complaint below and attached.

This hallucination was prompted by a third-party journalist and OpenAI subscriber, Fred Riehl, who used ChatGPT to summarize a real case concerning Second Amendment issues. The journalist asked ChatGPT to provide a summary of the allegations and gave ChatGPT a link to the complaint (thus I presume this was a newer version of ChatGPT that can read links, but perhaps not, which would explain a lot). The link and the complaint appear to be from a real case in federal court concerning Second Amendment rights, one which seeks a declaration and injunctive relief: https://www.saf.org/wp-content/uploads/2023/05/Dkt-1-Complaint.pdf.

ChatGPT's summary of this real case is where the AI hallucination and alleged libel begin. Walters alleges that ChatGPT told the journalist, Riehl, who was supposedly investigating the Second Amendment case, that the case was against Walters, which it was not (Walters was not a party to the case, although he is apparently well-known on Second Amendment issues). ChatGPT also told Riehl that the case accused Walters of defrauding and embezzling funds. There are no such allegations. The case has nothing to do with Walters. The journalist then asked ChatGPT "to provide him with a copy of the portion of the complaint related to Walters." ChatGPT replied and provided the same made-up allegations against Walters that were supposedly in the complaint. Riehl then asked ChatGPT to provide him the entire text of the complaint. That is when ChatGPT responded by generating the fictitious complaint, Exhibit "A" to Walters' complaint. The fabrication even purports to be filed in federal district court and carries a made-up case number, but has no signature line.

I am going to refrain from analysis and adopt a non-interference position here; just report the facts and get this news out fast. See below for the libel complaint and its exhibit. This is real.

Ralph Losey Copyright 2023 – All Rights Reserved

5 Responses to News Flash – First Suit Filed Against ChatGPT For Hallucinatory Libel

  1. Gregory Bufithis says:

    My response needs a longer post. I am a subject matter expert on a collateral study on these very libel issues for a digital media association on which I serve in an advisory capacity. I cannot detail my full opinion but can offer this:

    Suffice it to say I think "libel" is a stretch, given the law was written to apply to a defendant with "a state of mind." That means the AI would have to know the output was false, or generate a response with reckless disregard for whether it was true, a difficult standard to apply to an inanimate tool. The pleadings in this case will be interesting 🧐 🤔

    I suspect we’ll see a claim that the training process did not adequately train the model to account for the sequence of words needed to represent the correct facts. That could result in the model “recalling” incorrect facts or simply *hallucinating* a response.

    Another possibility is that the defamatory statement resulted by chance. The model allows a degree of randomness in how it applies the probabilities. This is called the "temperature." A higher temperature allows more creative writing (that is, randomness) in the response that is returned. A lower temperature will tend to produce the same response each time. (A toy sketch of this sampling step appears at the end of this comment.)

    What we’ve seen so far is that ChatGPT is *creative*, so successive attempts at the same prompt will usually return different responses. Similarly, different users may see different responses to the same prompt.

    This will be a fascinating legal area. But not simple, not straightforward. I am also watching a few other libel cases where ChatGPT is not named (just the user) so those pleadings will also be of interest.
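
    For anyone who wants to see the mechanics, here is a minimal Python sketch of temperature sampling. It is purely illustrative, a toy version of the general technique with made-up numbers, not OpenAI's actual code, but it shows how a higher temperature can, by chance, surface an unlikely continuation:

        import math
        import random

        def sample_with_temperature(logits, temperature=1.0):
            # Divide the raw scores by the temperature: values below 1.0
            # sharpen the distribution, values above 1.0 flatten it.
            scaled = [score / temperature for score in logits]
            # Standard softmax, with the max subtracted for numerical stability.
            peak = max(scaled)
            exps = [math.exp(s - peak) for s in scaled]
            total = sum(exps)
            probs = [e / total for e in exps]
            # Draw one token index according to those probabilities.
            return random.choices(range(len(logits)), weights=probs, k=1)[0]

        # Toy next-word candidates and hypothetical model scores.
        vocab = ["dismissed", "settled", "embezzled"]
        logits = [2.0, 1.0, 0.2]

        # Low temperature: almost always the most likely word.
        print(vocab[sample_with_temperature(logits, temperature=0.2)])
        # High temperature: the improbable word turns up now and then.
        print(vocab[sample_with_temperature(logits, temperature=2.0)])

    Run it a few times at each setting: the low-temperature calls repeat, while the high-temperature calls wander. That is the "creativity" I mentioned above.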

  2. Aaron Taylor says:

    Simply a non-lawyer observation here: In addition to Mr. Bufithis's astute observations, a troubling feature (among others) of this 'hallucination' was the intentional creation of a new case number; that act seems to stand apart from output explained as simply a randomly-assembled summary or compilation of available information or a 'training' set of data.

    Some sort of algorithmic logic would have to exist that compelled the chatbot to create a fictitious number… at least, human logic would tend in that direction. Perhaps I'm having a hallucination.

  3. Gregory Bufithis says:

    Aaron: you are on to something, and without getting too much into the tubes and wires and pipes of this thing, a few thoughts:

    I think what large language models have done is surprising – in a funny way. What these LLMs show, interestingly, is how much of our language is very much rote, R-O-T-E, rather than generated directly, because it can be collapsed down to these sets of parameters. What the large language models are good at is saying what an answer should SOUND like, which is different from what an answer SHOULD BE. And this goes to its *creation* of a new case number.

    And this also goes to my analysis of why I think the copyright claims against Midjourney and other image generators are unfounded. My answer is that AI art models do not use or store the images themselves, but rather mathematical representations of patterns collected from those images. The software does not piece together bits of images in the form of a collage, either; it creates pictures from scratch based on these mathematical representations.

    I do not think GPT-5 or GPT-6 or GPT-7 is going to make a lot of progress on these issues, because it still will not have any underlying model of the world. It doesn't have any connection to the world. It is just correlation between pieces of language.

    I have a short, kind of “spacey wacey” look at these things (no, I was not on mushrooms) which I’ll share on this thread tomorrow. Midnight 🕛 here and I am toast.

    But do look at the following piece. Not an easy read but stick with it and much will be made clear:

    https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

  4. […] Team® blog (News Flash – First Suit Filed Against ChatGPT For Hallucinatory Libel, available here), the allegations are that ChatGPT (version not identified) libeled the plaintiff by hallucinating […]

  5. Eric Thompson says:

    The lawsuit is going to harm Plaintiff’s reputation much more than ChatGPT did.

