Exclusive Report: New Appellate Court Opinions on Artificial Intelligence and Hallucinations

October 4, 2023

I unearthed important new case law this week that, so far as I can tell, has not been reported or discussed anywhere before. This article gives an exclusive report of three appellate court opinions that discuss artificial intelligence and hallucinations. This is a key issue of our times.

Fake image by Ralph of AI Zombies Mind Controlled by CIA.

The hallucinations in question are not, mind you, by an AI, although AIs play a part in them. The hallucinations are by the plaintiffs themselves, including, just for instance, allegations of AI robot zombies and vast CIA conspiracies. Did you know Charles Barkley was an agent using mind control to turn humans into artificial intelligence? The pro se plaintiff wanted $45 million in damages for that claim!

You may well wonder, but I assure you these appellate court opinions are all quite real.

Aljindi v. United States

I’ll start with my favorite, the case of the PhD who invented AI, or claims to, Dr. Ahmad Aljindi. Aljindi v. United States, 2023 U.S. App. LEXIS 8069; 2023 WL 2778689 (Fed. Cir., 4/5/23); Aljindi v. United States, 2022 U.S. Claims LEXIS 2586 (Fed. Cl., Nov. 28, 2022). In his latest pro se suit, this time against the U.S. government for copyright infringement, Aljindi claims that he not only invented Artificial Intelligence, but also Information Security and Legacy Information Systems. Ahmad Aljindi, who has a history of pro se litigation, received a PhD in Business Administration in 2015 from an online school, Northcentral University. His PhD dissertation must have been awe-inspiring.

Image of Delusional Young Man at Work by Ralph using various ingenious AI tools.

Did he really invent all these things, or is he hallucinating? Hard to say, isn’t it? Better take this one all the way up to the appeals court. I am surprised the U.S. Supreme Court did not weigh in too.

The history of this case also seems like a hallucination. This suit, as first pled, “alleged various claims, including employment discrimination; intellectual property theft; ‘negligence and tort.’” Aljindi v. United States, No. 2022-1117, 2022 U.S. App. LEXIS 12530, 2022 WL 1464476, at *1 (Fed. Cir. May 10, 2022). Aljindi’s pro se complaint included allegations of “ongoing judicial corruption, abuse, and torture in addition to the Government’s abuse and torture.” The usual thing.

Delusional thinking and hallucinations trouble many humans, not just Generative AI. Psychedelic art image by Ralph.

The lawsuit was dismissed by the trial court, the Court of Federal Claims. Then the good doctor appealed. The appeals court affirmed the dismissal, of course, but, and here is the funny part, the dismissal was only affirmed in part. That’s right, the appeals court remanded the case back to the trial judge, who must have been thrilled, since it is well known that trial judges love to abuse and torture. Just ask any attorney. In ordering the remand the appellate court, no doubt with substantial help from its law clerks, explained its actions:

But we vacated-in-part the trial court’s dismissal because Dr. Aljindi’s complaint “mentioned copyrights law violations in the relief section,” which could “be liberally construed as a copyright infringement claim over which the Court of Federal Claims would have jurisdiction.” 2022 U.S. App. LEXIS 12530, [WL] at *3 (cleaned up). Accordingly, we remanded for the trial court “to consider the Government’s position that Dr. Aljindi’s complaint fails to state a claim for copyright infringement.” Id.

Aljindi v. U.S., 2023 U.S. App. LEXIS 8069, at *2 (4/5/23)
Genius at work inventing AI and Cybersecurity. Digital image by Ralph.

Apparently some appellate law clerks wanted to read more of Aljindi’s amazing claims and talked their judges into a partial remand, out of an abundance of caution, of course. They were not disappointed. Aljindi on remand now claimed to have invented AI, Information Security, and Legacy Information Systems, because, why not? Al Gore did invent the Internet, after all.

These “unusual” claims were made by Dr. Aljindi to try to support his pleading for copyright violation. Surprisingly, that tactic did not work. The copyright claims were dismissed by the trial judge because, duh, you cannot copyright ideas, even hallucinatory ones. Aljindi, of course, appealed again, much to the appellate clerks’ delight. I can almost see them fist-pumping and saying, yes! Plan well done.

Young law clerks celebrate a rare moment of levity. Fake photo by Ralph.

The Federal Circuit took the time, again, to write a per curiam opinion affirming the dismissal. All part, I suppose, of what Aljindi called “ongoing judicial corruption, abuse, and torture.” Here are some select quotes. Again, you be the judge: hallucinations or not? (Citations to the record omitted.)

Dr. Aljindi argued on appeal that the “Government used [his] property in ALL formal AI Strategies published by the federal government . . . as [he had] discovered this entire scientific field in its entirety.”

Dr. Aljindi clarifies in his briefing, however, that his copyright claim is not founded on any alleged infringement of the copyrightable aspects of his dissertation; rather, he explains that “[t]he scientific intellectual property” at issue is “the discovery of the entire Information Security, AI, and LIS scientific field in its entirety and establishing this scientific field from scratch.” (Dr. Aljindi arguing that “[e]verything is based on [his] scientific research and [his] own property”) . . . . Dr. Aljindi does not identify any specific expression of these ideas and concepts that the government allegedly copied; instead, he repeatedly contends generally that “everything built on top of [his] property is [his] property.”

[H]ow did these federal agencies . . . know about the relationship between AI, Information Security, and LIS without reading and taking my property and building on its formal scientific findings!

Aljindi v. U.S., 2023 U.S. App. LEXIS 8069, at *2-3 (4/5/23)

How indeed?

Sometimes tortured souls have delusions of grandeur to try to cope. I’ve done that a few times myself. Image of a mad genius by Ralph.

I can imagine Dr. Aljindi thinking to himself, how else could they have possibly known? It’s mine, all mine, I say, stolen by the evil feds. I will sue you all!

Yes, I swear, this is a real opinion, not a delusion. So are the next two, which, in some ways, are even better.

Emrit v. Barkley

This is another pro se case (they are the best for hallucinations), where the Third Circuit bothered to write a per curiam opinion on AI and hallucinations. Once again, I suspect the judges’ clerks talked them into it. Emrit v. Barkley, 2023 U.S. App. LEXIS 11188; 2023 WL 3300970 (3rd Cir., 5/8/23). The plaintiff here is infamous, having filed over 500 pro se lawsuits across the country. This one is against former NBA basketball player Charles Barkley and the Subway fast-food chain. It involves both AI and the CIA. Of course, the CIA has long been known to be using AI for nefarious ends. What we did not know, until this lawsuit enlightened us, is how closely Barkley and Subway were involved. Pro se plaintiff to the rescue!

Image by Ralph depicting Charles Barkley as an AI evil genius.

Emrit claims in his Appeal Brief that the “CIA utilizes advertisements of Charles Barkley, Subway, Fan Duel, and sporting goods to annoy or harass” him. Id. at 5. Emrit requested $45 million in damages. Id. at 9. The trial judge dismissed the original pleading as frivolous. Can you imagine? Still, Emrit appealed to the Third Circuit and tried again.

Emrit argued in his appeal that the Barkley, Subway, and other “advertisements provided a way for technology companies to ‘engage in a form of mind control to turn humans into artificial intelligence.’” Yup, Barkley and Subway are part of a secret CIA mission to turn humans into Artificial Intelligence. Apparently, all the big tech companies are in on it too. Maybe they have already been turned into AI. It’s not clear from the pleadings. What is clear is the allegation that the CIA is able to turn humans into AI by mind control using television and advertisements, especially ones with Charles Barkley in them. Who can resist the trance-inducing eyes of Charles Barkley?

Those are not the kind of allegations that appellate court law clerks, usually fresh out of law school, read every day. Usually it is pretty boring stuff: one company suing another, blah, blah. I have no doubt the clerks of the Third Circuit were happy to read this nonsense and eagerly passed the Barkley briefs around.

Law clerks at a glass table celebrating. Digital art image by Ralph.

Of course, the Third Circuit affirmed the lower court’s dismissal without even a partial remand, “because Emrit’s complaint is frivolous.” Really? But what about copyright? I guess these clerks were not as persuasive as the ones in Aljindi v. United States. Still, they provided explanations of the Barkley AI hallucinations in the per curiam opinion quoted above, and we are all better for that.

I have a suspicion that we have not seen the last of this particular hallucination. We may see it in a movie some day. Turning people into plain old zombies is getting kind of old. Robot-Artificial-Intelligence zombies are much better. Plus, it is well known that anything with AI in it these days sells, especially if the AIs are crazy. No doubt a copyright suit or two will eventually come out of all of this as well.

Hallucinatory image by Ralph of humans turned into AI robots by CIA mind control.

Mateen v. FNU LNU

Now it’s the Fifth Circuit clerks’ turn to have fun and write a per curiam affirmance opinion on a different AI hallucination. Mateen v. FNU LNU, 857 Fed. Appx. 209 (5th Cir. 2021). If you are at all squeamish, you might not want to read on. By the way, the name of the mysterious defendant in this case, FNU LNU, is a placeholder commonly used in the justice system when the identity of the person or persons charged or sued remains a mystery. Such defendants are often listed in court records as “Fnu Lnu,” shorthand for “First name unknown, Last name unknown.”

This one involves a pro se prisoner, Shazizz Mateen, aka Reginald Bowers, with a very serious criminal record. As a prisoner, he sued, in federal court in Texas, unknown people at an unknown ambulance company and unknown people at an unknown hospital. Shazizz alleged that these unknown persons were all part of “a vast conspiracy pursuant to which, inter alia, he was lobotomized and had an artificial-intelligence chip inserted into his brain that turned him into an android slave.” The appeal was heard by Judge Jolly. I kid you not.

Fake AI Photo by Ralph of a prisoner after brain surgery.

It is bad enough to be in prison, but to also have an AI chip put in your brain, that’s tough. Then to have the AI chip turn you into an android slave? Well, it does not get any worse than that. Maybe he deserves it, who knows. Still, his allegations were bizarre enough that most law clerks would want to write an opinion about them for their judge. The law clerks here almost didn’t get their way. The appeals court panel, led by Judge Jolly, agreed to allow a per curiam affirmance, but only if it was technically an “unpublished” opinion with no precedential value. That’s why it can only be found online as part of the Federal Appendix.

So apparently the law clerks got their way and the case was published, but it was unpublished at the same time. Sounds like a zombie opinion to me. Another sign that a zombie film about AI chips in prisoners’ brains will likely be coming soon too.

AI zombie robots image created by Ralph using AI.

Conclusion

Humans are just as likely to be wrong and make things up as AIs are, maybe even more likely to do so, especially when they play lawyer. So do not be scared away from using AI tools like ChatGPT just because they sometimes make mistakes and make things up, i.e., hallucinate. So do humans, even lawyers; just read the news. That is why lawyers still have job security: someone must double-check and quality-control for errors and hallucinations.

Digital art by Ralph and his AIs depicting a human lawyer in court making a fool of himself.

By the way, no AI was used in the research for this article, nor for any of the writing. From my experience, GPTs are not that good at writing humor and do not understand the subtle nuances of satire. Still, AI is pretty good at accurate summaries, so I asked ChatGPT-4 to summarize this article. Here are its words.

Summary: The blog discusses three appellate court opinions centered on artificial intelligence (AI) and hallucinations. The discussed hallucinations are by the plaintiffs, not by AI, including outlandish claims like AI robot zombies and conspiracy theories involving Charles Barkley using mind control to turn humans into AI, with a demand for $45 million in damages.

  1. Aljindi v. United States:
    • Dr. Ahmad Aljindi, claiming to have invented AI, Information Security, and Legacy Information Systems, filed a pro se suit against the U.S. government for copyright infringement.
    • The trial court dismissed his claims, but the appellate court remanded for further consideration on the copyright infringement claim.
    • On remand, Aljindi continued his astounding claims, but the trial judge again dismissed them, resulting in another appeal.
  2. Emrit v. Barkley:
    • Pro se plaintiff filed a case against Charles Barkley and Subway, alleging a conspiracy with the CIA using advertisements for mind control and turning humans into AI.
    • The plaintiff sought $45 million in damages.
    • The case was dismissed as frivolous by the trial judge and affirmed by the Third Circuit, highlighting the wild accusations of mind control and AI transformation.
  3. Mateen v. FNU LNU:
    • A pro se prisoner alleged that unknown persons inserted an AI chip into his brain, turning him into an android slave as part of a vast conspiracy.
    • The appeals court allowed a per curiam affirmed opinion but noted it as unpublished with no precedent value.

The blog concludes by noting that humans, like AI, make mistakes and hallucinate, and while AI tools like ChatGPT might sometimes err, human oversight can help manage these issues, offering a degree of job security for legal professionals.

ChatGPT-4 summarizing this article

It is a decent, albeit dry, summary. The summary does not seem to grasp the satirical and inside-baseball humor of this article, or at least my attempts at humor. That confirms my point about AIs not being ready for prime-time comedy. Still, to be fair, I asked ChatGPT-4 to prepare jokes based on this article. Here is what it came up with. You be the judge of its quality.

Why did the man sue Charles Barkley and the CIA after eating a sandwich?

Because he believed the “sub-way” was leading him on a path to becoming an artificially intelligent robot, and he wanted his “byte” of the $45 million in damages for extra “processing” power! 🥪🤖

_______________

Why did the blogger love covering appellate court cases about AI and hallucinations? 

Because where else can you find “artificial intelligence” and “genuine delusions” co-starring in the same legal drama!

ChatGPT-4

Digital art image by Ralph summarizing this Blog.

Ralph Losey Copyright 2023. All Rights Reserved.


REAL OR FAKE? New Law Review Article Provides a Good Framework for Judges to Make the Call

June 13, 2023
Losey Midjourney Image of a GPT Judge

The GPTJUDGE: Justice in a Generative AI World article will be published in October by Duke Law & Technology Review. The authors are Maura Grossman, Paul Grimm, Daniel Brown and Molly Xu. In addition to suggesting a legal framework for judges to determine if proffered evidence is Real or Fake, the article provides good background on generative AI. It does so in an entertaining way, touching on a wide variety of issues.

The evidentiary issues raised by generative AI and deepfakes, and the analysis of the federal rules, are the parts of their article that interest me the most. Their proposed legal framework for adjudication of authenticity is excellent. It deserves attention by judges and arbitrators everywhere. ‘Real or Fake’ is not just a meme; it is an important issue of the day, both in the law and in general culture. Justice depends on Truth, on true facts, on reality. Justice is difficult, perhaps impossible, to attain when lies and fakes confuse the courtroom.

Before I go into the article, let’s play the ‘Real or Fake’ game now sweeping the Internet, with some pictures of the two lead authors, both public figures, Paul Grimm and Maura Grossman. What do you think: which of these pictures are Real and which are Fake? There will be more tests like this to come. Leave a comment with your best guesses, or use your AI to get in touch with my AI.

Introduction

The GPTJUDGE: Justice in a Generative AI World will be published in October 2023 in Vol. 23, Iss. 1 of the Duke Law & Technology Review. Maura Grossman was kind enough to provide me with an author’s advance version. My good colleague Doug Austin of eDiscovery Today has already written a good summary of the entire twenty-six-page article, which is very ambitious and covers Generative AI and the law from A to Z. I recommend you read Doug’s article, or better yet, read the whole GPTJudge article for yourself.

Unlike Doug’s article, I will, as mentioned, focus on only one part of the article: the part of The GPTJudge, found at pages 12-18, which addresses the thorny evidentiary issues concerning LLM AI. Is the evidence offered real or fake? What rules govern these issues? And what should a judge, or in my case an arbitrator, do when these issues arise?

Although Doug Austin’s article is a real article, not a fake, it appears to me that Doug did have a wee bit of help in writing from the devil itself here, namely ChatGPT-4. He carefully reviewed and edited the Generative AI’s work, I am sure. It is a fine article, but it has a familiar ring. Parts of my article will also have that familiar generative AI tone. It is a real Ralph Losey writing, not a fake. I am pretty certain of that. But, truth be told, I too used GPT AI, namely ChatGPT-4, to help me write this article. My own tiny human brain needs all the AI help it can get to accomplish the ambitious task I have set myself here of summarizing this complex corner of the Duke law review article. ChatGPT is a good writing tool, and so is the WordPress software that I also use to create these blogs. I now also use another tool to craft my blogs, a generative AI program called Midjourney. Here, for instance, Midjourney helped me create some pretty cool, but fake, images of Grimm and Grossman. Another ‘Real or Fake’ test will be coming soon, but first some more background on the lead authors, whom I know well. This is not to slight co-author Daniel G. Brown, PhD, a fellow professor with Maura Grossman at the David R. Cheriton School of Computer Science at the University of Waterloo, nor Professor Brown and Professor Grossman’s undergraduate student who helped, Molly (Yiming) Xu.

Lead Authors, the Very Real Paul Grimm and Maura Grossman

One of the lead authors of The GPTJudge, Paul Grimm, was until just recently a real District Court Judge in Baltimore. He is now a Professor at Duke Law and Director of the Bolch Judicial Institute, and, as all who know him will agree, a truly outstanding, very real person. Paul Grimm is without question one of the top judicial scholars of our time, especially when it comes to evidentiary issues. I have had the privilege of listening to him speak many times and even teaching with him a few times. He was even kind enough to write the Foreword to one of my books.

Now back to ‘Real or Fake,’ starring Paul Grimm. You be the judge.

The lead author of The GPTJUDGE: Justice in a Generative AI World is a top expert in law and technology, and a friend, Maura Grossman, PhD. She is now a practicing attorney with her own law and consulting firm, Maura Grossman Law, in Buffalo, NY, a Special Master for courts throughout the country, a Research Professor in the School of Computer Science at the University of Waterloo, an Adjunct Professor at Osgoode Hall Law School of York University, and an affiliate faculty member of the Vector Institute in Ontario. Phew! How does she do it all? I suspect substantial help from AI. Now it is Maura’s turn to be the subject of my ‘Real or Fake’ quiz; then we will get into the article proper.

What do you think, ‘Real or Fake’ Maura Grossman?

One more thing before I begin: Judge Grimm and Maura Grossman have recently written another article that you should also put on your must-read pile, or better yet, click the link and see it now and bookmark it: Paul W. Grimm, Maura R. Grossman, and Gordon V. Cormack, Artificial Intelligence as Evidence, 19 Nw. J. Tech. & Intell. Prop. 9, 84 (2021). The well-known information scientist, and Maura’s husband, Gordon Cormack, is also an author. So you know the technical AI details in all of these articles are top notch.

Summary of the Segment of the Law Review Article, The GPTJudge, Covering Evidentiary Issues

Judges will soon be required to routinely face the issues raised by evidence created by generative AI and by any evidence alleged to be fake. This will force judges to assess the authenticity and admissibility of evidence challenged as inaccurate or as a potential deepfake. The existing Federal Rules of Evidence and their state counterparts can, the authors contend, be flexibly applied to accommodate the emerging AI technology. Although not covered by the article, in my personal opinion the rules governing arbitrations are also flexible enough. The authors contend, and I generally agree, that it is infeasible to amend these rules for every new technological development, such as deepfakes, due to the time-consuming revision process required to amend federal rules. Paul Grimm should know, as he used to be on the rules committee that revises the Federal Rules of Civil Procedure.

The admissibility of AI evidence hinges on several key areas under the Federal Rules of Evidence: relevance (401), authenticity (901 and 902), the judge’s gatekeeper role in evaluating evidence (104(a)), the jury’s role in determining the authenticity of contested evidence (104(b)), and the need to exclude prejudicial evidence, even if relevant (403).

Judges should adapt the rules to allow for their application to new technologies like AI, without rigidly adhering to them, to promote the development of evidence law. Federal Rule of Evidence 702, which pertains to scientific, technical, and specialized evidence, requires judges to ensure such evidence is based on sufficient facts, reliable methodology, and has been applied accurately to the case.

These evidence rules provide judges and lawyers with enough guidance. Arbitrators probably do not need special new rules either, especially because there are no juries in arbitration.

It is important when assessing potential GPT AI evidence, or alleged deepfake evidence, that judges pay close attention to the rule requiring the exclusion of evidence that could lead to unfair prejudice (Rule 403). This stresses the importance of ensuring that such evidence is both valid and reliable before being presented to the jury. This rule obviously has no direct application to arbitration, but still, arbitrators must take care they are not fooled by fakes.

When evaluating the authenticity of potential evidence, including deepfakes or other disputed evidence, judges should refer to Federal Rule of Evidence 702 and the Daubert factors to assess the evidence’s validity and reliability. Careful consideration of the potential for unfair prejudice that can occur with the introduction of unreliable technical evidence is of prime importance. The authors stress that admissibility should not hinge solely on whether the evidence is more likely than not to be genuine (the preponderance standard), but should also depend on the potential risks or negative outcomes if the evidence is proven fake, or insufficiently valid and reliable. Evidence should be excluded if its authenticity is doubtful and the risk of unfair or incorrect outcomes is high. Again, the presence or absence of a jury tempers this risk.

So how are judges supposed to make the call on ‘Real or Fake’? Judge Paul Grimm and Maura Grossman recommend that judges follow three steps to make these determinations:

  1. Scheduling Order: Judges should set a deadline in their scheduling order for parties intending to introduce potential GPT AI evidence to disclose this to the opposing party and the court well in advance. This allows the opposing counsel time to decide whether they want to challenge the admissibility of the evidence and seek necessary discovery.
  2. The Hearing: When there’s a challenge to the admissibility of the evidence as AI-generated or deepfake, judges should set an evidentiary hearing on the testimony and other evidence needed to rule on the admissibility. This hearing should be scheduled significantly ahead of the trial to allow the judge enough time to evaluate the evidentiary record and make a ruling.
  3. The Ruling: Following the hearing, judges should carefully consider the evidence and arguments presented during the hearing and issue a ruling. This ruling should assess whether the proponent of the evidence has adequately authenticated it. The judge should address the relevance, authenticity, and prejudice arguments. Special attention should be given to the validity and reliability of the challenged evidence, weighing its relevance against the risk of an unfair or excessively prejudicial outcome.

This is good advice for any judge facing these issues, and for arbitrators too, as well as attorneys and litigants. Everyone should walk through these three simple steps to escape the fake traps that generative AI can create.

Conclusion

I commend the entire law review article for your reading, but especially the section on evidentiary issues. This section, pages 12-18, was, for me, particularly interesting and helpful. Also see the earlier article by Grimm, Grossman, and Cormack, Artificial Intelligence as Evidence. Judges and arbitrators will all soon be facing many challenges regarding the authenticity and admissibility of evidence related to AI. ‘Real or Fake’ may be a key question of our times.

The authors insist, and I somewhat reluctantly agree, that it’s not practical to amend the existing Federal Rules of Evidence for every new technological development. Instead, as the authors also point out, and I again agree, these rules provide a flexible framework. Our rules already allow judges to evaluate factors like relevance, authenticity, and potential prejudice of AI-generated evidence. Rule 702 in particular is crucial because it requires that scientific, technical, or specialized evidence be based on sufficient facts, reliable methodology, and a sound application to the case at hand. The same situation applies to arbitrations, although arbitrations are typically more informal and there are no juries. Still, arbitrators should be on the lookout for fake or unreliable evidence and look to the federal evidence rules for general guidance.

Judge Grimm and Dr. Grossman propose a three-step process to guide judges when handling potential GPT AI or deepfake evidence. I like the simple three-step procedure proposed. There is more to it than described in this summary, of course. You need to read the whole article – THE GPTJUDGE: JUSTICE IN A GENERATIVE AI WORLD.

I urge other lawyers, arbitrators and judges to try out the three-steps proposed when they face these issues. The just, speedy and inexpensive resolution of disputes must remain the polestar of all our dispute resolution. These suggestions, if employed in a reasoned and prudent manner, can help us to do that. Ensuring the reliability of evidence is important because Justice arises out of truth, not lies and fakes.

All of the Fake Images Here are by Losey and Midjourney with profuse apologies to Judge Grimm.

Copyright Ralph Losey 2023, to text and fake images only.