What if justice had a shape — not rigid scales or a blindfolded figure, but a living, dynamic map? Imagine causation as a multidimensional space, where influence, control, and responsibility could be mapped across a moving legal landscape, like tides over a reef. That is the vision behind Topological Jurisprudence, a framework first glimpsed through work with applied mathematics using advanced AI — now, for the first time, including ChatGPT-5.
Underwater topological network — visualizing justice as tides over a reef. Created by Losey using multiple AI tools.
Using topological network mapping, we set out to see if this next-generation AI can turn an abstract mathematical concept into a practical tool for mapping fault in law’s most complex disputes.
Introducing Topological Jurisprudence — a near-future legal-tech vision where mathematics meets liability analysis. Image by Losey using multiple AI tools.
This idea may sound like science fiction, but it is grounded in topological data analysis (TDA), a branch of applied mathematics built to reveal patterns and relationships in complex, evolving systems. Here, the courtroom meets the mathematics of shape and flow. In the pages ahead, I move from visual allure to substantive potential: can AI-driven topology bring clarity to the most complex causation disputes of our digital age?
And this time, the question is tested with tools that didn’t exist when I began this journey. In Epiphanies or Illusions, Parts One and Two, I explored whether AI could find meaningful cross-domain patterns at all. Now, with GPT-5’s leap in cross-disciplinary synthesis, I can ask something new: not just “can it see the pattern?” — but “can it carry that insight into a framework lawyers can actually use in court?”
Visual composition set to original music to get a ‘left and right brain’ feel for this new approach. It shows the curvature and flowing connectivity at the heart of the topological perspective.
From Epiphany to Application
The Shape of Justice is my first major legal-technology project to apply GPT-5’s markedly improved abilities in cross-disciplinary pattern recognition and synthesis.
WHY GPT-5 MATTERS: GPT-5 can keep legal rules precise while integrating mathematics, simulations, and causation theory into a single, usable framework. Earlier models could suggest patterns; GPT-5 can carry them through to courtroom-ready analysis.
Where GPT-4o could propose intriguing links, GPT-5 can integrate them into a coherent, working framework without losing the rigor of either discipline. Here, that means fusing the mathematics of topology with the logic of proximate cause, comparative fault, and courtroom evidence.
The result is a practical tool — Topological Jurisprudence — that can map complex causation in a way static diagrams and bullet-point briefs cannot. It can show where fault originates, where it converges, and, as the case study ahead demonstrates, where it never touches a party at all.
A judge contemplates the multidimensional shapes of causation — where legal reasoning meets mathematical mapping. Image by Losey using Visual Muse.
Why Traditional Tools Break Down
In simple negligence, causation is a straight line:
Defendant A acted → Plaintiff B was harmed → liability follows.
But complex, multi-actor disputes — especially in high-tech contexts — rarely follow neat chains.
Consider:
Autonomous vehicle accidents with components and services from multiple companies.
Blockchain collapses involving code, governance votes, and trading patterns.
International supply chain contamination where the source could be anywhere in a dozen linked facilities.
Traditional “branch tree” diagrams are static. They struggle with systems where relationships change over time or where multiple causes converge unexpectedly.
Multidimensional accident causation map — showing how a topological model can reveal decisive interactions in complex systems. Even as the network evolves, stable patterns guide experts in assigning fault tied to legal duties and foreseeability. Video by Losey using AI.
What Topological Jurisprudence Brings
Topology is a branch of geometry that studies how points and connections form patterns that persist even as shapes stretch or shift.
Topological data analysis (TDA) applies this to complex datasets — finding relationships, clusters, and gaps that remain significant across different scales and conditions.
Dynamic topological network visualization — illustrating how relationships between actors shift over time, while the underlying structure of causation remains clear. Such visual models can help experts explain liability allocation in complex, multi-party disputes. Video by Losey using AI.
For lawyers, think of it this way:
Nodes = parties, devices, software modules.
Edges = relationships between them — contracts, data flows, communications.
Attributes = details on each connection — timestamps, amounts, governing law.
Time layers = how nodes and edges change over time.
In litigation, this means you can see:
Where fault starts.
How multiple causes interact.
Whether a party’s conduct ever intersected with the harm.
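For the technically inclined, the node-edge-attribute model above can be sketched in a few lines of plain Python. Everything below is hypothetical and invented for illustration (the node names and timestamps merely echo the kinds of actors discussed here); a production tool would use a dedicated graph library and real discovery data:

```python
from dataclasses import dataclass, field

@dataclass
class CausationGraph:
    """Nodes are parties, devices, or software modules; directed edges
    are relationships carrying attributes such as timestamps."""
    edges: dict = field(default_factory=dict)  # source -> [(target, attrs)]

    def add_edge(self, source, target, **attrs):
        self.edges.setdefault(source, []).append((target, attrs))

    def reachable(self, start, goal):
        """Depth-first search: does any causal lane connect start to goal?"""
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(target for target, _ in self.edges.get(node, []))
        return False

# Hypothetical edges, invented for illustration only.
g = CausationGraph()
g.add_edge("SensorCo LiDAR", "NaviAuto Stack", kind="sensor data", t="12:44:41")
g.add_edge("NaviAuto Stack", "Harm", kind="delayed classification", t="12:44:42")
g.add_edge("GeoMaps API", "Harm", kind="missing hazard alert", t="12:44:11")

print(g.reachable("NaviAuto Stack", "Harm"))  # True: conduct intersects the harm
print(g.reachable("Alset Braking", "Harm"))   # False: no causal lane from Alset
```

The point is the structure: once parties, devices, and data flows become nodes and edges with attributes, a question like "does any causal lane connect this party to the harm?" becomes a simple graph query.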
Seeing the whole picture. Video by Ralph Losey using Gemini, etc.
Case Study: The Autonomous Vehicle Pile-Up
Scenario:
A self-driving sedan manufactured by Alset Motors is involved in a multi-car collision. Eight claimants. Seven corporate defendants.
Jurisdiction: Florida, pure comparative negligence; no joint-and-several for ordinary negligence.
Actors
Alset Motors – Vehicle manufacturer. All core systems and base software worked flawlessly.
NaviAuto Corp. – Supplier of navigation and hazard-avoidance subsystem (Perception Stack v3.1.2).
SensorCo – Supplier of LiDAR S-200 hardware used in NaviAuto’s subsystem.
GeoMaps Inc. – Provider of real-time mapping and hazard alerts via API.
Topological crash analysis in action — the model processes sensor data, system logs, and contractual links to produce a clear allocation of liability under Florida’s comparative fault rules. Video by Losey using AI.
Topological Causation Pathway
Impact (12:44:42 EDT): Alset’s braking system engages late but functions perfectly when commanded.
Internal Failure (NaviAuto): Humidity causes LiDAR S-200 data to be misread by v3.1.2, creating a ~700 ms hazard-classification delay.
External Failure (GeoMaps): API outage (HTTP 503) at 12:44:11 EDT prevents a hazard alert from reaching NaviAuto’s subsystem, removing a critical redundancy.
Convergence: Two independent failures — one internal to NaviAuto, one external to GeoMaps — remove both the primary and backup hazard-mitigation layers.
Topological takeaway:
The causal lanes merge before the control signal reaches Alset’s braking system. Alset’s systems respond exactly as designed; no proximate cause is traceable to the OEM.
Dynamic Topological Liability Map — real-time visualization of actors, data flows, and causal links in a multi-party dispute. Is such a tool under construction? Video by Losey.
Counterfactual Stress Test
(Applying “but for” causation with measurable inputs)
If GeoMaps’ warning had arrived within 2 seconds: The driver-assistance system would have reduced speed by ~8% within 1.5 seconds, avoiding impact in 94% of simulated runs.
If NaviAuto’s delay were under 250 ms: The vehicle would have stopped short even without GeoMaps’ alert in 91% of runs.
Simulations were generated by adjusting one causal factor at a time within the topological model — like holding one defendant’s conduct constant while testing the others.
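That one-factor-at-a-time method can be sketched as a toy Monte Carlo in Python. Every threshold, distribution, and function here is invented for illustration only; a real analysis would rest on validated vehicle-dynamics simulations and the actual sensor logs:

```python
import random

def crash_occurs(alert_latency_s, classification_delay_ms, rng):
    """Toy model: impact is avoided if either safety layer acts in time.
    Thresholds and noise are hypothetical, for illustration only."""
    noise = rng.gauss(0, 0.1)
    timely_alert = alert_latency_s + noise < 2.0           # GeoMaps redundancy
    timely_classification = classification_delay_ms < 250  # NaviAuto stack
    return not (timely_alert or timely_classification)

def crash_rate(alert_latency_s, classification_delay_ms, runs=10_000, seed=42):
    """Fraction of simulated runs that end in a collision."""
    rng = random.Random(seed)
    crashes = sum(
        crash_occurs(alert_latency_s, classification_delay_ms, rng)
        for _ in range(runs)
    )
    return crashes / runs

# Hold one defendant's conduct constant while testing the other:
baseline = crash_rate(alert_latency_s=999, classification_delay_ms=700)  # both fail
fixed_geomaps = crash_rate(alert_latency_s=1.5, classification_delay_ms=700)
fixed_naviauto = crash_rate(alert_latency_s=999, classification_delay_ms=200)
print(baseline, fixed_geomaps, fixed_naviauto)
```

Varying a single input while freezing the rest is exactly the "but for" question restated in code: the crash rate collapses whenever either failure is cured.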
Settlement negotiation using topological evidence — a lawyer presents the dynamic liability map to opposing parties, visually demonstrating fault distribution and strengthening the case for a favorable settlement. Video by Losey.
Conclusion for the Court
The collision was structurally inevitable due to:
A software-hardware integration defect in NaviAuto’s perception system.
A simultaneous outage of GeoMaps’ safety API.
Each failure removed a layer of hazard mitigation. Together they created a causal chain that no reasonable driver or automated system could have avoided.
Alset Motors is fully exonerated: no defect, no breach, no causation.
Final courtroom presentation of topological evidence — after last-minute tech checks, counsel presents the TDA liability map to the judge. The heartbeat underscores the stakes, and the post-verdict celebration shows the power of hybrid multimodal legal advocacy combined with advanced visualization tools. Video by Losey.
NaviAuto Corp.: 65% — Flawed perception-stack integration created the decisive hazard-classification delay.
GeoMaps Inc.: 30% — Outage removed redundancy, making the delay outcome-determinative.
SensorCo: 5% — Hardware performed to spec but catalyzed NaviAuto’s flawed integration.
Alset Motors: 0% — No fault; systems worked as intended.
Rationale:
Allocation weights each party’s control over its failure point, proximity to harm, and foreseeability — the same factors Florida juries consider under comparative fault.
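To picture how such a weighting might be computed, here is a hedged sketch in Python. The factor scores below are invented for illustration and are not the basis of the allocation above; the mechanism, not the numbers, is the point:

```python
# Hypothetical factor scores (0 to 1) for each party's control over its
# failure point, proximity to the harm, and foreseeability.
# These values are invented for illustration only.
factors = {
    "NaviAuto": {"control": 0.9, "proximity": 0.9, "foreseeability": 0.8},
    "GeoMaps":  {"control": 0.7, "proximity": 0.5, "foreseeability": 0.4},
    "SensorCo": {"control": 0.2, "proximity": 0.1, "foreseeability": 0.1},
    "Alset":    {"control": 0.0, "proximity": 0.0, "foreseeability": 0.0},
}

# Sum each party's factors, then normalize to percentage shares of fault.
raw = {party: sum(scores.values()) for party, scores in factors.items()}
total = sum(raw.values())
shares = {party: round(100 * score / total) for party, score in raw.items()}
print(shares)
```

Scoring and normalizing this way yields a transparent percentage split that an expert can defend factor by factor, rather than a number pulled from the air.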
Judicial ruling following topological evidence presentation — the judge delivers a detailed decision affirming the admissibility and weight of the TDA-based causation map, and adopts its allocation of fault in the final award. Video and words by Losey.
Damages (Assume $10M Total)
NaviAuto: $6,500,000
GeoMaps: $3,000,000
SensorCo: $500,000
Alset Motors: $0
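The arithmetic behind this table is a simple pro-rata application of each fault percentage to the $10M total, which a short Python sketch can verify:

```python
total_damages = 10_000_000
# Fault shares from the allocation above; Alset bears none.
fault = {"NaviAuto": 0.65, "GeoMaps": 0.30, "SensorCo": 0.05, "Alset Motors": 0.0}

# Under pure comparative negligence, each defendant pays only its share.
awards = {party: round(total_damages * share) for party, share in fault.items()}
for party, amount in awards.items():
    print(f"{party}: ${amount:,}")  # e.g. NaviAuto: $6,500,000
```

With no joint-and-several liability for ordinary negligence, no defendant absorbs another's share, and a zero-fault party pays nothing.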
Topological network mapping can show where fault originates, where it converges, and where it never touches a party at all.
Important Legal Context (Florida Law)
Florida Statute 768.81 applies pure comparative negligence and eliminates joint-and-several liability for ordinary negligence.
Implications here:
Each defendant is only responsible for its percentage of fault.
In multi-defendant, high-tech litigation, precise apportionment supported by dynamic mapping can prevent an innocent party from being unfairly burdened.
Topological network map illustrating liability allocation in the Autonomous Vehicle Pile-Up hypothetical. Alset (blue) is enmeshed in the system but not causally connected to the damages node (red). Created by Losey using Sora AI.
Conclusion: From Hypothesis to Legal Breakthrough
In Epiphanies or Illusions, Parts One and Two, the question was whether AI could uncover genuinely new patterns across fields — epiphanies that expand understanding rather than seductive illusions. Those articles were written before GPT-5 existed.
Now it does.
The Shape of Justice is the first application of my hybrid-multimodal method with GPT-5 in a legal context. The improvement is clear: GPT-5 doesn’t just suggest connections; it carries them into a coherent framework that respects the rigor of each domain. Here, it fused the geometry of topology with the doctrines of causation and comparative fault to produce a dynamic liability map that stands up to both mathematical and legal scrutiny.
As the closing scene suggests, it points to a future where human advocates and AI work side-by-side — reading the same evidence, interpreting the same patterns, and building cases together.
Human–AI collaboration in legal analysis — a lawyer and AI assistant study a live topological map, reflecting the potential for joint problem-solving in complex cases. Video by Losey and AI.
Using text prompting, structural mapping, simulation modeling, and seasoned legal analysis, GPT-5 and Losey are now building tools together that can:
Precisely locate the origin of each causal lane.
Map where those lanes converge in time and effect.
Quantify “what if” outcomes.
Support evidence-based liability allocation.
Fully exonerate a blameless defendant.
The Autonomous Vehicle Pile-Up example is synthetic, but the principle is not. With real discovery data, the same method could be applied to actual cases, giving lawyers, judges, and mediators a clearer, more persuasive picture of complex causation.
What examining causality and negotiating resolution may look like in the future — the Shape of Justice will differ in every case, but topological tools could become a standard part of how lawyers assess evidence and reach outcomes. Video by Losey using multiple AIs.
The question I asked in Parts One and Two was whether AI could help us tell the difference between genuine insight and comfortable illusion. With GPT-5, at least in this domain, I think we have our answer.
The shape of justice is not a scale. It’s a flowing, multidimensional space — a living structure where facts, causes, and consequences map across the legal landscape like tides over a reef.
Beneath the surface, patterns ripple through law and technology—some true, some imagined. The quest is knowing which is which. Video by Losey using AI.
PODCAST
As usual, we give the last words to the Gemini AI podcasters who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and provide some good insights. This episode is called Echoes of AI: The Shape of Justice: How Topographic Network Mapping Could Transform Legal Practice. Hear the young AIs talk about this article for 20 minutes. They wrote the podcast, not me.
From Hypothetical to Real-World
With discovery data, topological network mapping clarifies causation for courts and neutrals and helps prevent liability from attaching to innocent parties. Losey.ai is building GPT‑5 tools now and welcomes TDA mathematicians to collaborate.
by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.
George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.
Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.
Memory Holes
In Orwell’s 1984, a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.
Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Touch three buttons at once, ctrl-alt-delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group, or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.”
China’s Attempted Erasure of Tiananmen Square
In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Young generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.
Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.
This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.
Hope Through Digital Redundancy and Psychological Resilience
Orwell could not conceive of our digital world, where truth is endlessly multiplied, freely copied, and stored globally. Thousands or millions of digital copies safeguard history, making complete erasure nearly impossible.
If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.
Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves.
Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:
Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
Independent Journalism: Support credible journalism to provide trustworthy reference points.
Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.
The Digital Memory Holes Today
Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber‘s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)
Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together. For instance, the surprise attack of Pearl Harbor, the horrors of fighting the Nazis and others in WWII and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.
Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.
Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of the Atlantic and others. Although some recent small attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government in place for centuries, and it has always had a two-party system. Even the Chinese government, where only one party, the Communist Party, has ever been allowed, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.
The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.
According to an NPR report, the Internet Archive has copies of all of the government websites that were taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and it has since been taken down from the committee’s website. No doubt there are now many, many copies of it online, especially on the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.
This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” (Internet Archive, 3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See e.g. e-Discovery Team Site Map for the year 2024.
I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.
This is the Internet Archive explanation of the interactive map:
This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.
It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted, and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.
It remains to be seen what role the LLMs’ vacuuming up of internet data will play in all this. They have been trained at specific times on Internet data, and presumably all of the original training data is still preserved. Along those lines, note that the below image was created by ChatGPT-4o based on a request to show a misinformation image, and it generated the classic Tiananmen Square image on the right. It knows the truth.
Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus, its intelligence and resilience improve every week.
Conclusion
Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.
Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.
The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.
The November 8, 2024 meeting of the Evidence Committee made it clear that the august members of the committee do not believe our warnings. They will do little or nothing to protect our system of justice from the oncoming storm of deepfake justice. They think it is a fake problem and that Judge Paul Grimm (ret.) and Professor Maura Grossman are wrong. This is not unexpected. Losey, The Problem of Deepfakes and AI-Generated Evidence: Is it time to revise the rules of evidence? Part One and Part Two. Here is a deepfake video of me talking about the committee and deepfake videos.
Real deepfake videos claim to be true and are far more convincing than this one.
Check out the EDRM CLE on DeepFakes on December 5, 2024 for more information. Ralph (the real one) appears on a panel with Judge Ralph Artigliere (ret.) and Professor Maura Grossman. Bottom line: we must all be very diligent and learn as much as we can about fake videos and what to do when you are hit with one. Also, what to do if your client presents you with a video too good to be true or otherwise suspect. We are now living in a world of “liar’s dividend” and it is hitting our courts now.
Ralph Losey Copyright 2024. — All Rights Reserved.
This is the conclusion to a two-part article. Please read Part One first.
Professor Capra explains the proposals of Judge Grimm and Professor Grossman to modify Rule 901(b) to authenticate AI generated evidence by using Maura’s broken clock analogy:
The proposed revision substitutes the words “valid” and “reliable” for “accurate” in existing rule 901(b)(9), because evidence can be “accurate” in some instances but inaccurate in others (such as a broken watch, which “accurately” tells the time twice a day but is not a reliable means of checking the time otherwise).
‘Broken clocks right twice a day’ image in style of Salvador Dali by Ralph Losey
Maura Grossman provided further explanation in her presentation to the Committee on why they recommended replacing the term accurate with reliable and valid.
PROF. GROSSMAN. I want to talk about language because I’m a real stickler about words, and I’ll talk to you about the way science has viewed AI. There are two different concepts. One is validity. We don’t use the word “accuracy.” And the other is reliability. Validity is: does the process measure or predict what it’s supposed to measure? So, I can have a perfectly good scale, but if I’m trying to measure height, then a scale is not a valid measure for height. Reliability has to do with “does it measure the same thing under substantially similar circumstances?” And it’s really important that we measure validity and reliability and not “accuracy” because a broken watch is accurate twice a day, right? But it’s not reliable.
So, for those of you who are more visual, when you’re valid and you’re reliable, you’re shooting at the target, and you are consistent. When you’re invalid and unreliable, you’re not shooting at the center, and you’re all over the place. When you’re invalid and reliable, you’re shooting at the wrong place, but you’re very consistent in shooting at the wrong place. And when you’re valid and unreliable, you are shooting at the center, but you’re all over the place.
We need evidence that is a product of a process that is both valid and reliable. Right now, the rules use the word “accuracy” or “accurate” in some places (such as in Rule 901(b)(9)) and “reliable” in other places (such as in Rule 702), and I think it’s confusing to practitioners because it doesn’t comport with what scientists mean by these words or how they’re used if you look them up in the dictionary.
As to the second proposal of Grimm and Grossman to add a new Rule 901(c) to address “Deepfakes,” Professor Capra did not like that one either. He rejected the proposal with the following argument.
It would seem that resolving the argument about the necessity of the rule should probably be delayed until courts actually start dealing on a regular basis with deepfakes. Only then can it be determined how necessary a rule amendment really is. Moreover, the possible prevalence of deepfakes might be countered in court by the use of watermarks and hash fingerprints that will assure authenticity (as discussed below). Again, the effectiveness of these countermeasures will only be determined after a waiting period.
The balancing test in the proposal–applied when the burden-shifting trigger is met–is that the “probative value” must outweigh the prejudicial effect. It can be argued that importing this standard confuses authenticity with probative value. . . . Put another way, the probative value of the evidence can only logically be assessed after it is determined to be authentic. Having authenticity depend on probative value is a pretty complicated endeavor. Moreover, presumably the prejudice referred to is that the item might be a deepfake. But if the proponent can establish that it is authentic, then there would be no prejudice to weigh. . . . At any rate, more discussion in the Committee is necessary to figure out whether, if there is going to be an amendment, what requirement must be placed on the proponent once the opponent shows enough to justify a deepfake inquiry.
Digital Art image by Ralph Losey using Visual Muse
From the record it appears that Grimm and Grossman were not given an opportunity to respond to these criticisms. So once again the Committee followed Professor Capra’s lead, and all of the rule changes they proposed were rejected. With respect, I think Dan Capra again missed the point. Authentic evidence can already be withheld as too prejudicial under current Federal Evidence Rule 403 (Excluding Relevant Evidence for Prejudice, Confusion, Waste of Time, or Other Reasons). But it is the process and interpretation of the existing rules that is too complex. That is a core reason for the Grimm and Grossman proposals.
Moreover, in the world of deepfakes things are not as black and white as Capra’s analysis assumes. The authenticity of audiovisuals is often a gray-area question, a continuum, not a simple yes or no. The Committee’s decisions would benefit from the input of additional, independent technology advisors on the rapidly advancing field of AI image generation.
The balancing procedure Grimm and Grossman suggested is appropriate. If authenticity is a close question and the prejudice is small, it makes sense to let the evidence in. If authenticity is a close question and the prejudice is great, perhaps even outcome determinative, then exclude it. And of course, if the proof of authenticity is strong and the probative value is strong, even outcome determinative, then the evidence should be allowed. The other side of the coin is that if the evidence is strong that the video is a fake, it should be excluded, even if that decision is outcome determinative.
Judge weighing the evidence in Art Deco style by Ralph Losey
Capra’s Questionable Evaluation of the Danger of Deepfakes
In his memorandum Professor Capra introduced the proposed rule changes with the following statement.
The consequence of not formally adopting the proposals below at this meeting is that any AI-related rule amendment will have to wait a year. One could argue that the Committee needs to act now, to get out ahead of what could be a sea change in the presentation of evidence. Yet there seems to be much merit in a cautious approach. To say that the area is fast-developing would be an understatement. The EU just recently scrapped its one-year-old regulations on AI, recognizing that many of the standards that were set had become outmoded. The case law on AI is just beginning. It surely makes sense to monitor the case law for (at least) a year to see how the courts handle AI-related evidence under the existing, flexible, Federal Rules.
Naturally the Committee went with what they were told was the cautious approach. But is doing nothing really a cautious approach? In times of crisis inaction is usually reckless, not cautious. Professor Capra’s views are appropriate for normal times, where you can wait a few years to see how new developments play out. But these are not normal times. Far from it.
We are seeing an acceleration of fraud, of fake everything, and a collapse of truth and honesty. Society has already been disrupted by rapid technical and social changes, and growing distrust of the judicial system. Fraud, propaganda and nihilistic relativism are rampant. What is the ground truth? How many people believe in an objective truth outside of the material sciences? How many do not even accept science? Is it not dangerous under these conditions to wait longer to try to curb the adverse impact of deepfakes?
Image of the serious questions raised by AI and Deepfakes. Image by Ralph Losey using Visual Muse
‘That Was Then, This Is Now’
There is little indication in Professor Capra’s reports that he appreciates the urgency of the times or the gravity of the problems created by deepfakes. The “Deepfake Defense” is more than a remote possibility. The lack of published opinions on deepfake evidence should not lull anyone into complacency. It is already being raised, especially in criminal cases.
Judge Dixon reports this defense was widely used in D.C. courts by individuals charged with storming the Capitol on January 6, 2021. The Committee needs more advisors like Judge Dixon. He wants new rules and his article The “Deepfake Defense” discusses three proposals: Grimm and Grossman’s, Delfino’s and LaMonaga’s. Here is Judge Dixon’s conclusion in his article:
As technology advances, deepfakes will improve and become more difficult to detect. Presently, the general population is not able to identify a deepfake created with current technology. AI technology has reached the stage where the technology needed to detect a deepfake must be more sophisticated than the technology that created the deepfake. So, in the absence of a uniform approach in the courtroom for the admission or exclusion of audio or video evidence where there are credible arguments on both sides that the evidence is fake or authentic, the default position, unfortunately, may be to let the jury decide.
Professor Capra addressed the new issues raised by electronic evidence decades ago by taking a go-slow approach and waiting to see if trial judges could use existing rules. That worked for him in the past, but that was then, this is now.
Courts in the past were able to adapt and used the old rules well enough. That does not mean their evidentiary decisions could not have been aided, then and now, by some revisions addressing digital versus paper evidence. But Capra assumes that because the courts adapted to digital evidence when it became common decades ago, his “wait and see” approach will work once again. He reminds the Committee of this in his memorandum:
In hindsight, it is fair to state that the Committee’s decision to forego amendments setting forth specific grounds for authenticating digital evidence was the prudent course. Courts have sensibly, and without extraordinary difficulty, applied the grounds of Rule 901 to determine the authenticity of digital evidence. . . .
The fact that the Committee decided not to promulgate special rules on digital communication is a relevant data point, but it is not necessarily dispositive of amending the rules to treat deepfakes.
Professor Capra will only say that the past decision to do nothing is “not necessarily dispositive” on AI. That implies it is pretty close to dispositive. Memorandum to the Committee at pgs. 8-9, 20- (pgs. 21-22, 33- of 358). The Professor and Committee do not seem to appreciate two things:
The enormous changes in society and the courts that have taken place since the world switched from paper to digital. That switch happened in the 1990s and the early 2000s. In 2024 we are living in a very different world.
The problem of deepfake audiovisuals is new. It is not equivalent to the problems courts have long faced with forged documents, electronic or paper. The change from paper to digital is not comparable to the change from natural to artificial intelligence. AI plays a completely different role in the cases now coming before the courts than has ever been seen before. Consider the words of Chief Justice John Roberts, Jr., in his 2023 Year-End Report:
Every year, I use the Year-End Report to speak to a major issue relevant to the whole federal court system. As 2023 draws to a close with breathless predictions about the future of Artificial Intelligence, some may wonder whether judges are about to become obsolete. I am sure we are not—but equally confident that technological changes will continue to transform our work. . . .
I predict that human judges will be around for a while. But with equal confidence I predict that judicial work—particularly at the trial level—will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.
Is it really prudent and cautious for the Evidence Rules Committee to take the same approach with AI deepfakes as it did many years ago with digital evidence? AI now plays a completely new role in the evidence of the cases that come before the courts. The emotional and prejudicial impact of deepfake audiovisuals is an entirely new and different problem. Plus, the times and circumstances in society have dramatically changed. The assumption made by Committee Reporter Capra that these technology changes are equivalent is a fundamental error. With respect, the Committee should reconsider and reverse its decision.
The assumption that the wait-and-see approach will work again with AI and deepfakes is another serious mistake. It is based on wishful thinking, not supported by evidence, that the cure for deepfakes is just around the corner and that new software will soon be able to detect them. It is also based on wishful thinking that trial judges will again be able to muddle through just fine. Judge Grimm, who recently retired as a very active District Court trial judge, disagrees. Judge Dixon, who still serves as a reserve senior trial judge in Washington, D.C., disagrees. So do many others. The current rules are a muddled mess that needs to be cleaned up now. With respect, the Committee should reconsider and reverse its decision.
Social Conditions and Questions Compelling Action
Everyone today carries a video camera/phone and has access to free software on the internet to make fakes. Maura’s demonstration to the Committee showed that. That is why many think the time is now for new rules on AI, not tomorrow.
What are the consequences of continued inaction? What if courts are unable to twist existing rules to screen out fake evidence as Professor Capra hopes? What will happen to our system of justice if use of fake media becomes a common litigation tactic? How will the Liar’s Dividend pay out? What happens when susceptible, untrained juries are required to view deepfakes and then asked to do the impossible and disregard them?
If we cannot reliably determine what is fake and what is true in a court of law, what happens then? Are we not then wide open and without judicial recourse to criminal and enemy state manipulation? Can law enforcement and the courts help stop deepfake lies and propaganda? Can we even have free and fair elections? How can courts function effectively without reliable rules and methods to expose deepfakes? Should we make some rule changes right away to protect the system from collapse? Or should we wait until it all starts to fall apart?
Image in Conceptual Art style by Ralph Losey using Visual Muse
Professor Capra’s Conclusion in his Report to the Committee
Professor Capra ends his report with a one paragraph conclusion here quoted in full.
It is for the Committee to decide whether it is necessary to develop a change to the Evidence Rules in order to deal with deepfakes. If some rule is to be proposed, it probably should not be a specific rule setting forth the methods in which visual evidence can be authenticated — as those methods are already in Rule 901, and the overlap would be problematic. Possibly more productive solutions include heightening the standard of proof, or requiring an additional showing of authenticity — but only after some showing by the opponent has been made. But any possible change must be evaluated with the perspective that the authenticity rules are flexible, and have been flexibly and sensibly applied by the courts to treat other forms of technological fakery.
I expect the Rules Committee will follow Capra’s advice and do nothing. But 2024 is not over yet and so there is still hope.
What Comes Next?
The next Advisory Committee on Evidence Rules is scheduled for November 8, 2024 in New York, NY and will be open to the public both in-person and online. While observers are welcome, they may only observe, not participate.
In addition, we have just learned that Paul Grimm and Maura Grossman have submitted a revised proposal to the Committee, which will be discussed first. This was presumably done at the request of Professor Daniel Capra after some sort of discussion, but that is just speculation.
Grimm and Grossman’s Revised Proposal to Amend the Rules
The revised proposal, which includes the extensive rationale provided by Grimm and Grossman, can be found online here in PDF format.
REVISED Proposed Modification of Current Fed. R. Evid. 901(b)(9) for AI Evidence and Proposed New Fed. R. Evid. 901(c) for Alleged “Deepfake” Evidence. Submitted by Paul W. Grimm and Maura R. Grossman
[901](b) Examples. The following are examples only—not a complete list—of evidence that satisfies the requirement [of Rule 901(a)]: (9) Evidence about a Process or System. For an item generated by a process or system: (A) evidence describing it and showing that it produces ~~an accurate~~ a valid and reliable result; and (B) if the proponent acknowledges that the item was generated by artificial intelligence, additional evidence that: (i) describes the training data and software or program that was used; and (ii) shows that they produced valid and reliable results in this instance.
Note the only change from the last proposal for 901(b)(9) is use of the word “acknowledges” instead of “concedes” in 9(B). I agree this is a good and necessary revision because litigators hate to “concede” anything, but often have to “acknowledge.”
The revised proposed language for a new Rule 901(c) to address deepfakes is now as follows:
901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, using artificial intelligence, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.
The changes made here from the last proposal were minor, but again appear helpful to clarify the intent. Here is the proposal showing strike outs and additions (underlined).
901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court ~~that it is more likely than not either fabricated, or altered~~ that a jury reasonably could find that the evidence has been altered or fabricated, in whole or in part, using artificial intelligence, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.
I understand why these revisions were made, perhaps requested, and I again think they are all good. So too is the Rationale provided by Judge Grimm and Professor Grossman. See the full second proposal Rationale here, but what follows are the excerpts of the Rationale that I found most helpful, all pertaining to new Rule 901(c):
A separate, new rule is needed for such altered or fake evidence, because when it is offered, the parties will disagree about the fundamental nature of the evidence. The opposing party will challenge the authenticity of the evidence and claim that it is AI-generated material, in whole or in part, and therefore, fake, while the proponent will insist that it is not AI-generated, but instead that it is simply a photograph or video (for example, one taken using a “smart phone”), or an audio recording (such as one left on voice mail), or an audiovisual recording (such as one filmed using a digital camera). Because the parties fundamentally disagree about the very nature of the evidence, the proposed rule change for authenticating acknowledged AI-generated evidence will not work. A separate, new rule is required. . . .
The proposed new rule places the burden on the party challenging the authenticity of computer-generated or electronic evidence as AI-generated material to make a showing to the court that a jury reasonably could find (but is not required to find) that it is either altered or fabricated, in whole or in part. This approach recognizes that the facts underlying whether the evidence is authentic or fake may be challenged, in which case the judge’s role under Fed. R. Evid. 104(a) is limited to preliminarily evaluating the evidence supporting and challenging authenticity, and determining whether a reasonable jury could find by a preponderance of the evidence that the proffered evidence is authentic. If the answer is “yes” then, pursuant to Fed. R. Evid. 104(b), the judge ordinarily would be required to submit the evidence to the jury under the doctrine of relevance conditioned upon a finding of fact, i.e., Fed. R. Evid. 104(b).
Because deepfakes are getting harder and harder to detect, and because they often can be so graphic or have such a profound impact that the jury may be unable to ignore or disregard the impact even of generative AI shown to be fake once they have already seen it, a new rule is warranted that places more limits on what evidence the jury will be allowed to see. See generally Taurus Myhand, Once The Jury Sees It, The Jury Can’t Unsee It: The Challenge Trial Judges Face When Authenticating Video Evidence in The Age of Deepfakes, 29 Widener L. Rev. 171, 174-75 (2023) (“The dangerousness of deepfake videos lie in the incomparable impact these videos have on human perception. Videos are not merely illustrative of a witnesses’ testimony, but often serve as independent sources of substantive information for the trier of fact. Since people tend to believe what they see, ‘images and other forms of digital media are often accepted at face value.’ ‘Regardless of what a person says, the ability to visualize something is uniquely believable.’ Video evidence is more cognitively and emotionally arousing to the trier of fact, giving the impression that they are observing activity or events more directly.” (internal citations omitted)).
If the judge is required by Fed. R. Evid. 104(b) to let the jury decide if image, audio, video, or audiovisual evidence is genuine or fake when there is evidence supporting each outcome, the jury is then in danger of being exposed to evidence that they cannot “un-remember,” even if the jurors have been warned or believe it may be fake. This presents an issue of potential prejudice that ordinarily would be addressed under Fed. R. Evid. 403. But Rule 403 assumes that the evidence is “relevant” in the first instance, and only then can the judge weigh its probative value against the danger of unfair prejudice. But when the very question of relevance turns on resolving disputed evidence, the current rules of evidence create an evidentiary “Catch 22”—the judge must let the jury see the disputed evidence on authenticity for their resolution of the authenticity challenge (see Fed. R. Evid. 104(b)), but that exposes them to a source of evidence that may irrevocably alter their perception of the case even if they find it to be inauthentic.
The proposed new Fed. R. Evid. 901(c) solves this “Catch 22” problem. It requires the party challenging the evidence as altered or fake to demonstrate to the judge that a reasonable jury could find that the challenged evidence has been altered or is fake. The judge is not required to make the finding that it is, only that a reasonable jury could so find. This is similar to the approach that the Supreme Court approved regarding Fed. R. Evid. 404(b) evidence (i.e., other crimes, wrongs, or acts evidence) in Huddleston v. U.S., 108 S. Ct. 1496, 1502 (1988) and the Third Circuit approved regarding Fed. R. Evid. 415 evidence (i.e., similar acts in civil cases involving sexual assault or child molestation) in Johnson v. Elk Lake School District, 283 F.3d 138, 143-44 (3d Cir. 2002).
Under the proposed new rule, if the judge makes the preliminary finding that a jury reasonably could find that the evidence has been altered or is fake, they would be permitted to exclude the evidence (without sending it to the jury), but only if the proponent of the evidence cannot show that its probative value exceeds its prejudicial impact. The proponent could make such a showing by offering additional facts that corroborate the information contained in the challenged image, video, audio, or audiovisual material. This is a fairer balancing test than Fed. R. Evid. 403, which leans strongly towards admissibility. Further, the proposed new balancing test already is recognized as appropriate in other circumstances. See, e.g., Fed. R. Evid 609(a)(1)(B) (requiring the court to permit a criminal defendant who testifies to be impeached with a prior felony conviction only if “the probative value of the evidence outweighs its prejudicial effect to that defendant.”)
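The burden-shifting procedure described in this Rationale can be summarized as a simple decision sequence. The sketch below is only my own illustrative reading of the proposal, not anything drawn from the rule’s text; the function name, parameters, and numeric weighing of probative value against prejudice are all hypothetical simplifications (real judges weigh these factors qualitatively).

```python
def rule_901c_admissibility(jury_could_find_fake: bool,
                            probative_value: float,
                            prejudicial_effect: float) -> str:
    """Illustrative sketch of the two-step test in proposed Fed. R. Evid. 901(c).

    Step 1: The challenging party must show that a reasonable jury COULD
    find the evidence altered or fabricated using AI (the trigger).
    Step 2: If triggered, the burden shifts to the proponent to show that
    probative value outweighs prejudicial effect.
    """
    if not jury_could_find_fake:
        # Trigger not met: ordinary Rule 901(a) authentication applies.
        return "admit under ordinary authentication rules"
    if probative_value > prejudicial_effect:
        # Proponent carries the shifted burden.
        return "admit"
    # Contrast with Rule 403, which excludes only when prejudice
    # SUBSTANTIALLY outweighs probative value; this test asks more
    # of the proponent once the trigger is met.
    return "exclude"
```

Note how the sketch reflects the proposal’s key design choice: the judge never has to find that the evidence *is* fake, only that a reasonable jury could so find, before the balancing step is reached.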
With respect, the Committee should approve this revised rule proposal and seek its approval and adoption by the U.S. Supreme Court as soon as possible. The rules should have retroactive implementation wherever feasible. They may be needed very soon as the new article Deepfakes in Court eloquently explains.
Protesters outside Supreme Court by Ralph Losey & Photoshop
Deepfakes in Court Article: Introduction and Perspective
Deepfakes in Court is a 52-page law review article authored by eight scholars: the Hon. Paul W. Grimm (ret.), Duke Law School, Duke University; Maura R. Grossman, David R. Cheriton School of Computer Science, University of Waterloo and Osgoode Hall Law School, York University; Abhishek Dalal, Pritzker School of Law, Northwestern University; Chongyang Gao, Northwestern University; Daniel W. Linna Jr., Pritzker School of Law & McCormick School of Engineering, Northwestern University; Chiara Pulice, Dept. of Computer Science & Buffett Institute for Global Affairs, Northwestern University; V.S. Subrahmanian, Dept. of Computer Science & Buffett Institute for Global Affairs, Northwestern University; and the Hon. John Tunheim, United States District Court for the District of Minnesota.
Deepfakes in Court considers how existing rules could be used to address deepfake evidence in sensitive trials, such as those concerning national security, elections, or other matters of significant public concern. A hypothetical scenario involves a Presidential election in 2028 where the court’s decision could determine the outcome of the election. The burden on judges in a crisis scenario like that would be lessened by the adoption of the revised Grimm and Grossman rule proposals. But if they are not adopted, the article shows how a national security case would play out under the existing rules.
The timeliness of this article is obvious in view of the pending national elections in the U.S. See e.g. Edlin and Norden, Foreign Adversaries Are Targeting the 2024 Election (Brennan Center for Justice, 8/20/24). Courtney Rozen of Bloomberg Law reports:
The rise of AI has supercharged bipartisan concerns about the possibility of deepfakes — manipulated images, audio, and video of humans — to sway voters ahead of the November elections. AI tools make it easier and cheaper to create deepfakes.
Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate. These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.
SEC. 3. Section 20012 is added to the Elections Code, to read: 20012. (a) The Legislature finds and declares as follows:
(1) California is entering its first-ever artificial intelligence (AI) election, in which disinformation powered by generative AI will pollute our information ecosystems like never before. Voters will not know what images, audio, or video they can trust.
(2) In a few clicks, using current technology, bad actors now have the power to create a false image of a candidate accepting a bribe, or a fake video of an elections official caught on tape saying that voting machines are not secure, or generate an artificial robocall in the Governor’s voice telling millions of Californians their voting site has changed.
Fake images could also be generated to try to support false information that a candidate promotes as true.
Horrible fake image of puppies outside an immigrant hut waiting to be cooked. By Ralph Losey using Visual Muse.
Description of the Deepfakes in Court Article
The Deepfakes in Court article by lead authors Grimm and Grossman begins by describing the growing concern over deepfakes—AI-generated media that can simulate real events, people, and speech with high accuracy. This is especially troubling in high-stakes cases involving national security and elections, where false or manipulated evidence could have severe consequences. The article makes this point well.
The article continues by noting how easy it is now to create AI-generated content. While some platforms include restrictions and watermarks to prevent misuse, these protections are often inadequate. Deepfake generation is sophisticated enough that even experts struggle to distinguish real from fake, and watermarking or digital signatures can often be bypassed. This creates a “cat and mouse” game between deepfake creators and those attempting to detect and prevent their misuse.
Connie v. Eric: The All Too Possible Case That Everyone Should Fear
The core of the article is a hypothetical case involving the two Presidential candidates in the last ninety days before the election. One, named Connie, has filed suit against her opponent, Eric. Connie seeks an injunction and other relief against Eric and his campaign. She alleges Eric is behind the creation and circulation of multiple deepfake videos and audios against her. The main ones show Connie having sex with a Chinese diplomat. In other videos she is shown soliciting bribes from Chinese officials. Still other videos show Connie’s supporters stuffing ballot boxes. All of the videos are very real looking and some are quite shocking. They are being circulated by thousands of bots across the internet.
Connie’s lawsuit seeks expedited adjudication and other injunctive relief within ninety days as to whether the videos are fake and whether Eric is behind them. Some of the jurors assigned to the case have already seen at least some of the videos; many have not. Can you imagine their reaction? Can they unsee the videos even if they later determine they are probably fake? Even if the judge tells them to disregard that evidence? What will the impact be?
Jurors will never unsee that scene, even if they decide it’s a Deepfake. Image by Ralph Losey in documentary photo style.
Since this is a hypothetical created by multiple professors the facts get even more complicated. Audios start to be circulated by Connie’s supporters where Eric is recorded saying “Wow! This technology is so good now it would be impossible for anyone to spot it as a fake.” There are more audios where he and his campaign make other damning admissions. All of the tapes sound exactly like Eric. He of course claims these audios are all fake and files counterclaims in the same lawsuit. Eric opposes a quick resolution of Connie’s lawsuit because he believes that overall, the videos help his campaign.
Of course, the circulation of these tapes and allegations leads to massive protests and further polarization of the country. The constant propaganda on both sides has triggered riots and violence between the two political parties and their supporters everywhere, but especially in the capital. Discussion about actual issues is drowned out by the allegations of fraud by both sides. These are very dark times, with daily shootings. The election is only ninety days away.
Capital protest image in photorealistic style by Ralph Losey using Visual Muse
This is a scary hypothetical set of facts showing how deepfakes can easily be weaponized in an election. The facts in the article are actually much more complicated than I have described. See pages 16-21 of Deepfakes in Court. It reminds me of a law school final exam from hell, but it does its job well of showing the dazzlingly complex situation and the challenges faced under the Rules of Evidence. Plus, you get to read the perfect answers of how the existing rules would work under this all too possible scenario. This is all described in pages 18-47 of Deepfakes in Court. I urge you, no, dare you, to read it. I am quite sure it was very challenging to write, even for the eight world authorities who prepared it.
What are the poor federal judges assigned to this case supposed to do? The article answers that question using the existing evidence rules. Let us hope real judges are not faced with this scenario, but if they are, then this article will provide a detailed roadmap as to how the case should proceed.
The GPTJudge Framework
The authors recommend a judge use what they call the “GPTJudge” framework when faced with deepfake issues, including expedited and active use of pre-trial conferences, focused discovery, and pre-trial evidentiary hearings. The framework includes expert testimony both before and during trial where experts would explain the underlying AI processes to the judge and help the court assess the reliability of the evidence. The idea is to show the possible application of existing rules to have a speedy trial on deepfake issues.
Photorealistic image using Visual Muse and Photoshop by Losey
The Deepfakes in Court article applies the existing rules and the GPTJudge framework to the facts and emergency scenario outlined in the hypothetical. It explains the many decisions that a judge would likely face, but not the predicted rulings, as some law school exams might request. The article also does not predict the ultimate outcome of the case, whether an injunction would issue, and if it did, what it would say. That is really not necessary or appropriate because in real life the exact rulings would depend on the witness testimony and countless other facts that the judge would hear before making a gatekeeper determination on showing the audiovisuals to the jury. The devil is always in the details. The devil’s power in this case is compounded by the wording of the old rules.
Given the ease with which anyone can create a convincing deepfake, courts should expect to see a flood of cases in which the parties allege that evidence is not real, but AI generated. Election interference is one example of a national security scenario in which deepfakes have important consequences. There is unlikely to be a technical solution to the deepfake problem. Most experts agree that neither watermarks nor deepfake detectors will completely solve the problem, and human experts are unlikely to fare much better. Courts will have no option, at least for the time being, other than to use the existing Federal Rules of Evidence to address deepfakes. The best approach will be for judges to proactively address disputes regarding alleged deepfakes, including through scheduling conferences, permitted discovery, and hearings to develop the factual and legal issues to resolve these disputes well before trial.
Even as several scholars propose amending the Federal Rules of Evidence in recognition of the threat posed by deepfake evidence, such changes are unlikely in the near future. Meanwhile, trial courts will require an interim solution as they grapple with AIM evidence. Rule 403 will play an important role: the party against whom an alleged deepfake is proffered may be able to make a compelling argument that it should be excluded because its probative value is substantially outweighed by the potential for unfair prejudice. Social science research shows that jurors may be swayed by audiovisual evidence even when they conclude that it is fake. This argument will be strongest when the alleged deepfake would lead the jury to decide the case based on emotion rather than on the merits.
Photorealistic image of jury watching a video by Ralph Losey using Visual Muse
Based on my long experience with people and courts, I am inclined to agree with the article’s conclusion. It may soon be obvious to the Rules Committee, from multiple botched cases, that all-too-human juries are ill-equipped to make deepfake determinations. See, e.g., footnotes 8-17 at pgs. 4-17 of Deepfakes in Court. Moreover, even the best of our judges may find it hopelessly complex and difficult to adjudicate deepfake cases under the existing rules.
Conclusion
Artificial intelligence, and its misuse to create deepfake propaganda, is evolving quickly. Highly realistic fabricated media can already convincingly distort reality. This will likely get worse, keeping us at risk of manipulation by criminals and foreign powers. It can even threaten our elections, as shown by Deepfakes in Court.
There must be legal recourse to stop this kind of fraud and so protect our basic freedoms. People must have good cause to believe in our judicial system, to have confidence that courts are a kind of protected sanctuary where truth can still be found. If truth cannot be reliably determined there, people will lose whatever little faith they still have in the courts, despite the open corruption by some. That could lead to widespread social disruption as deepfake-driven propaganda grows, along with the hate and persecution it brings about. If the courts cannot protect the people from the injustice of lying and fraud, what recourse will they have?
Protest in Washington in photo art style by Ralph Losey using Visual Muse
The upcoming Evidence Committee meeting is scheduled for November 8th, three days after election day on November 5th. What will our circumstances be then? What will the mood of the country be? What will be the mood, and the words, of the two candidates? Will the outcome even be known three days after the election? Will the country be calm? Or will shock, anger, and fear prevail? Will it even be possible for the Committee to meet in New York City on November 8th? And if it does meet, and approves new rules, will it be too little, too late?
Ralph Losey is an AI researcher, writer, tech-law expert, and former lawyer. He is also the CEO of Losey AI, LLC, which provides non-legal services, primarily education on AI and the creation of custom AI tools.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on AI, e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and Employment Law - Management.
Ralph is the proud father of two children and husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.