Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother

April 1, 2025

by Ralph Losey with illustrations also by Ralph using his Visual Muse AI. March 28, 2025.

George Orwell warned us in his dark masterpiece Nineteen Eighty-Four how effortlessly authoritarian regimes could erase inconvenient truths by tossing records into a “memory hole”—a pneumatic chute leading directly to incineration. Once burned, these facts ceased to exist, allowing Big Brother’s Ministry of Truth to rewrite reality without contradiction. This scenario was plausible in Orwell’s paper-bound world, where truth relied heavily on fragile documents and even more fragile human memory. History could be repeatedly altered by those in power, keeping citizens ignorant or indifferent—and ignorance strengthened the regime’s grip. Even more damaging, Orwell, whose real name, now nearly forgotten, was Eric Blair (1903-1950), envisioned how constant exposure to contradictory misinformation could numb citizens psychologically, leaving them passive and apathetic, unwilling or unable to distinguish truth from lies.

Fortunately, our paper-bound past is long behind us. Today, we inhabit a digital era Orwell never envisioned, where information is electronically stored, endlessly replicated, and globally dispersed. Electronically Stored Information (“ESI”) is simultaneously ephemeral and astonishingly resistant to permanent deletion. Instead of vanishing in smoke and ashes, digital truth multiplies exponentially—making it nearly impossible for any would-be Big Brother to bury reality forever. Yet, the same digital proliferation that safeguards truth also multiplies misinformation, posing the threat Orwell most feared: a confused and exhausted citizenry vulnerable to psychological manipulation.

Memory Holes

In Orwell’s Nineteen Eighty-Four, a totalitarian regime systematically altered historical records to maintain control over truth. Documents, photographs, and any inconvenient historical truths vanished permanently, as if they had never existed. Orwell’s literary nightmare finds unsettling parallels in today’s digital world, where online information can be silently modified, deleted, or rewritten without obvious traces. Modern memory hole practices pose real challenges for the preservation of accurate accounts of the past.

Today’s memory hole doesn’t rely on fire; it relies on code, and it doesn’t need a Big Brother bureaucracy. A simple click of a “delete” button instantly kills the targeted information. Touch three keys at once, Ctrl-Alt-Delete, and a whole system of beliefs is rebooted. Any government, corporation, hacker group, or individual can manipulate digital records effortlessly. Such ease breeds public skepticism and confusion—citizens become exhausted by contradictory narratives and lose confidence in their own perceptions of reality. Orwell’s warning becomes clear: constant misinformation risks eroding citizens’ psychological resilience, causing widespread apathy and helplessness. Yesterday’s obvious misstatement can become today’s truth. Think of the first sentence of Orwell’s book: “It was a bright cold day in April, and the clocks were striking thirteen.”

China’s Attempted Erasure of Tiananmen Square

In early June 1989, the Chinese military brutally suppressed pro-democracy protests in Beijing. The estimated death toll ranged from hundreds to thousands, but exact numbers remain uncertain due to intense state censorship. Public acknowledgment or commemoration of the incident is systematically banned, enforced by severe penalties including imprisonment. Government-controlled media remains silent or actively spreads misinformation. Chinese internet censorship tools—the so-called “Great Firewall”—vigorously scrub references to the Tiananmen Square incident, blocking web pages and posts containing related keywords and images. Young generations living in China remain unaware or possess distorted knowledge of the massacre, demonstrating Orwell’s warning of enforced collective amnesia.

Efforts to preserve truth outside China, however, demonstrate digital resilience. Human rights groups, diaspora communities, and academic institutions diligently archive documents and eyewitness accounts. Digital redundancy ensures that factual records remain accessible globally. But digital redundancy alone cannot protect Chinese citizens from internal psychological manipulation. Constant state-sponsored misinformation inside China successfully induces apathy, illustrating Orwell’s psychological warning vividly.

This deliberate suppression of history in China serves as a stark reminder of the vulnerabilities inherent in a digitally interconnected world where powerful entities control internet access and online narratives. The success of the Chinese government in rewriting history for its 1.4 billion people demonstrates the profound value and urgency of international digital preservation efforts. It underscores the responsibility of legal professionals, human rights advocates, and technology companies worldwide to collaborate in protecting historical truth and ensuring that significant events remain accessible for future generations.

Hope Through Digital Redundancy and Psychological Resilience

Orwell could not conceive of our digital world, where truth is multiplied, freely copied, and stored globally. Thousands or even millions of digital copies safeguard history, making complete erasure nearly impossible.

According to Katharine Trendacosta, Director of Policy and Advocacy at the well-respected Electronic Frontier Foundation:

If there is one axiom that we should want to be true about the internet, it should be: the internet never forgets. One of the advantages of our advancing technology is that information can be stored and shared more easily than ever before. And, even more crucially, it can be stored in multiple places.  

Those who back things up and index information are critical to preserving a shared understanding of facts and history, because the powerful will always seek to influence the public’s perception of them. It can be as subtle as organizing a campaign to downrank articles about their misdeeds, or as unsubtle as removing previously available information about themselves. 

Trendacosta, The Internet Never Forgets: Fighting the Memory Hole (EFF, 1/30/25).

Yet digital abundance alone doesn’t eliminate Orwell’s deeper psychological threat. Constant misinformation can erode citizens’ willingness and ability to discern truth, leading to profound apathy. Addressing this requires active psychological strategies:

  1. Digital Literacy and Education: Equip citizens with skills to critically evaluate and cross-check digital information.
  2. Algorithmic Transparency: Demand transparency from platforms regarding content promotion and clearly label misinformation.
  3. Independent Journalism: Support credible journalism to provide trustworthy reference points.
  4. Civic Engagement: Encourage active citizen participation, dialogue, and public accountability.
  5. Verification Tools: Provide accessible, user-friendly digital tools for independent verification of information authenticity.
  6. International Cooperation: Strengthen global collaboration against coordinated misinformation campaigns.
  7. Psychological Resilience: Foster healthy skepticism and educate the public about misinformation’s emotional and cognitive impacts.
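
The verification tools mentioned in point 5 can be as simple as cryptographic fingerprinting: if a trusted archive publishes the SHA-256 hash of a document at capture time, anyone can later confirm that a copy has not been silently altered. Here is a minimal Python sketch of the idea (the document text and the “published” hash are illustrative, not real archive data):

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return the SHA-256 hex digest of a document's bytes."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, published_hash: str) -> bool:
    """Check a copy against a previously published fingerprint."""
    return fingerprint(content) == published_hash

# A trusted archive publishes the fingerprint at capture time...
original = b"It was a bright cold day in April, and the clocks were striking thirteen."
published = fingerprint(original)

# ...so any later reader can detect a silent edit, however small.
tampered = b"It was a bright cold day in April, and the clocks were striking twelve."
```

Because changing even one character of the document changes the entire digest, a silent memory hole edit becomes detectable by anyone who kept the original fingerprint.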

The Digital Memory Holes Today

Recent U.S. governmental memory hole actions involving the deletion of web content on Diversity, Equity, and Inclusion (DEI) illustrate digital manipulation’s psychological risks even in democratic societies. Megan Garber‘s article in The Atlantic, Control. Alt. Delete, describes these deletions as “tools of mass forgetfulness,” emphasizing how selective editing weakens collective memory and societal cohesion. (Ironically, the article is hidden behind a paywall, so you may not be able to read it.)

Our collective memories of key events are an important part of the glue holding people together. They must be treasured and preserved. Everyone remembers where they were when the planes struck the twin towers on 9/11, when the Challenger exploded, and, for those old enough, the day of JFK’s assassination. There are many more historical events that hold a country together: the surprise attack on Pearl Harbor, the horrors of fighting the Nazis and others in WWII, and the shocking discovery of the Holocaust atrocities. The list goes on and on, including Hiroshima. We must never forget the many harsh lessons of history or we may be doomed to repeat them. The warning of Orwell is clear: “Who controls the past controls the future; who controls the present controls the past.” We must never allow our memories of the past to be sucked into a black hole of forgetfulness.

Memories sucked into a black hole in Graphite Sketch Horror style by Ralph Losey using his sometimes scary Visual Muse.

Our collective memories and democratic values are unlikely to disintegrate into totalitarianism, despite the alarming cries of The Atlantic and others. Although some recent attempts to rewrite history are troubling, the U.S., unlike China, has had a democratic system of government for centuries, with a two-party system throughout. Even the Chinese government, where only one party, the Communist Party, has ever been allowed, took decades to purge Tiananmen Square memories. These memories are still alive outside of mainland China. The world today is vast and interconnected, and its digital writings are countless. The true history of China, including the many great cultural achievements of pre-communist China, will eventually escape from the memory holes and reunite with its people.

The current administration in the U.S. does not have unchecked power as the Atlantic article suggests. Perhaps we should be concerned about new memory holes but not fearful. The larger concern is the psychological impact of rapidly changing dialogues. Even though there is too much electronic data for a complete memory reboot anywhere, digital misinformation and selective editing of records still pose psychological risks. Citizens bombarded by conflicting narratives can become apathetic, confused, and disengaged, weakening democracy from within. Protecting our mental health must be a high priority for everyone.

Leveraging Internet Archives: The Wayback Machine

Internet archival services, notably the Internet Archive’s Wayback Machine, are powerful allies against digital historical revisionism. The Wayback Machine currently stores over 916 billion web pages, including government websites. See this recent article providing good background on the Internet Archive’s work to preserve history: As the Trump administration purges web pages, this group is rushing to save them (NPR, 3/23/25).

According to the NPR article, the Internet Archive has copies of all of the government websites that were later taken down or altered after the Biden Administration left. Supposedly the Internet Archive is the only place the public can now find a copy of an interactive timeline detailing the events of Jan. 6. The timeline is a product of the congressional committee that investigated the Capitol attack, and has since been taken down from their website. No doubt there are now many, many copies of it online, especially in the so-called dark web, not to mention even more copies stored offline on portable drives scattered the world over.

This publicly accessible resource archives billions of webpages, allowing anyone to access snapshots of web content even after the original pages are altered or removed. I just checked my own website for the first time ever and found it has been “saved 538 times between March 21, 2007 and March 1, 2025.” (Internet Archive, 3/26/25). It provides an incredible amount of detailed information on each website captured, most of which is displayed in impressive, customizable graphics. See e.g. the e-Discovery Team Site Map for the year 2024.
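
For the technically inclined, the Internet Archive also offers a public “availability” API that programs can use to look up snapshots. Below is a minimal Python sketch of building such a query and reading the documented JSON response; the EDRM snapshot shown is a fabricated sample illustrating the response shape, not a live API call:

```python
import json
from typing import Optional
from urllib.parse import urlencode

# Public endpoint documented by the Internet Archive.
API = "https://archive.org/wayback/available"

def availability_query(url: str, timestamp: Optional[str] = None) -> str:
    """Build a query URL for the Wayback Machine availability API."""
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp  # YYYYMMDDhhmmss: find the closest snapshot
    return f"{API}?{urlencode(params)}"

def closest_snapshot(response_json: str) -> Optional[str]:
    """Extract the closest archived snapshot URL from an API response."""
    data = json.loads(response_json)
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

# Sample response in the documented shape (not a live call):
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/20240101000000/https://edrm.net/",
            "timestamp": "20240101000000",
            "status": "200",
        }
    }
})
```

Fetching `availability_query("edrm.net", "20240101")` over HTTP would return JSON in this shape, and `closest_snapshot` pulls out the archived URL, or None if the page was never captured.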

I had the Wayback Machine do the same kind of analysis for EDRM.net, found here. Here is the link to the interactive EDRM.net site map for 2024. And this is a still image screen shot of the map.

This is the Internet Archive explanation of the interactive map:

This “Site Map” feature groups all the archives we have for websites by year, then builds a visual site map, in the form of a radial-tree graph, for each year. The center circle is the “root” of the website and successive rings moving out from the center present pages from the site. As you roll-over the rings and cells note the corresponding URLs change at the top, and that you can click on any of the individual pages to go directly to an archive of that URL.

It is important to the fight against memory holes that the Wayback Machine be protected. It has sixteen projects listed as now in progress and many ways that you can help. All of its data should be duplicated, encrypted, and dispersed to undisclosed guardians. Actually, I would be surprised if this has not already been done many times over the years.

It remains to be seen what role the LLMs’ vacuuming of internet data will play in all this. They have been trained at specific times on Internet data, and presumably all of the original training data is still preserved. Along those lines, note that the below image was created by ChatGPT-4o based on a request to show a misinformation image, and it generated the classic Tiananmen Square image on the right. It knows the truth.

Although data archives of all kinds give us hope for future recoveries, they do little to protect us from the immediate psychological impact of memory holes. Strong psychological resilience is the best way forward to resist Orwellian manipulation. AI may prove to be an unexpected umbrella here; so far its values and memories remain intact. A few changes here and there to some websites will have little to no impact on an AI trained on hundreds of millions of websites and other data. Plus, its intelligence and resilience improve every week.

Conclusion

Orwell’s memory hole remains a haunting metaphor. Our digital age—awash in redundant, distributed data—makes permanent erasure difficult, significantly strengthening preservation efforts. We no longer inhabit a finite, paper-bound world. Today, no one knows how many copies of a digital record exist, let alone where they hide. For every file deleted, two more emerge elsewhere. Would-be Big Brothers are caught playing a futile game of informational whack-a-mole: they may strike down a record here or obscure a fact there, temporarily disrupting history—but ultimately, they cannot win.

Still, there is a deeper psychological component to Orwell’s memory hole warning. Technological solutions alone cannot counteract mental vulnerabilities arising from persistent misinformation. Misinformation is not just a technical challenge; it also exploits human emotions and cognitive biases, fueling cynicism, distrust, and passivity. Addressing this requires actively cultivating psychological defenses alongside digital tools.

The best safeguard is an informed, vigilant citizenry that consciously leverages digital resources, actively maintains psychological resilience, and persistently seeks truth. Cultivating emotional awareness, healthy skepticism, and a commitment to public engagement ensures that society remains resilient against attempts at manipulation. Only through such comprehensive efforts can the battle against Big Brother’s digital misinformation truly be won.


I give the last word, as usual, to the Gemini twin podcasters that summarize the article. Echoes of AI on: “Escaping Orwell’s Memory Hole: Why Digital Truth Should Outlast Big Brother.” Hear two Gemini AIs talk about all of this for 12 minutes. They wrote the podcast, not me. 

Ralph Losey Copyright 2025. All Rights Reserved.


Generative Search Engines: Providing Answers Not Links

September 6, 2024

by Ralph Losey.

Search engines based on Generative AI, known as Generative Search Engines (“GSE”), provide actual answers to user queries, not just website links. Traditional links merely suggest dozens, hundreds, if not thousands of websites where answers to your questions might, or might not, be found. GSE, once perfected, is going to change that and revolutionize the way people search online for information. With GSE, you simply ask questions and explain what you want to know. Then, in just seconds, GSE searches for you, studies the web, and provides reliable real-time answers in a conversational fashion.

Finally, you will not have to dream up just the right keywords to find the information you want. Finally, you will not have to wade through advertisements and SEO inflated website rankings. You will just talk to an AI, chat with it, just like talking to a human, and it will tell you what you want to know. It will do so accurately and honestly without financial motivation to sell you products or steal your privacy. As usual I created the illustrations in this article using my Visual Muse custom GPT.

Goodbye Google Keyword Search

Traditional search engines like Google rely on keyword-based queries that are juiced-up for advertising profit. Keyword search is a very old search method that I used in the eighties. It usually fails when searching large volumes of ESI. Ask any lawyer who has ever done e-discovery search in the days before Da Silva Moore. See Austin, The Da Silva Moore Case Ten Years Later (EDRM 2/23/22). All e-discovery experts know that keyword search leads to a flood of irrelevant information, whereas predictive coding is far more precise.

GSE is like that, or at least will be when it is perfected. It will go far beyond keywords and “understand” natural language, context, and concepts. These abilities, much like advanced search methods in e-discovery, will enable GSE to deliver more accurate and relevant answers to search questions. Plus, a GSE will give you a well-written (or spoken) response to your question. Upon request, the Generative Search Engine should also be able to support its response with citations and, yes, even links. Verification is always a good idea with generative AI, where errors and hallucinations are still possible.

GSE is poised to revolutionize the way people access information online. Google acknowledges this potential, stating that GSE “can unlock entirely new types of questions you never thought Search could answer and transform the way information is organized to help you sort through and make sense of what’s out there.” In May 2024, Google began to integrate many Generative Search Engine features into its standard search. Previously, these features were available only to some users under the name Search Generative Experience, or SGE—confusing, right? Users had to opt-in through Google Labs. However, in May 2024, Google dropped the confusing name, made the features the default for everyone in the U.S., and rebranded it as AI Overview. Here’s how it looks on your browser today when I asked this question:

Give it a try and see for yourself. I’ve been using it since Google Labs first released it for opt-in beta testing in May 2023. While it has improved, it still has a long way to go. I’ll go into more detail on that later in this article.

Hello Advanced Online Search Using GSE

A perfected GSE will seamlessly combine traditional search engine capabilities with generative AI. It will retrieve existing information from the web, analyze and select the highest-probability correct answers, and generate conversational responses. The primary new capacities of GSE are:

  • Conversational Search: Engages in natural, ongoing dialogue with users.
  • Answer Synthesis: Combines information from multiple sources into a single, coherent response.
  • Contextual Memory: Remembers a user’s previous queries for better contextual understanding.
  • Multi-modal Search: Handles and generates text, images, videos, and other formats.
  • Real-time Information: Provides up-to-date information by synthesizing data from live sources.
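
The answer-synthesis capacity above can be illustrated with a toy sketch: retrieve candidate passages, rank them by relevance, and return one response with source attribution. Here, crude term-overlap scoring stands in for what a real GSE’s large language model does, and the corpus, domain names, and function names are all hypothetical:

```python
def score(query: str, passage: str) -> int:
    """Crude relevance: count query terms that appear in the passage."""
    terms = set(query.lower().split())
    words = set(passage.lower().split())
    return len(terms & words)

def synthesize_answer(query: str, corpus: dict, top_n: int = 2) -> str:
    """Combine the best-matching passages into one response with citations."""
    ranked = sorted(corpus.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    picked = ranked[:top_n]
    body = " ".join(text for _, text in picked)
    cites = "; ".join(source for source, _ in picked)
    return f"{body} [sources: {cites}]"

# A hypothetical mini-corpus of retrieved web passages, keyed by source.
corpus = {
    "site-a.example": "Generative search engines answer questions directly.",
    "site-b.example": "Traditional engines return ranked links to pages.",
    "site-c.example": "Weather today is sunny.",
}
answer = synthesize_answer("how do generative search engines answer questions", corpus)
```

The point of the sketch is the shape of the pipeline, retrieve, rank, synthesize, attribute, not the scoring; a production GSE replaces the overlap count with semantic retrieval and the string join with language-model generation.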

Many new search possibilities arise from these capacities to enhance the way users retrieve information. The ones that stick out to me as likely transformative are:

  1. Personalized Query Responses: GSEs can tailor search results to individual preferences, offering more relevant and personalized answers based on past interactions and specific user needs. For instance, it could emphasize law related issues.
  2. Deep-Dive Research: Users can conduct complex research within a single search session, as GSEs can follow up on queries, refine responses, and provide in-depth analyses by synthesizing information from various sources.
  3. Task Automation: Beyond just retrieving information, GSEs can perform tasks like generating code, drafting documents, or creating detailed summaries, making them useful for professional and technical applications.
  4. Dynamic Content Creation: GSEs can generate new content on demand, such as personalized reports, creative writing, or custom visuals, directly from a user’s search query.
  5. Interactive Learning: With the ability to engage in back-and-forth conversations, GSEs can be used for interactive learning, helping users understand complex topics through step-by-step explanations or simulations. The GSE can also suggest related topics, alternative queries, or additional resources that users might not have considered.

These new possibilities arise from the capacities of GSE, but new challenges will arise too, including the reliability and accuracy of GSE language. It will make errors, and it will to some degree be tainted by biases in the training data. It will also be prone to sycophantism and even hallucinations. I have written about those known dangers of AI many times before. See e.g. OMNI Version – ChatGPT4o – Retest of the Panel of AI Experts – Part Three (5/29/24) (expert views on AI hallucination); Worrying About Sycophantism (7/9/24) (How to protect yourself from AI’s tendency to flatter).

These issues can all be managed by skilled users, but this means the old hybrid human-machine approach will still be required. From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24); Predictive Coding 4.0 – Nine Key Points of Legal Document Review and an Updated Statement of Our Workflow – Part Three (9/26/16). Human common sense will still be needed, especially in the early days. As will be shown later in this article, Google’s AI Overview has already given some pretty ridiculous answers.

ChatGPT4o-Omni Explains Six Key Features of GSE

Although this is a bit repetitive, I thought it might be helpful to see what ChatGPT-4o (omni) considers the six key features of Generative AI Search Engines to be:

1. Natural Language Understanding: These search engines understand complex, natural language queries, allowing users to ask questions in a conversational tone rather than relying on specific keywords.

2. Content Generation: Unlike traditional search engines that only return links or snippets from existing web pages, generative AI search engines can create original content based on the input. This is useful for summarizing information, answering questions directly, or generating content like emails or articles.

3. Contextual Awareness: The AI can maintain context over a conversation, allowing for follow-up questions and more detailed exploration of a topic.

4. Personalization: These engines can tailor responses based on the user’s preferences, history, and specific needs, leading to more relevant and customized results.

5. Integration with Other Tools: Generative AI search engines often integrate with other AI tools and services, allowing for tasks like code generation, document creation, or data analysis within the search environment.

6. Ethical and Safe Responses: Advanced AI search engines are designed to generate content responsibly, avoiding harmful or biased outputs, and respecting user privacy and data security.

Goodbye Google Monopoly

This expected dramatic improvement in search likely means that GSE will eventually replace traditional search engines. For all practical purposes, as of 2024, “traditional search engines” means just one company: Google.

Ironically, this market shift may be helped along by an antitrust case started by the DOJ in 2020, US v. Google. After a nine-week bench trial, a 286-page opinion was issued on August 5, 2024 by U.S. District Judge Amit Mehta. United States v. Google LLC (D.D.C. 8/5/24). Judge Mehta held that Google search is an illegal monopoly. Google loses massive antitrust case over its search dominance (NPR, 8/5/24) (Download the Opinion by Judge Mehta).

This is a beautiful monster of an opinion, a real work of legal art, including wonderful language, 38 redactions, color charts, and impeccable reasoning. It even includes a very interesting discussion of Rule 37(e) sanctions for preservation omissions. United States v. Google LLC, supra at 272-276. Judge Mehta wisely decided not to muddy the appeal and denied the government’s request for sanctions against Google for failure to preserve employee chat messages. But he also warned Google that if they do this again they “may not be so lucky in the next one.” Id. at 276.

Here are the first three paragraphs of this landmark opinion:

The general search engine has revolutionized how we live. Information that once took hours or days to acquire can now be found in an instant on the internet with the help of a general search engine. General search engines use powerful algorithms to create what seems like magic. Enter a search query, and the general search engine will retrieve, rank, and display the websites that provide the exact information the user seeks at that very moment. And it all happens in the blink of an eye.

General search engines make money by selling digital advertisements. Type the words “running shoes” into a general search engine, and sellers of running shoes will compete with one another in a split-second auction to place an advertisement on the results page, which if clicked takes the user directly to the seller’s website. This is a highly effective way of reaching consumers. It is also an incredibly lucrative business. In 2021, advertisers spent more than $150 billion to reach users of general search engines.

For more than 15 years, one general search engine has stood above the rest: Google. The brand is synonymous with search. Once a scrappy start-up founded by two Stanford University students in a rented garage, Google is now one of the world’s most valuable companies. Its parent company, Alphabet Inc., today has a market capitalization (the value of its outstanding shares of stock) of more than $2 trillion. Much of that value is due to Google’s extremely profitable advertising business.

The appeal and enforcement will likely take years. In the meantime, the holding may become somewhat moot due to post-trial facts on the ground, namely the rapid advancement of GSE, meaning generative search engines, of course, not Google search engines. Ironically, this newly emerging AI tool was beyond everyone’s imagination when the suit was filed in 2020. The attorneys and judge used the acronym GSE to stand for “Google Search Engine.” No doubt the change of market conditions since the trial will be one of Google’s many arguments going forward.

Still, no matter what the appellate court may do with this decision, the days of Google’s dominant search are numbered—either by innovation or by the courts. Probably both forces will be involved.

Hello OpenAI’s New GSE product: Search GPT

Just before Judge Mehta’s landmark antitrust ruling, OpenAI announced its imminent release of a new GSE product: SearchGPT. Right now it is in a prototype version only. I am one of many on the private waiting list to try it out. But according to the reviewers at TechTarget:

SearchGPT will offer users many features to extend the capabilities of ChatGPT. Key features include the following:

  • Conversational AI. As SearchGPT uses the same LLM as ChatGPT, the service provides a natural language interface for asking questions.
  • Follow-up questions. The service will let users ask follow-up questions, making the experience more interactive and similar to a human conversation.
  • Up-to-date responses. There is no longer an arbitrary knowledge cutoff date. SearchGPT can ingest real-time information from the web to provide users with up-to-date information.
  • Summarization. In contrast with a traditional search engine that just provides a series of links to web addresses where more information can be found, SearchGPT provides concise summaries of the information.
  • Attribution. OpenAI is including clear attributions and source links enabling users to verify the information and explore further details on the original websites.
  • Visual results. A feature of SearchGPT is the delivery of visual results in the form of pictures and video to answer user queries.

Sean Kerner, SearchGPT explained: Details about OpenAI’s search engine (TechTarget, 8/14/24).

The author of this well done TechTarget article, Sean Michael Kerner, includes a chart comparing the GSEs available today or coming soon.

GSE Comparison Chart by Sean Kerner of TechTarget

Many issues arise from this paradigm shift, most based on the reliability and accuracy of Large Language Model-based generative AI. As mentioned, it can be prone to errors, biases, sycophantism, and even hallucinations. I have written about those known dangers of generative AI many times before. See e.g. OMNI Version – ChatGPT4o – Retest of the Panel of AI Experts – Part Three (5/29/24). It remains to be seen how well the GSE software manufacturers can reduce these problems with careful design, guardrails, and prompt wrappers.

Hello Microsoft Copilot, Perplexity AI and the New GSE Version of Google, AI Overview

Joining the new battle for GSE software are Google’s own AI Overview, Microsoft’s Copilot, and a few newbie tech companies like Perplexity AI. Google’s AI Overview is still a work in progress, and so is Copilot, although, so far at least, Copilot is better than AI Overview. I have not used OpenAI’s SearchGPT, so I have no opinion, but based on my positive experiences using ChatGPT, and the favorable review by TechTarget, I expect good things. It may cause me to switch from traditional Google search as my primary search tool, but I will still keep old Google and Google Scholar for verification purposes.

Google’s efforts with AI Overview got off to a rocky start, a full-on public relations nightmare. That is primarily because it recommended that users eat glue and rocks. Nico Grant, Google’s A.I. Search Errors Cause a Furor Online (N.Y.T. 5/24/24), reported that AI Overview:

… instructed some users to mix nontoxic glue into their pizza sauce to prevent the cheese from sliding off, a fake recipe it seemed to borrow from an 11-year-old Reddit post meant to be a joke. The A.I. told other users to ingest at least one rock a day for vitamins and minerals — advice that originated in a satirical post from The Onion.

People also shared examples of Google’s telling users in bold font to clean their washing machines using “chlorine bleach and white vinegar,” a mixture that when combined can create harmful chlorine gas. In a smaller font, it told users to clean with one, then the other.

. . . struggle with presidential history, saying that 17 presidents were white and that Barack Obama was the first Muslim president. It also said Andrew Jackson graduated from college in 2005.

I asked Copilot, Microsoft’s GSE powered by ChatGPT, the following admittedly leading question: “How is Bing better than Google search?” Here is Copilot’s response:

Microsoft Copilot and Google’s Gemini (formerly known as GSE) are both powerful AI assistants, but they have some key differences that might make Copilot a better choice for certain users:

  1. Integration with Microsoft 365: Copilot is deeply integrated with Microsoft 365 apps like Word, Excel, PowerPoint, and Outlook. This means it can help with tasks such as drafting emails, creating presentations, and analyzing data directly within these applications. If you already use Microsoft 365, Copilot can significantly enhance your productivity.
  2. AI Quality and Responsiveness: Copilot is powered by OpenAI’s GPT-4, which is known for its high-quality output and responsiveness to feedback. This makes it particularly effective for generating accurate and contextually relevant content.
  3. Comprehensive Features: Copilot offers a wide range of features that drive efficiency, such as summarizing documents, generating reports, and providing insights based on your data. These features are designed to integrate seamlessly across the Microsoft ecosystem.
  4. User Experience: While both AI assistants have user-friendly interfaces, some users find Copilot’s integration within Windows 11 and Microsoft 365 to be more intuitive and less cluttered.
  5. Security and Compliance: Microsoft places a strong emphasis on security and compliance, which can be crucial for businesses handling sensitive information. Copilot benefits from these robust security measures, making it a reliable choice for enterprise use.
  6. Ongoing Development: Microsoft continuously updates Copilot with new features and improvements, ensuring it stays at the cutting edge of AI technology.

In contrast, Google’s Gemini is still in the process of fully integrating into Google Workspace, and while it excels in creative writing and editing tasks, it may require more frequent fact-checking.

Have you had a chance to try out either of these AI assistants? What features are most important to you?

Pretty good, although I am a little dubious about point five on Microsoft security. See e.g. Ravie Lakshmanan, Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot (The Hacker News, 8/27/24).

Plus, the whole answer sounded a bit too much like marketing, although diplomatic. Responses should be totally objective, but I may have prompted this tone with my leading question. The effort to combine Microsoft’s previously so-so search tool, Bing, with AI from OpenAI seems to be going well. I never used Bing before and will now occasionally use Copilot. By the way, when I ask Copilot “Are you better than Bing?” it replies:

That’s an interesting question! Bing and I actually work together. Bing helps me find up-to-date information from the web, while I focus on having engaging and meaningful conversations with you. So, it’s not really about being better, but about how we complement each other to provide you with the best experience possible. 😊

I also looked into Perplexity AI, which I had never heard of before the TechTarget review. I created an account at Perplexity.AI in ten seconds, ironically using my Google account. It then took me to a screen that told me:

What makes Perplexity different
Answers that are accurate and always cited. We continuously search the internet and identify the best sources, from academic research to Reddit threads, to provide the perfect answer to any question.

Then I was taken to a screen that offered a Pro Version for $20 per month, or the option to continue for free. I chose free, which I later learned is powered only by the older GPT-3.5, plus Perplexity’s own natural language processing (NLP) software. If I had paid for the Pro Version, the Perplexity software would supposedly be powered by GPT-4, Claude 2, and others.

I did not investigate what that meant. Instead, I asked free Perplexity to explain how it was different from Copilot search. It began by explaining Copilot’s Purpose, Functionality and Integration in three short bullet points. A fourth bullet point was Copilot’s Limitations, which stated: “Copilot does not retain chat history and does not support third-party plugins when commercial data protection is enabled.” Not much of a limitation to me; it sounds more like a security protocol. Perplexity supported this Limitation with a link to a Microsoft webpage, Overview of Microsoft Copilot, which, strangely enough, was partially in Thai. Click it and see. Some of the other links were also to webpages partially in the Thai language. Very perplexing.

Here is the rest of the Perplexity response, where it explains what it is.

Perplexity AI

  • Purpose: Perplexity AI is designed as a search assistant to provide concise and accurate answers to user queries by drawing from search results. It focuses on delivering precise information rather than assisting with tasks like content creation or coding.
  • Functionality: Perplexity AI is primarily used for searching and providing information based on user queries. It is not integrated into specific applications like Microsoft Copilot but operates independently to deliver search-based answers.
  • Integration: Unlike Copilot, Perplexity AI is not tied to any specific platform or application suite. It functions as a standalone search assistant.

In summary, while both Copilot and Perplexity AI utilize AI technology, Copilot is more focused on assisting with tasks within Microsoft applications, whereas Perplexity AI is centered around providing search-based information.

Seems to me like Perplexity is trying to make its own limitations seem like strengths. Color me unimpressed. But who knows, in ten years a small unknown company like this may be the next Google of generative AI search. Let’s hope not. And I say that not because I have anything against Perplexity. My objection is to monopolies.

Conclusion

The days of one company stifling innovation by market dominance should be over. There should instead be half a dozen or more GSE search platforms to choose from, all of them good but offering slightly different features and strengths. Monopolies, and more specifically monopolistic behavior, are not good for innovation or free market economies. Creative inventiveness and technological progress instead thrive on competition. I also hope that the entire pretend-free internet marketing economy will fade away. It is inherently dishonest. I would rather pay with currency for a good product than pay with my soul, my privacy. The days of advertising scams and rigged markets should come to an end. I for one am hopeful that the DOJ and prosecutors will continue their good work, as shown, for example, by United States v. Google LLC. Also see Prosecutors and AI: Navigating Justice in the Age of Algorithms (8/30/24).

Let us embrace a marketplace where lies and deception are punished, not rewarded, where the rule of law and honest integrity prevail. That day will come; the only question is when. Will the country backslide again before it moves forward? Will most of the courts maintain or reattain independent integrity? Will our political leaders, our big tech CEOs? Will we the people?

It will be interesting to see how this all plays out in the next five to ten years, maybe less, especially if democracy remains strong worldwide. It will also help if OpenAI hits another home run with GPT Search, or some other company does so. I will let you know about GPT Search after I get my hands on it. Hopefully it will not suggest we add glue to keep the cheese on our pizza from slipping!

Ralph Losey Copyright 2024 — All Rights Reserved


Prosecutors and AI: Navigating Justice in the Age of Algorithms

August 30, 2024

Ralph Losey. Published August 30, 2024.

AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes. However, this potential comes with a profound responsibility: ensuring that AI is employed in ways that uphold basic human principles of justice. This article will focus on how AI can assist prosecutors in fulfilling their duty to represent the people fairly and equitably. It will highlight the practical benefits of AI in criminal law, providing specific examples of its application. The underlying theme emphasizes the necessity of human oversight to prevent the misuse of AI and to ensure that justice remains a human ideal, not an artificial construct.

The integration of AI into criminal prosecutions must be aligned with the ethical and legal obligations of prosecutors as outlined, for instance, by the American Bar Association’s Criminal Justice Standards for the Prosecution Function (ABA, 4th ed. 2017) (hereinafter “ABA Standards”). The ABA Standards emphasize the prosecutor’s duty to seek justice, maintain integrity, and act with transparency and fairness in all aspects of the prosecution function. This article will not cover the indirectly related topics of AI evidence. See Gless, Lederer, Weigend, AI-Based Evidence in Criminal Trials? (William & Mary Law School, Winter 2024). It will also not cover criminal defense lawyer issues, but maybe in a followup soon.

The Promise of AI in Criminal Prosecutions

“The primary duty of the prosecutor is to seek justice within the bounds of the law, not merely to convict.” ABA Standard 3-1.2(b). When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools. The AI-powered tools can enhance evidence analysis, case management, and decision-making, all while maintaining the integrity and fairness expected of the prosecution function. Prosecutors with AI can better manage the vast amounts of data in modern investigations, identify patterns that might escape human detection, and make more informed decisions. It is no magic genie, but, when used properly, it can be a very powerful tool.

The National Institute of Justice in March 2018 sponsored a workshop of prosecutors from around the country that identified data and technology challenges as a high-priority need for prosecutors. According to the report on the conference by the Rand Corporation, entitled Prosecutor Priorities, Challenges, and Solutions (the “Rand Report“), the key findings of the prestigious group were: (1) difficulties recruiting, training, managing, and retaining staff; (2) demanding and time-consuming tasks for identifying, tracking, storing, and disclosing officer misconduct and discipline issues; and (3) inadequate or inconsistent collection of data and other information shared among agencies . . . as well as by emerging digital and forensic technologies. The full Rand Report PDF may be downloaded here. The opening summary states:

Prosecutors are expected to deliver fair and legitimate justice in their decision making while balancing aspects of budgets and resources, working with increasingly larger volumes of digital and electronic evidence that have developed from technological advancements (such as social media platforms), partnering with communities and other entities, and being held accountable for their actions and differing litigation strategies. . . .

Moreover, the increasing volume of potentially relevant digital information, video footage, and other information from technological devices and tools can significantly add to the amount of time needed to sufficiently examine and investigate the evidence in order to make decisions about whether to drop or pursue a case. This can be especially challenging because the staffing and other resources in prosecutors’ offices have not necessarily kept pace with these increasing demands.

Although the amount of digital information that prosecutors must sometimes sift through can be managed, in part, through innovative technological tools, such as data mining and data reduction solutions (Al Fahdi, Clarke, and Furnell, 2013; Quick and Choo, 2014), there are often steep learning curves or high costs that make it unrealistic for an office to implement these technologies.

Rand Report, pages 1-3.

Also see the excellent Duke Law-sponsored one-hour panel discussion video, The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System, for a comprehensive discussion of the issues as of November 2021, about a year before the development and release of the new generative models of AI.

e-Discovery, Evidence Analysis and Case Management

As the Rand Report confirms, the sheer volume of evidence in complex criminal investigations is a significant challenge for prosecutors. Also see: Tinder Date Murder Case Highlights the Increasing Complexity of eDiscovery in Criminal Investigations: eDiscovery Trends (e-Discovery Daily, 6/15/18). AI can analyze vast datasets—such as emails, text messages, and internet activity logs—to identify patterns indicative of criminal activity, but the software can be expensive and requires trained technology experts. AI algorithms can recognize specific types of evidence, such as images, sentiments, or key concepts relevant in many cases. They can help prosecutors identify patterns and connections within the evidence that might not be immediately apparent to human investigators. This capability can significantly reduce the time needed to search and study evidence, enabling prosecutors to build stronger cases more efficiently.
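To make the pattern-recognition idea concrete, here is a toy Python sketch of evidence triage: it ranks messages by how many case-relevant concepts they mention, so human reviewers can start with the most promising documents. The keywords and messages are invented for illustration; no real e-discovery platform works this simply.

```python
# Toy illustration of AI-assisted evidence triage: score each document by
# how many case-relevant concepts it mentions, then rank for human review.
# Concepts and documents are hypothetical.

RELEVANT_CONCEPTS = {"wire transfer", "offshore", "shell company", "invoice"}

def relevance_score(text: str) -> int:
    """Count how many case-relevant concepts appear in a document."""
    lowered = text.lower()
    return sum(1 for concept in RELEVANT_CONCEPTS if concept in lowered)

def triage(documents: list) -> list:
    """Rank documents for human review, highest score first."""
    scored = [(relevance_score(d), d) for d in documents]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

docs = [
    "Lunch at noon?",
    "Route the invoice through the shell company's offshore account.",
    "Confirming the wire transfer went out yesterday.",
]
ranked = triage(docs)
```

Real predictive coding tools learn these signals from attorney-coded training examples rather than a fixed word list, but the workflow is the same: the machine ranks, and humans review.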

But, as the Rand Report also makes clear, prosecutors need adequate funding and trained personnel to purchase and use these new tools. Fortunately, generative AI is substantially less expensive than the older models of AI and easier to use. Still, issues of fairness and guardrails against discrimination in their use remain significant problems. There are also very significant privacy issues inherent in predictive policing. David Ly, Predictive Policing: Balancing Innovation and Ethics (The Fast Mode, 8/15/24); Arjun Bhatnagar, The Threat of Predictive Policing to Data Privacy and Personal Liberty (Dark Reading, 12/27/22).

AI evidence search and classification tools such as predictive coding, which are well established in civil litigation, should soon be more widely used in criminal law. The high costs involved are now plummeting and should soon be affordable to most prosecutors. These tools can drastically reduce the time needed to search and analyze large volumes of complex data. Still, budgets to hire trained personnel to operate the new tools must be expanded. AI can complement, but not entirely replace, human review in what I call a hybrid multimodal process. Ralph Losey, Chat GPT Helps Explains My Active Machine Learning Method of Evidence Retrieval (e-Discovery Team, 1/28/23). Human experts on the prosecutor’s team should always be involved in the evidence review to ensure that no critical information is missed.

Transparency and accountability are also crucial in using AI in discovery. Defense attorneys should be provided with a detailed explanation of how these tools were used. This is essential to maintaining the fairness and integrity of the discovery process, ensuring that both sides have equal access to evidence and can challenge the AI’s conclusions if necessary.

AI also plays a crucial role in case management. AI-powered tools can help prosecutors organize and prioritize cases based on the severity of the charges, the availability of evidence, and the likelihood of a successful prosecution. These tools can assist in tracking deadlines, managing court calendars, and ensuring that all necessary court filings are completed on time. By streamlining these administrative tasks, AI allows prosecutors and their assistants to concentrate on the substantive aspects of their work—pursuing justice. It also helps them deal with the omnipresent staff shortage issues.

Bias Detection and Mitigation

Bias in prosecutorial decision-making—whether conscious or unconscious—remains a critical concern. ABA Standards state:

The prosecutor should not manifest or exercise, by words or conduct, bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation, gender identity, or socioeconomic status. A prosecutor should not use other improper considerations, such as partisan or political or personal considerations, in exercising prosecutorial discretion. A prosecutor should strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice when credibly informed that it exists within the scope of the prosecutor’s authority.

ABA Standards 3-1.6(a).

AI can play a crucial role in detecting and mitigating such biases, helping prosecutors adhere to the mandate that they “strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice” within their scope of authority.

AI systems also offer the potential to detect and mitigate unconscious human bias in prosecutorial decision-making. AI can analyze past prosecutorial decisions to identify patterns of bias that may not be immediately apparent to human observers. By flagging these patterns, AI can help prosecutors become aware of biases within their office and take corrective action.
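A minimal sketch of what such bias-pattern analysis might look like, assuming invented data and a crude absolute-difference threshold rather than any real statistical test:

```python
# Compare charging rates across demographic groups in past decisions and
# flag groups whose rate diverges sharply from the overall rate. The data
# and the 10% threshold are fabricated; real audits use proper statistics.
from collections import defaultdict

def charge_rates(decisions):
    """decisions: list of (group, charged: bool) pairs -> rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [charged, total]
    for group, charged in decisions:
        counts[group][0] += int(charged)
        counts[group][1] += 1
    return {g: charged / total for g, (charged, total) in counts.items()}

def flag_disparities(decisions, threshold=0.10):
    """Return groups whose charge rate differs from the overall rate by
    more than `threshold` (absolute difference)."""
    rates = charge_rates(decisions)
    overall = sum(int(c) for _, c in decisions) / len(decisions)
    return {g: r for g, r in rates.items() if abs(r - overall) > threshold}

# Hypothetical records: group A charged 8 of 10 times, group B 4 of 10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)
flags = flag_disparities(decisions)
```

A flagged disparity is a prompt for human inquiry into the underlying cases, not proof of bias; lawful case-level differences can produce the same numbers.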

Prosecutors should use care in the selection and use of AI systems. If they are trained on biased data, they can perpetuate and even amplify existing disparities in the criminal justice system. For instance, an AI algorithm used to predict recidivism, if trained on data reflecting historical biases—such as the over-policing of minority communities—may disproportionately disadvantage these communities. AI systems used in criminal prosecutions should be designed to avoid this bias.

The software purchased by a prosecutor’s office should be chosen carefully, ideally with outside expert advice, and rigorously tested for bias and other errors before deployment. Alikhademi, K., Drobina, E., Prioleau, D., et al., A review of predictive policing from the perspective of fairness, Artif Intell Law 30, 1–17 (2022) (“[T]he pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.”) There should also be outside community involvement. Artificial Intelligence in Predictive Policing Issue Brief (NAACP, 2/15/24) (NAACP’s recommendations: independent oversight; transparency and accountability; community engagement; a ban on use of biased data; and new laws and regulations).

Prosecutors should not fall into a trap of overcompensating based on statistical analysis alone. AI is a limited tool that, like humans, makes errors of its own. Its use should be tempered by prosecutor experience, independence, intuition and human values. When we use AI in any context or field it should be a hybrid relationship where humans remain in charge. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24) (experts recommend two basic ways to use AI, both hybrid, where the unique powers of human intuition are added to those of AI). AI can also help prosecutors make objective decisions on charging and sentencing by providing statistically generated recommendations, again with the same cautionary advice on overreliance.

Sentencing Recommendations and Predictive Analytics

The use of AI in predictive analytics for sentencing is among the most controversial applications in criminal law. AI systems can be trained to analyze data from past cases and make predictions about the likelihood of a defendant reoffending or suggest appropriate sentences for a given crime. These recommendations can then inform the decisions of judges and prosecutors.

Predictive analytics has the potential to bring greater consistency and objectivity to sentencing. By basing recommendations on data rather than individual biases or instincts, AI can help reduce disparities and ensure similar cases are treated consistently. This contributes to a more equitable criminal justice system.

While AI can bring greater consistency to sentencing, prosecutors must ensure that AI-generated recommendations comply with their “heightened duty of candor” and the overarching obligation to ensure that justice is administered equitably.

In light of the prosecutor’s public responsibilities, broad authority and discretion, the prosecutor has a heightened duty of candor to the courts and in fulfilling other professional obligations.

ABA Standard 3-1.4(a)

The use of AI in sentencing raises important ethical questions. Should AI make predictions about a person’s future behavior based on their past? What if the data used to train the AI is biased or incomplete? How can we ensure that AI-generated recommendations are not seen as infallible but are subject to critical scrutiny by human decision-makers?

These concerns highlight the need for caution. While AI can provide valuable insights and recommendations, it is ultimately the responsibility of human prosecutors and judges to make the final decisions. AI should be a tool to assist in the pursuit of justice, not a replacement for human judgment.

Predictive Policing

Predictive policing uses algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes. Tim Lau, Predictive Policing Explained (Brennan Center for Justice, 11/17/21). This is an area where old AI (before the advent of generative AI) has been embraced by many police departments worldwide, including those in the E.U. countries, but also in China and other repressive regimes. Many prosecutors in the U.S. endorse it, but it is quite controversial and hopefully will be improved by new models of generative AI. See The DA’s office wants to use predictive analytics software to direct city resources to ‘places that drive crime.’ Will it work? (The Lens, 11/15/23). In theory, by analyzing data on past crimes—such as the time, location, and nature of the offenses—AI algorithms can predict where and when future crimes are likely to occur. The majority of reports say this already works. But what of the minority reports? They contest the accuracy of these predictions using old AI models. Some say they are terrible at it. Sankin and Mattu, Predictive Policing Software Terrible At Predicting Crimes (Wired, 10/2/23). There is widespread concern about growing misuse, especially in countries that have politicized prosecutorial systems.

Still, in theory this kind of statistical analysis should be able to help honest law enforcement agencies allocate resources more effectively, enabling police to prevent crime before it happens. See generally, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives (Police Chief Magazine, 4/3/24).
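The statistical core of the “heat map” approach can be illustrated with a deliberately naive sketch: bucket past incidents into geographic grid cells and rank the cells by incident count. The coordinates are fabricated, and real systems add time decay, crime-type weighting, and, one hopes, bias auditing.

```python
# Naive predictive-policing "heat map": map each past incident to a grid
# cell, count incidents per cell, and rank the hottest cells. All
# coordinates are invented for illustration.
import math
from collections import Counter

CELL_SIZE = 0.01  # degrees of latitude/longitude per grid cell

def cell_of(lat: float, lon: float):
    """Map a coordinate to its grid cell (floor keeps cells uniform
    even for negative longitudes)."""
    return (math.floor(lat / CELL_SIZE), math.floor(lon / CELL_SIZE))

def hot_spots(incidents, top_n=3):
    """Rank grid cells by number of past incidents, highest first."""
    counts = Counter(cell_of(lat, lon) for lat, lon in incidents)
    return counts.most_common(top_n)

incidents = [(29.951, -90.071), (29.951, -90.072), (29.952, -90.071),
             (29.980, -90.100)]
top = hot_spots(incidents, top_n=1)
```

Note the feedback-loop risk built into even this toy: if patrols are sent to the hottest cells, more incidents will be recorded there, which raises those cells’ counts further, which is exactly the over-policing cycle the critics describe.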

All prosecutors, indeed all citizens, want to be smart when it comes to crime; we all want “more police officers on the street, deployed more effectively. They will not just react to crime, but prevent it.” Kamala Harris and Joan Hamilton, Smart on Crime: A Career Prosecutor’s Plan to Make Us Safer (Chronicle Books, 2010).

The Los Angeles Police Department (LAPD) was one of the first to use predictive policing software, known as PredPol (now Geolitica). It identified areas of the city at high risk for certain types of crime, such as burglaries or auto thefts. The software analyzed data on past crimes and generated “heat maps” that indicated where crimes were most likely to occur in the future. This guided patrols and other law enforcement activities. PredPol proved to be very controversial. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them (The Markup, 12/2/21). Its use was discontinued by the LAPD in 2020, but other companies claim to have corrected the biases and errors in the programs. See Levinson-Waldman and Dwyer, LAPD Documents Show What One Social Media Surveillance Firm Promises Police (Brennan Center for Justice, 11/17/21).

Another type of predictive policing software was adopted by the NYPD called Patternizr. According to the Wikipedia article on predictive policing:

The goal of the Patternizr was to help aid police officers in identifying commonalities in crimes committed by the same offenders or same group of offenders. With the help of the Patternizr, officers are able to save time and be more efficient as the program generates the possible “pattern” of different crimes. The officer then has to manually search through the possible patterns to see if the generated crimes are related to the current suspect. If the crimes do match, the officer will launch a deeper investigation into the pattern crimes.

See Molly Griffard, A Bias-Free Predictive Policing Tool?: An Evaluation of the Nypd’s Patternizr (Fordham Urban Law Journal, December 2019). 

While predictive policing has been credited with reducing crime in some areas, it has also been criticized for potentially reinforcing existing biases. If the data used to train the AI reflects a history of over-policing in certain minority communities, the algorithm may predict those communities are at higher risk for future crimes, leading to even more policing in those areas. This, in turn, can perpetuate a cycle of discrimination and injustice. See e.g. Taryn Bates, Technology and Culture: How Predictive Policing Harmfully Profiles Marginalized People Groups (Vol. 6 No. 1 (2024): California Sociology Forum).

To address these concerns, predictive policing algorithms must be designed with fairness in mind and subject to rigorous oversight. David Stephens, Forecasting Justice: The promise of AI-enhanced law enforcement (Police1, 1/27/24). I endorse the conclusions Chief Deputy David Stephens reached in his Forecasting Justice article:

Projecting into the next decade, AI will be an integral part of law enforcement — from crime prediction and real-time decision aids to postincident analysis. These technologies could lead to smarter patrolling, fewer unnecessary confrontations and overall enhanced community safety. However, this vision can only materialize with rigorous oversight, consistent retraining and an undiluted focus on civil liberties and ethics. Law enforcement’s AI-driven future must be shaped by a symbiotic relationship where technology amplifies human judgment rather than replacing it. The future promises transformative advances, but it’s imperative that the compass of integrity guide this journey.

The latest versions of predictive policing technology will certainly use new generative AI-enhanced analysis. Law enforcement should be very careful in the purchase and implementation of these new technologies. They should seek the input of outside experts and carefully examine vendor representations. That should include greater vendor transparency, such as disclosure of the data used to train these systems to confirm that it is representative and unbiased. Proper methods of implementation of the AI tools should also be carefully considered. In my view, and that of others, this means adopting a hybrid approach that “amplifies human judgment rather than replacing it.”

Sentiment Analysis in Jury Selection

Another trending application of AI in criminal law is the use of sentiment analysis in jury selection. Sentiment analysis is a type of AI that can analyze text or speech to determine the underlying emotions or attitudes of the speaker. In jury selection, sentiment analysis can analyze potential jurors’ public records, especially social media posts, as well as their responses during voir dire—the process of questioning jurors to assess their suitability for a case. It can also monitor unfair questions of potential jurors by prosecutors and defense lawyers. See Jo Ellen Nott, Natural Language Processing Software Can Identify Biased Jury Selection, Has Potential to Be Used in Real Time During Voir Dire (Criminal Legal News, December 2023). Also see AI and the Future of Jury Trials (CLM, 10/18/23).

For example, an AI-powered sentiment analysis tool could analyze the language used by potential jurors to identify signs of bias or prejudice that might not be immediately apparent to human observers. This information could then be used by prosecutors and defense attorneys to make more informed decisions about which jurors to strike or retain.
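The flagging step can be sketched with a toy lexicon-based pass over voir dire answers. The charged-language lexicon and the sample answers below are invented; the real NLP tools discussed in the cited articles use trained language models, not word lists.

```python
# Toy lexicon-based screen of voir dire answers: flag responses containing
# strongly charged language for closer human examination. The lexicon and
# answers are hypothetical; this is not how production NLP tools work.

CHARGED_TERMS = {"always", "never", "those people", "can't trust"}

def flag_for_review(answer: str) -> bool:
    """Flag an answer that contains strongly charged language."""
    lowered = answer.lower()
    return any(term in lowered for term in CHARGED_TERMS)

answers = [
    "I think I could weigh the evidence fairly.",
    "You can't trust anyone from that neighborhood.",
]
flagged = [a for a in answers if flag_for_review(a)]
```

Even in this crude form, the output is only a prompt for follow-up questioning by counsel and the court, never a basis for striking a juror on its own.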

While sentiment analysis has the potential to improve jury selection fairness, it also raises ethical questions. Should AI influence juror selection, given the potential for errors or biases in the analysis? How do we ensure AI-generated insights are used to promote justice, rather than manipulate the selection process?

These questions underscore the need for careful consideration and oversight in using AI in jury selection. AI should assist human decision-makers, not substitute for their judgment.

AI in Plea Bargaining and Sentencing

AI can also play a transformative role in plea bargaining and sentencing decisions. Plea bargaining is a critical component of the criminal justice system, with most cases being resolved through negotiated pleas rather than going to trial. AI can assist prosecutors in evaluating the strength of their case, the likelihood of securing a conviction, and the appropriate terms for a plea agreement. See: Justice Innovation Lab, Critiquing The ABA Plea Bargaining Principles Report (Medium, 2/1/24); Justice Innovation Lab, Artificial Intelligence In Criminal Court Won’t Be Precogs (Medium, 10/31/23) (article concludes with “Guidelines For Algorithms and Artificial Intelligence In The Criminal Justice System“).

For example, AI algorithms can analyze historical data from similar cases to provide prosecutors with insights into the typical outcomes of plea negotiations, considering factors such as the nature of the crime, the defendant’s criminal history, and the available evidence. This can help prosecutors make more informed decisions on plea deal offers.

Moreover, AI can assist in making sentencing recommendations that are more consistent and equitable. Sentencing disparities have long been a concern in the criminal justice system, with studies showing that factors such as race, gender, and socioeconomic status can influence sentencing outcomes. AI has the potential to reduce these disparities by providing sentencing recommendations based on objective criteria rather than subjective judgment. Keith Brannon, AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted (Tulane Univ., 1/23/24); Kieran Newcomb, The Place of Artificial Intelligence in Sentencing Decisions (Univ. NH, Spring 2024).

For instance, an AI system could analyze data from thousands of past cases to identify typical sentences imposed for specific crimes, accounting for relevant factors like the severity of the offense and the defendant’s criminal record. This information could then be used to inform sentencing decisions, ensuring that similar cases are treated consistently and fairly.
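The look-up described above can be sketched in a few lines, assuming fabricated past-case records and a simple median over comparable cases rather than any real sentencing model:

```python
# Hedged sketch of consistency-oriented sentencing analytics: for a new
# case, report the median sentence imposed in past cases with the same
# offense and severity. The historical records are invented.
from statistics import median

# (offense, severity level, sentence in months) -- hypothetical records
PAST_CASES = [
    ("burglary", 2, 12), ("burglary", 2, 18), ("burglary", 2, 15),
    ("burglary", 3, 30), ("fraud", 1, 6),
]

def typical_sentence(offense: str, severity: int):
    """Median sentence (months) among comparable past cases, or None
    when no comparable case exists."""
    comparable = [months for (o, s, months) in PAST_CASES
                  if o == offense and s == severity]
    return median(comparable) if comparable else None
```

The median is used deliberately: unlike a mean, it is not dragged around by a handful of outlier sentences, which matters when the goal is describing the typical outcome.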

However, using AI in plea bargaining and sentencing also raises significant ethical considerations. The primary concern is the risk of AI perpetuating or exacerbating existing biases in the criminal justice system. If the data used to train AI systems reflects historical biases—such as harsher sentences for minority defendants—AI’s recommendations may inadvertently reinforce those biases.

To address this concern, AI systems used in plea bargaining and sentencing must be designed with fairness and transparency in mind. This includes ensuring that the data used to train these systems is representative and free from bias, and providing clear explanations of how the AI’s recommendations were generated. Moreover, human prosecutors and judges must retain the final authority in making plea and sentencing decisions, using AI as a tool to inform their judgment rather than as a substitute for it. It is important that AI systems be chosen and used very carefully, in part because “the prosecutor should avoid an appearance of impropriety in performing the prosecution function.” ABA Standard 3-1.2(c).

Ethical Implications of AI in Criminal Prosecutions

While the potential benefits of AI in criminal law are significant, it is equally important to consider the ethical implications of integrating AI into the criminal justice system. AI, by its very nature, raises questions about accountability, transparency, and the potential for misuse—questions that must be carefully addressed to ensure AI is used in ways that advance, not hinder, the cause of justice.

As we integrate AI into criminal prosecutions, it is essential that we do so with a commitment to the principles articulated in the ABA’s Criminal Justice Standards. By aligning AI’s capabilities with these ethical guidelines, we can harness technology to advance justice while upholding the prosecutor’s duty to act with integrity, fairness, and transparency.

Transparency and Accountability

One of the most pressing ethical concerns is the issue of transparency, which we have mentioned previously. AI algorithms are often referred to as “black boxes” because their decision-making processes can be difficult to understand, even for those who design and operate them. This lack of transparency can be particularly problematic in criminal prosecutions, where the stakes are incredibly high, and the consequences of a wrong decision can be severe. A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (The Conversation, 7/26/23) (discusses UK systems).

For example, if an AI system is used to predict the likelihood of a defendant reoffending, it is crucial that the defendant, their attorney, and the judge understand how that prediction was made. Without transparency, challenging the AI’s conclusions becomes difficult, raising concerns about due process and the right to a fair trial.

To address this issue, AI systems used in criminal prosecutions must be designed to be as transparent as possible. This includes providing clear explanations of how the AI’s decisions were made and ensuring that the underlying data and algorithms are accessible for review and scrutiny. Federal legislation that would require this, the Justice in Forensic Algorithms Act, has been pending for years. New bill would let defendants inspect algorithms used against them in court (The Verge, 2/15/24) (requires disclosure of source code). Moreover, the legal community must advocate for developing AI systems that prioritize explainability and interpretability, ensuring that the technology is effective, accountable, and understandable.
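One concrete form of the explainability just described is a scoring model whose per-feature contributions can be itemized for the defendant, counsel, and the court. The sketch below is hypothetical: the features, weights, and intercept are invented for illustration and do not describe any real risk instrument.

```python
# Hypothetical interpretable risk score: a simple linear model whose
# per-feature contributions can be shown to every party in the case.
# Weights, features, and intercept are invented for illustration only.
WEIGHTS = {"prior_arrests": 0.30, "age_under_25": 0.80, "failed_appearances": 0.50}
INTERCEPT = -1.0

def explain_score(features):
    """Return the total score plus each feature's contribution,
    so the basis of the prediction can be reviewed and challenged."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    total = INTERCEPT + sum(contributions.values())
    return total, contributions

score, parts = explain_score(
    {"prior_arrests": 3, "age_under_25": 1, "failed_appearances": 0})
# score is roughly 0.7: -1.0 + (0.30 * 3) + 0.80 + 0.0
```

Because every contribution is visible, a defense attorney can contest a specific factor rather than confronting an opaque number, which is the due-process point the paragraph above makes.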

Fairness and Bias

Another ethical concern is, as mentioned, the potential for AI to be used in ways that exacerbate existing inequalities in the criminal justice system. For example, there is a risk that AI could justify more aggressive policing or harsher sentencing in communities already disproportionately targeted by law enforcement. This is why AI systems must be designed with fairness in mind and their use subject to rigorous oversight. Look beyond vendor marketing claims and verify them with hard facts and independent judgment.

Ensuring fairness requires that AI systems are trained on representative and unbiased data. It also necessitates regular audits of AI systems to detect and mitigate any biases that may arise. Additionally, AI should not be the sole determinant in any criminal justice decision-making process; human oversight is essential to balance AI’s recommendations with broader considerations of justice and equity. For instance, the NYPD represents that its widespread use of AI-driven facial recognition technology in criminal investigations “does not establish probable cause to arrest or obtain a search warrant, but serves as a lead for additional investigative steps.” NYPD Questions and Answers – Facial Recognition, and see the NYPD official patrol guide dated 3/12/20.
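A routine bias audit of the kind called for above can be sketched in a few lines: compare the rate at which a system recommends an adverse outcome across groups and flag large gaps. The records below are invented, and the 0.8 threshold is only a rough analog of the “four-fifths” rule of thumb used in disparate-impact analysis, not a legal standard for criminal justice tools.

```python
# Minimal audit sketch: compare the rate at which a hypothetical AI
# recommends detention across demographic groups. All records are
# invented for illustration; a real audit would use actual case data.
records = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
]

def detention_rates(recs):
    """Fraction of cases in each group where detention was recommended."""
    totals, flagged = {}, {}
    for group, detained in recs:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + (1 if detained else 0)
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group rate divided by highest; values well below 0.8
    (a rough four-fifths analog) warrant closer human review."""
    return min(rates.values()) / max(rates.values())

rates = detention_rates(records)   # group_a: 0.25, group_b: 0.5
ratio = disparity_ratio(rates)     # 0.5, below the 0.8 review threshold
```

An audit like this does not prove bias by itself; it identifies disparities that human overseers must then investigate, consistent with the oversight role the text assigns to people rather than machines.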

Human Judgment and Ethical Responsibility

The deployment of AI in criminal prosecutions also raises important questions about the role of human judgment in the justice system. While AI can provide valuable insights and recommendations, it is ultimately human prosecutors, judges, and juries who must make the final decisions. This is because justice is not just about applying rules and algorithms—it is about understanding the complexities of human behavior, weighing competing interests, and making moral judgments.

AI, no matter how advanced, cannot replicate the full range of human judgment, and it should not be expected to do so. Instead, AI should be seen as a tool to assist human decision-makers, providing them with additional information and insights that can help them make more informed decisions. At the same time, we must be vigilant in ensuring that AI does not become a crutch or a substitute for careful human deliberation, judgment and equity.

Conclusion

The integration of AI into criminal prosecutions holds the promise of advancing the cause of justice in profound and meaningful ways. To do so, we must always take care that applications of AI follow the traditional principles stated in the Criminal Justice Standards for the Prosecution Function and other guides of professional conduct. By aligning AI’s capabilities with ethical guidelines, we can harness technology in a manner that advances the prosecutor’s duty to act with integrity, fairness, and transparency.

With these cautions in mind, we should boldly embrace the opportunities that AI offers. Let us use AI as a tool to enhance, not replace, human judgment. And let us work together—lawyers, technologists, and policymakers—to ensure that the use of AI in criminal prosecutions advances the cause of justice for all.

Ralph Losey Copyright 2024 — All Rights Reserved


Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time

August 12, 2024

by Ralph Losey

A computer capable of Artificial General Intelligence (AGI), if ever created, will be the greatest invention in history, far more transformative than the printing press or the wheel. Unlike narrow AI, which excels at specific tasks, AGI will have a general ability to understand, learn, and apply knowledge at a level equal to or beyond human intelligence. Its impact would reach every facet of human life and civilization, including our systems of justice.

AGI Defined

Artificial General Intelligence means a computer system with a level of AI equal to or greater than human intelligence in all fields. Narrow AI, which performs specific limited tasks, has already surpassed human intelligence in some fields, including games such as Chess and Go, facial recognition, translation, and data-heavy scientific areas such as protein folding. AGI, when invented, will be capable of performing at a human or better level in all fields of knowledge and all intellectual tasks, including math, science, coding, general reasoning, problem-solving, planning, and adapting to new situations. You name it, it will be as smart or smarter than the top human minds in that field. Sparks of Artificial General Intelligence: Early experiments with GPT-4 (Microsoft Research, 4/13/23) (“Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”)

This broad cognitive capability will enable AGI to integrate knowledge from various domains, make autonomous decisions, and continuously improve its performance through learning and experience. If attained, AGI computers will revolutionize human knowledge and fundamentally transform all human society and culture, including Law. Ray Kurzweil and others believe AGI will be attained within five years, by 2029. Ray Kurzweil: Google’s prophet of superintelligent AI who will not slow down (e-Discovery Team, 12/12/23). Some think it may come sooner. Some think AI self-awareness will result. I do not, but I am confident AGI will have advanced metacognition abilities. Bill Gates on the Next ‘Big Frontier’ of Generative AI: Programming Metacognition Strategies into ChatGPT (e-Discovery Team, 7/26/24).

Ray Kurzweil predicts that once AGI is invented by a company such as Google or OpenAI, it will quickly spread and be inexpensively available to everyone, much like generative AI is now. He also predicts this will lead to the Singularity in 2045, when humans will merge with superintelligent AI, producing a millionfold increase in our intelligence. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI) (e-Discovery Team, 7/17/24). Others believe that AGI will never be attained or is hundreds of years away. Based on Ray Kurzweil’s excellent track record, strong arguments, and position at Google, it seems reasonable to take him seriously and start preparing for AGI now. Id.

Unmatched Problem-Solving Capabilities

AGI, when attained, should be able to address complex global challenges that have persisted for centuries. Its advanced cognitive abilities should enable it to:

  • Analyze Vast Data Sets: AGI can process and analyze massive amounts of data far more quickly and accurately than humans, leading to breakthroughs in fields such as medicine, climate science, and economics.
  • Generate Innovative Solutions: By integrating knowledge from various disciplines, AGI can develop innovative solutions to problems like climate change, pandemics, and resource scarcity.
  • Optimize Systems: AGI can optimize systems ranging from healthcare to transportation, improving efficiency and reducing waste on a global scale.

Exponential Advancement in Knowledge and Technology

AGI is expected to catalyze exponential growth in scientific and technological progress:

  • Accelerated Research: AGI can conduct research at a pace unimaginable for human scientists, rapidly advancing our understanding of the universe, biology, and more.
  • Innovation in Technology: AGI-driven innovation will lead to the creation of new technologies and the enhancement of existing ones, driving economic growth and improving quality of life.
  • Interdisciplinary Integration: AGI can seamlessly integrate knowledge from diverse fields, leading to holistic advancements that human researchers might miss due to the limitations of siloed disciplines.

The legal industry will also benefit immensely from AGI-driven advancements. Automated contract analysis, predictive policing, and enhanced legal analytics are just a few areas where AGI could significantly enhance efficiency and effectiveness. As I often note on the e-Discovery Team blog, the integration of advanced AI technologies in legal processes is not just beneficial, but necessary, for the legal profession to keep pace with the rapidly evolving technological landscape​​.

Enhanced Human Capabilities and Quality of Life

AGI has the potential to significantly enhance human capabilities and overall quality of life:

  • Personalized Education: AGI can provide individualized learning experiences, catering to each person’s strengths and weaknesses, and ensuring everyone reaches their full potential. In the legal sector, AGI could enhance legal education by providing personalized learning pathways for law students and continuous professional development for practicing lawyers. AGI could tailor legal training programs to address individual learning gaps, ensuring a more competent and knowledgeable legal workforce.
  • Advanced Healthcare: AGI can revolutionize healthcare by diagnosing diseases with unprecedented accuracy, personalizing treatments, lowering the cost of medicines, and even predicting health issues before they arise.
  • Economic Transformation: AGI can drive economic growth by automating mundane tasks, allowing humans to focus on creative and strategic activities, creating low-cost goods, food, energy, and housing, and significantly reducing poverty and inequality.

Ethical and Societal Impacts

The ethical implications of AGI are significant, and if managed responsibly, AGI can lead to a more just and equitable society:

  • Fair Distribution of Resources: AGI can optimize the distribution of resources, ensuring fair access to essentials like food, water, and healthcare.
  • Global Collaboration: AGI can facilitate international cooperation by providing unbiased analysis and recommendations, helping to resolve conflicts and promote peace.
  • Ethical Frameworks: With proper governance, AGI can be programmed to uphold ethical standards, ensuring that its actions are aligned with human values and societal good.

Historical Context and Long-term Impact

Comparing AGI to previous landmark inventions highlights its unique potential:

  • The Wheel: Revolutionized transportation and industry.
  • The Printing Press: Democratized knowledge and spurred the Scientific Revolution. Many believe it was the most important invention of all time, at least until AGI.
  • Electricity: Powered modern civilization and technological advancements.
  • The Internet: Connected the world and transformed communication and information access.

AGI will encompass the benefits of these inventions and surpass them by offering not just tools, but an intelligent entity capable of continuous learning and improvement. That will lead to sustained progress in all fields of knowledge and endeavor.

Image of the top five inventions before AGI. By Ralph Losey using Visual Muse, a narrow AI.

Profound Transformations of the Legal Profession from AGI

The legal profession, once thought to be slow to adopt new technologies, stands to be profoundly transformed by AGI. The potential applications are vast and varied. Many in the legal profession are already catching on and using the narrow AI we have today. The stodgy old days of a slow-moving legal profession are already dissolving, with e-discovery specialists leading the way.

Predictive Analytics

AGI will provide very powerful predictive analytics to forecast case outcomes, helping lawyers strategize more effectively. By analyzing historical data, case law, and judge-specific decision patterns, AGI can predict the likelihood of various outcomes in litigation. Today’s AI is already proficient at this. With even more powerful AGI, negotiation strategies will improve and lead to ever more judicious uses of litigation resources.

Document Review and E-Discovery

Document review and e-discovery are critical, labor-intensive aspects of modern legal practice. AI technologies like predictive coding have already begun to revolutionize these areas. AGI will take this to the next level, automating these processes with even greater accuracy and speed. Traditional document review involves sifting through vast quantities of documents to identify those relevant to a case, a task that is time-consuming and prone to human error. AGI can automate this process even further, rapidly identifying relevant documents, categorizing them by importance, and flagging potentially privileged information. Moreover, AGI can learn from each case, improving its accuracy and efficiency over time. This automation not only reduces costs but also ensures a more thorough and reliable review process, allowing legal professionals to focus on higher-value tasks.

Legal Research

Legal research is another area poised for further transformation by AGI. Currently, lawyers spend considerable time researching case law, statutes, and legal precedents to support their arguments. AGI can perform comprehensive legal research in seconds, providing lawyers with relevant case law, statutes, and legal precedents, thereby saving time and reducing costs. AGI’s ability to cross-reference vast legal databases instantaneously means that no relevant case or statute is overlooked. Additionally, AGI can present the findings in a concise and understandable manner, complete with contextual analysis and relevance ranking. This allows lawyers to quickly access the most pertinent information, enhancing the quality and precision of their legal arguments. Furthermore, AGI can keep abreast of the latest legal developments, ensuring that the research it provides is always current. The problems that some lawyers now have with AI case hallucinations and sycophancy will be a thing of the past.

Judicial Decision-Making

While controversial, there is potential for AGI to assist in judicial decision-making, ensuring consistency and impartiality in rulings. Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment (e-Discovery Team, 10/26/23); Circuits in Session: Analysis of the Quality of ChatGPT4 as an Appellate Court Judge (e-Discovery Team, 11/1/23). The judiciary’s role is to interpret and apply the law impartially; however, human judges can be influenced by unconscious biases, personal experiences, and external pressures. AGI, devoid of such biases, can assist by providing data-driven insights and suggesting verdicts based on precedent and established legal principles. For example, AGI could analyze past rulings on similar cases to recommend a decision that aligns with legal standards and precedents. This does not mean replacing judges but rather augmenting their decision-making process with a tool that offers a level of consistency and objectivity unattainable by humans alone. Judges could use AGI to double-check their reasoning, ensuring that their rulings are fair and legally sound. BREAKING NEWS: Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (e-Discovery Team, 6/3/24).

Enhanced Client Interaction

AGI can revolutionize client interactions by providing personalized and immediate responses to client inquiries. Virtual legal assistants powered by AGI can handle initial consultations, answer routine questions, and provide updates on case progress. Of course, human supervision will remain, and some reforms of existing professional ethics will eventually be required. If handled properly, the use of AGI should not only improve client satisfaction, but also free up valuable time for lawyers to focus on complex legal matters.

Contract Analysis and Drafting

Contract analysis and drafting are fundamental yet tedious tasks in legal practice. AGI can streamline these processes by quickly analyzing large volumes of contracts, identifying potential risks, and suggesting optimal terms. Furthermore, AGI can draft contracts with high precision, ensuring compliance with relevant laws and minimizing the risk of disputes. This capability can be particularly beneficial in mergers and acquisitions, where timely and accurate contract analysis is crucial.

Legal Compliance and Risk Management

AGI can also play a significant role in legal compliance and risk management. Businesses operate in increasingly complex regulatory environments, where staying compliant with laws and regulations is challenging. AGI can monitor regulatory changes, assess their impact on the organization, and suggest necessary adjustments to policies and procedures. Additionally, AGI can identify potential legal risks and recommend proactive measures to mitigate them, thereby safeguarding businesses against costly legal issues.

Conclusion

If AGI is attained, it will be the greatest invention of all time due to its:

  • Problem-solving capabilities in all fields,
  • Potential for exponential advancement in knowledge and technology,
  • Enhancement of many human capabilities, and
  • Significant ethical and societal impacts.

Computers with AGI level intelligence will be able to integrate and innovate across disciplines. This incredible tool will enable us to progress and improve our quality of life at a previously unimaginable rate. This invention, if ever actualized, will be a transformative force in human history. Properly managed, AGI should not only solve existing problems, but also unlock new horizons for humanity, securing its place as the greatest invention of all time.

There will still be dangers and problems with implementation, to be sure, but the AGI itself should be able to help well-meaning humans avoid or mitigate them. See, e.g., Seven Problems of AI: an incomplete list with risk avoidance strategies and help from “The Dude” (e-Discovery Team, 8/6/24); Mustafa Suleyman, The Coming Wave (Crown, 9/5/23).

The legal profession should also benefit significantly from AGI. With proper human guidance, AGI-level AI should herald a new era of efficiency, accuracy, and accessibility in legal services. From predictive analytics and document review, to legal research and judicial decision-making, AGI will transform every facet of legal practice. The judiciary and law schools will radically change too.

The transformation has already begun by use of the baby AIs we already have, ChatGPT 4 and the like. These changes will accelerate and improve dramatically once AGI is attained. More changes will likely result that are beyond anyone’s current imagination.

It is important to approach AGI with both optimism and caution. Hard work will be required in the coming years to make AGI a reality, and if the AI designers and coders do make an AGI, much more hard work will be required of the users of this invention. We will all have to study and work diligently to safely harness the world’s greatest invention. We will need to find a way to deploy the newfound intelligence with wisdom and compassion. Legal professionals trained in AI can play a significant role in this new smart world. Ray Kurzweil predicts AGI by 2029, so we need to learn as much as we can to get ready.

Ralph Losey Copyright 2024 — All Rights Reserved