Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse

January 9, 2025

by Ralph Losey. Published January 9, 2025.

In the history of technological revolutions, there are moments that challenge not only our understanding of what is possible but the very nature of reality itself. Google’s latest refinement to its quantum computer, Willow, may represent such a moment. By achieving computational feats once thought to be confined to science fiction, it forces us to confront bizarre new theories about the fabric of the universe. Could this machine, built from the smallest known building blocks of matter, actually provide evidence that parallel universes exist, as some at Google claim? The implications are as profound as they are unsettling.

Introduction

This article discusses Google’s quantum computer, Willow, and the groundbreaking evidence released on December 9, 2024. Willow demonstrated it could perform computations so complex that they would take classical computers longer than the age of the universe to complete. Many, including Hartmut Neven, founder and manager of Google’s Quantum Artificial Intelligence Lab, believe that the unprecedented speed of the quantum computer is only possible because it leverages computations across parallel universes. Google’s recent advances in real-time error correction, achieved by scaling up and stacking qubits, are what made it possible, in this view, for these parallel universes to “work” in our own reality. Google claims to be the first to overcome the main hurdle previously facing the practical use of quantum computers: the immense sensitivity of quantum systems to external disturbances, like stray particles and vibrations, which researchers call noise.

Neven and his team suggest the best way to understand how their computer works is the many-worlds interpretation of quantum mechanics—the multiverse theory. This theory posits that every quantum event splits the universe, leading to a near infinite array of universes. In a TED Talk five months ago, well before Willow’s latest proof of concept and design, Neven described its remarkable quantum capacities and how they align with this theory. He even speculated that consciousness itself might arise from the interaction of infinite multiverses converging into a single neurological form. These are not just bold claims—they are paradigm-shifting ideas that challenge our deepest assumptions about existence.

Crazy, you say? The manager of Google’s Quantum Artificial Intelligence Lab speaking about tiny traversable wormholes, time crystals and quality-controlled computations in multiple universes! Even talking seriously about quantum computers “allowing us to expand human consciousness in space, time and complexity.”

Maybe hard to believe, but paradigm-shifting ideas are often at first dismissed and ridiculed as crazy. Consider the trial of Galileo in 1633 for heresy. Despite Galileo’s eloquent defense of the argument that the Earth revolves around the Sun, he was convicted of heresy and spent the rest of his life, nearly nine years, under house arrest. The final judgment also banned him from all further “TED Talks” of his day about the crazy idea, which obviously defies common sense, “that the sun is the center of the world, and that it does not move from east to west, and that the earth does move, and is not the center of the world.” The judgment by the Catholic Church was not reversed until 1992! Quantum computing, like Galileo’s heliocentric model, challenges us to see beyond what seems obvious and to embrace ideas that defy conventional understanding.

This article explores the quantum parallel universes controversy, which is currently sparking debates across physics, philosophy, and even metaphysics. We’ll examine the topic in a straightforward yet accurate manner, accessible to both experts and curious newcomers. Fasten your seatbelts—today’s scientific theories are as intellectually jarring as Galileo’s were in 1633, when the movement of the Sun across the sky seemed an unshakable truth. As then, we are called to rethink not just how we understand the universe, but our place within it.

To grasp the implications of quantum computing, we must first explore its roots in the fundamental fabric of reality. What happens when exponentially greater possibilities are computed in parallel? What happens when this is applied to generative AI? Will AI deliver answers that are more profound, or entirely transformational? Perhaps, as imagined in my short story, Singularity Advocate Series #1: AI with a Mind of Its Own, On Trial for its Life, these advancements could even lead to AI consciousness. The possibilities are as exhilarating as they are unsettling.

Quantum Computing is Now Doing the Impossible

The multiverse controversy gained new momentum with Google’s claim that its quantum computer, Willow, recently completed a famous benchmark computation, the Random Circuit Sampling (RCS) test, in just five minutes. This achievement is staggering because the same theoretical task would take the fastest classical supercomputers an estimated 10 septillion years (a 1 followed by 25 zeros) to finish! To put that in perspective, the Universe itself is approximately 13.8 billion years old—meaning 10 septillion years is roughly 725 trillion times the age of the Universe. The sheer scale of this comparison defies imagination.

How can such an extraordinary feat be possible? The answer lies in the fundamental principles of quantum computing and its use of qubits. Unlike classical bits, which are confined to being either 0 or 1, qubits exist in a superposition state that is a probabilistic blend of both 0 and 1 simultaneously, until measured. To put it simply, qubits are neither strictly here nor there, neither fully 0 nor fully 1, but somewhere in between. Google’s qubits require superconductivity and can only operate in some of the coldest places in our universe: the artificially constructed refrigerated chambers that hold them. Go inside the Google Quantum AI lab to learn how quantum computing works (video at 3:30-4:30 of 6:17). The qubits are measured, and thereby made to collapse out of their blended zero-and-one state, by use of tuned microwaves.
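To make that probabilistic blend concrete, here is a minimal NumPy sketch of the underlying math, not of Google’s superconducting hardware: a qubit is represented by two complex amplitudes, and measurement returns 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
# Minimal sketch of one qubit as a 2-vector of complex amplitudes.
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition of |0> and |1>: amplitude 1/sqrt(2) for each.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: measurement probabilities are the squared magnitudes.
probs = np.abs(state) ** 2           # -> [0.5, 0.5]

# "Measuring" collapses the blend into a definite 0 or 1 each time.
print(rng.choice([0, 1], size=10, p=probs))
```

Each run yields a random string of 0s and 1s, roughly half and half, which is all that “both 0 and 1 until measured” means operationally.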

This seemingly impossible blend of zero and one is called superposition. Qubits, governed by the principles of quantum mechanics, behave both as particles and waves depending on the conditions. This wave-like nature underpins phenomena like superposition and entanglement. Entangled particles are linked so that the measurement of one instantly determines the state of the other, no matter the distance between them. (To me and others, this reliance on human measurement to explain a theory is misplaced; see “Measurement Problem,” Wikipedia.) The instant changes supposedly caused by a measurement also seemingly violate the limitations of time and space and the speed of light. At first, this phenomenon—called quantum entanglement—was met with skepticism, famously dismissed by Albert Einstein as “spooky action at a distance.” Yet, like Galileo’s once-ridiculed theories, the fact of quantum entanglement has been repeatedly validated through rigorous experimentation, although no one really knows how it works.
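Entanglement, too, can be sketched in a few lines. Assuming the same amplitude picture as above, the two-qubit Bell state puts all of its probability on the outcomes 00 and 11, so the two measured bits always match, even though each bit on its own looks like a fair coin flip:

```python
# Sketch of the Bell state (|00> + |11>)/sqrt(2): perfectly correlated bits.
import numpy as np

rng = np.random.default_rng(1)

# Amplitudes for the four outcomes |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2            # -> [0.5, 0, 0, 0.5]

# Sample joint measurements; only '00' and '11' ever appear.
outcomes = rng.choice(4, size=10, p=probs)
print([format(o, "02b") for o in outcomes])
```

Note what the sketch does and does not show: the outcomes are perfectly correlated, but each is random, which is why (on the establishment view discussed next) no usable signal travels between the particles.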

The Speed of Light (SOL) is supposedly not violated by quantum entanglement because the outcomes are random and probabilistic, and supposedly nothing actually “travels” from one qubit or elementary particle to another. This is the establishment view, which treats the SOL as a limit in order to uphold the general theory of relativity. It has never been totally convincing to some scientists, who contend the SOL is not an inviolate limit. If these antiestablishment scientists are correct, then space travel at faster-than-light velocities might be possible. That would mean our physical isolation from other star systems could be overcome.

This is possible under the parallel universes theory, which also goes under the name of the Many-Worlds Interpretation (MWI). The idea was first set forth by Hugh Everett in 1957 in his dissertation “The Theory of the Universal Wavefunction.” Scientists arguing for the Many-Worlds Interpretation include Bryce DeWitt, David Deutsch, Max Tegmark and Sean Carroll. [I suggest you see recent Tegmark interview excerpts by Robert Kuhn, here, here and here, and another short video of Max Tegmark here. You should also watch a recent video interview of Sean Carroll by Neil deGrasse Tyson, which is included later in this article along with references to his two latest books. As an interesting aside, physicist David Deutsch (1953-present) speculates in his book The Beginning of Infinity (pg. 294) that some fiction, such as alternate history, could occur somewhere in the multiverse, as long as it is consistent with the laws of physics.]

Regardless of whether the SOL is being violated, quantum computers today routinely use quantum entanglement to link qubits, enabling them to function as an interconnected system. By leveraging the unique properties of quantum mechanics—superposition, entanglement, and interference—quantum computers can simultaneously explore an immense number of possible solutions, making computations that are impossible for classical computers.

Google’s Willow quantum chip demonstrated this capability by solving the Random Circuit Sampling (RCS) problem, a benchmark designed specifically to showcase the computational supremacy of quantum systems over classical ones. Willow’s ability to complete this test error-free marks a milestone not just in quantum computing but in our understanding of the potential of computers.

Random Circuit Sampling Benchmark Test

Here’s a simplified explanation of the RCS benchmark test. Imagine navigating an incredibly complex maze filled with twists, turns, and countless random paths. The goal of the RCS test is to “map” this maze by randomly exploring all of its paths and recording where each one leads.

In quantum computing the “maze” represents a random quantum circuit. A quantum circuit is like a recipe composed of gates—building blocks that dictate how qubits interact and evolve. In the RCS test, these gates are arranged randomly, creating a circuit of immense complexity. The “map” of this circuit is the output: a set of results generated based on probabilities defined by the random arrangement of gates. The test is about “sampling” these outputs multiple times to uncover the circuit’s overall behavior.

For classical, non-quantum computers to simulate this process, they must calculate every possible path through the maze, one at a time. The number of possible paths grows exponentially as the various alternatives combine. Even using today’s supercomputers, the calculation can require an unimaginable amount of time—potentially up to septillions of years.
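A brute-force classical simulation makes the exponential wall obvious. The toy sketch below stands in for real RCS (actual benchmarks use specific layered two-qubit gates, not one big random matrix): it must track 2^n complex amplitudes for n qubits, so memory and time explode as n grows.

```python
# Toy state-vector simulation of a "random circuit," then sampling outputs.
import numpy as np

def random_unitary(dim, rng):
    # Random unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(z)
    return q

def sample_random_circuit(n_qubits, n_layers=5, shots=8, seed=0):
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits              # 2**n amplitudes: the exponential wall
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                   # start in |00...0>
    for _ in range(n_layers):
        state = random_unitary(dim, rng) @ state
    probs = np.abs(state) ** 2
    return rng.choice(dim, size=shots, p=probs)  # the "sampling" in RCS

print(sample_random_circuit(4))      # 4 qubits is easy; ~50+ is hopeless
```

At 4 qubits the state vector has 16 entries; at 50 qubits it would have about 10^15, and at the scale of chips like Willow the bookkeeping exceeds any classical machine.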

The RCS test is designed to showcase quantum computers’ ability to tackle tasks that are practically impossible for classical systems. While the test itself doesn’t solve a “real-world” problem, it serves as a performance benchmark to demonstrate the mind-boggling computational power of quantum machines.

Until recently, this was all theoretical. Building a quantum chip capable of solving the RCS test without overwhelming errors had never been achieved. Noise—external interference from particles and vibrations—created too many errors for the results to be usable. However, in December 2024, Google announced that Willow had overcome the noise issue. By scaling up the number of qubits and implementing real-time error correction, Willow successfully completed the test.

This breakthrough means quantum computers may soon be able to leverage superposition and quantum interference to perform previously impossible computer tasks. By harnessing quantum entanglement, qubits can maintain correlations and work together as a unified system, enabling quantum computers to explore numerous paths through the maze simultaneously and sample outputs at seemingly impossible speeds.

These advancements make otherwise impossible computer tasks possible. Quantum computing holds the potential to revolutionize fields such as environmental modeling, chemistry, material science, medicine, cybersecurity (a very troubling thought), artificial intelligence, and even the creation of reality simulations. This adds some support for Elon Musk’s claim that there is a 99% chance we are already living in a simulated reality generated by an advanced alien civilization. The idea that we are all just computer-generated avatars living in a fake world seems like sensational media fiction to me, but large-scale quantum computers could soon bring ideas like that closer to reality.

Multiverse Metaphysics

The multiverse theory, which some argue is now much more viable due to Google’s quantum computer, has many challenging philosophical implications. Perhaps the most fascinating is the idea that our reality, our universe, is just one among countless others, potentially infinite in number. This challenges our perception of ourselves as unique and our universe as the only reality, suggesting instead that we are just one small part of an unfathomably vast and complex existence. In some ways this is even weirder than Musk’s belief we are living in a simulated reality—a kind of cosmic deepfake.

Picture a reality where every possible outcome of every quantum event plays out in a separate universe. Every decision you make, every path you don’t take, could be unfolding in parallel timelines, creating alternate versions of yourself. Multiverse metaphysics challenges our traditional understanding of identity and free will. If every choice creates a new branching timeline, does our sense of individuality and free-will still make sense? Or are we just one version of countless others diverging infinitely in a meaningless multiverse?

The multiverse also forces us to rethink our understanding of time. One model suggests that these parallel universes exist across vast stretches of space, each potentially originating from its own Big Bang. This implies that time may not be the linear flow we perceive but rather a multidimensional web, where past, present, and future coexist simultaneously. Personally, I wouldn’t be surprised if this turns out to explain phenomena like quantum entanglement—Einstein’s “spooky action at a distance.” Is this what Hartmut Neven is referring to when he speaks in his TED Talk about his quantum computer creating nearly perpetual motion time crystals? Supra at 4:55 of 11:39.

While these concepts might sound like science fiction, advancements in quantum computing, such as Google’s Willow, could provide the tools to explore them scientifically. Some physicists believe that anomalies in the cosmic microwave background radiation—remnants of the Big Bang—might offer indirect evidence of the multiverse. Could this also lend credence to Musk’s speculation that we’re living in a computer simulation? If that’s the case, does it mean we’re at the mercy of some cosmic programmer who might press the reset button at any moment? (For the record, I doubt very much the Musk-supported scenario—though the thought is undeniably unsettling.)

For more on the far-out philosophical implications of the quantum world and the multiverse, check out Neil deGrasse Tyson’s conversation with theoretical physicist Sean Carroll below. Also see Sean Carroll’s recent books, Quanta and Fields: The Biggest Ideas in the Universe (Dutton, 2024) and Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime (Dutton, 2019), and videos.

The multiverse theory has its share of critics, and skepticism remains widespread among scientists. Yet, even if concrete evidence for parallel universes eludes us, the mere exploration of these ideas expands the boundaries of our understanding of reality. Such inquiries challenge us to confront profound questions about existence and the nature of the universe itself. One thing is certain: quantum computers like Willow compel us to reevaluate our perceptions of what is real. Could Hartmut Neven or Sean Carroll be the heretical Galileo of our time?

As for me, I lean toward perspectives grounded in self-determination and objective truth. I find it hard to accept that every quantum event, such as the collapse of a probability wave during measurement, results in the creation of an entirely new universe. Likewise, I’m skeptical of the idea that each decision we make spawns a new universe, though I do believe we create our own reality within this universe. My belief aligns closely with the concept of free will. I’m also intrigued by the idea that multiple universes could exist simultaneously and that quantum particles might somehow traverse between them. The idea that quantum computers might leverage these connections across universes to perform their calculations is consistent with these musings, suggesting that the interplay between quantum mechanics and multiverses may offer profound insights into the fabric of reality.

But can we communicate and receive intelligent data from other universes? Can we engineer practical applications that use parallel universes? Hartmut Neven stated in his TED Talk that the quantum computer his team at Google created can be thought of as creating tiny, traversable wormholes between universes. Supra at 4:20 of 11:39. Quantum computers might not create new universes, but they could hypothetically create bridges between them. Perhaps interaction with other universes is what Google’s Willow is now doing.

This idea challenges the traditional worldview of mainstream scientists, which is centered on a single universe and the foundational power of measurements to determine outcomes. (As mentioned, this reliance on the seemingly magical power of measurement or human observation to explain quantum behavior comes across as an irrational shortcut to me, and many others, a product of the early Twentieth Century worldview.) Whatever the explanation, it is clear that Willow now operates successfully, defying conventional expectations and hinting at possibilities that push the boundaries of our current understanding.

According to Google, now that it has proof of concept of what a few chips can do, it will start construction of large stacks of super-cooled quantum computers. What happens when it uses the power of a million qubits? Google’s goal is to begin releasing practical applications by the end of this decade, perhaps sooner with AI’s help. Its closest competitors in this field, IBM, Amazon, Microsoft and others, might not be far behind. Quantum computation is yet another dramatic agent of change. The future is moving fast.

Dark Side of Quantum Computers

Unfortunately, the future of quantum computers also has a dark side, much like AI. Privacy will be vulnerable as new cybersecurity attack weapons are made possible. All non-quantum encryption could eventually be cracked, leaving communications and financial systems vulnerable, including Bitcoin. China is well aware of the weaponization potential of both AI and quantum computing. It has a history of trade-secret theft from U.S. companies and is certainly now focused on stealing Google’s latest breakthrough to boost its own impressive efforts. Just before Google’s December 9, 2024, announcement of the Willow breakthrough, China claimed its latest quantum chip, the Tianyan-504, had the same capacities as Google’s Willow. I suspect that influenced the timing of Google’s announcement.

The U.S. Department of Defense, NSA and big-tech companies are well aware of the new threats that quantum computing creates. Consider for instance the U.S. Department of Defense unclassified Report to Congress, Military and Security Developments Involving the People’s Republic of China dated 12/18/24:

The PLA is pursuing next-generation combat capabilities based on its vision of future conflict, which it calls “intelligentized warfare,” defined by the expanded use of AI, quantum computing, big data, and other advanced technologies at every level of warfare. . . .

Judging from the build out of the PRC’s quantum communication infrastructure, the PLA may leverage integrated quantum networks and quantum key distribution to reinforce command, control, and communications systems. . . .

In 2021, Beijing funded the China Brain Plan, a major research project aimed at using brain science to develop new biotechnology and AI applications. That year, the PRC designed and fabricated a quantum computer capable of outperforming a classical high-performance computer for a specific problem. The PRC was domestically developing specialized refrigerators needed for quantum computing research in an effort to end reliance on international components. In 2017, the PRC spent over $1 billion on a national quantum lab which will become the world’s largest quantum research facility when completed.

The 2025 National Defense Authorization Act, passed in December 2024, leaves no doubt that the incoming Trump Administration will continue, if not accelerate, current DOD efforts in quantum computing. See, e.g., Section 243 of the Act, aka the Quantum Scaling Initiative.

No one knows how much Elon Musk will influence such policies, but we do know he understands the impact of Google’s announcement and publicly praised Google’s CEO, Sundar Pichai, for the achievement. Pichai replied to Musk on X: “We should do a quantum cluster in space with Starship one day 🙂.” (Note that China has had a quantum satellite in space since 2016 to study secure communications and in October 2024 announced plans for several more in 2025. China to launch new quantum communications satellites in 2025, 10/08/24). Musk immediately replied affirmatively on X to Sundar and even upped the ante by saying:

That will probably happen. Any self-respecting civilization should at least reach Kardashev Type II. In my opinion, we are currently only at <5% of Type I. To get to ~30%, we would need to place solar panels in all desert or highly arid regions.

Unpacking the rest of Musk’s quote would require another article; let’s just say the Kardashev scale has to do with technological progress and level of energy production. Type II refers to a civilization that harnesses the full energy of its star, for example through a device such as the Dyson sphere shown below.

Conclusion

I decided you might enjoy my delegating the final words to not-yet-quantum-powered AIs from Google. Perhaps in another universe, you’d hear my own thoughts wrapping this up, but for now, count yourself lucky to be conscious in this one. My AI podcasters bring humor and insight, though they’re far from Godlike—so I still need to guide and verify them. What’s new, however, is the interactivity feature Google recently added to the podcasters. In this session, you’ll hear wacky versions of me interrupt near the end to ask questions, and the AIs’ spontaneous responses. It’s fascinating to imagine what quantum-powered AIs might say or do in the future. Click here or on the graphic below to go to the EDRM podcast.

Ralph Losey Copyright 2024. All Rights Reserved.


Prosecutors and AI: Navigating Justice in the Age of Algorithms

August 30, 2024

Ralph Losey. Published August 30, 2024.

AI has the potential to transform the criminal justice system through its ability to process vast datasets, recognize patterns, and predict outcomes. However, this potential comes with a profound responsibility: ensuring that AI is employed in ways that uphold basic human principles of justice. This article will focus on how AI can assist prosecutors in fulfilling their duty to represent the people fairly and equitably. It will highlight the practical benefits of AI in criminal law, providing specific examples of its application. The underlying theme emphasizes the necessity of human oversight to prevent the misuse of AI and to ensure that justice remains a human ideal, not an artificial construct.

The integration of AI into criminal prosecutions must be aligned with the ethical and legal obligations of prosecutors as outlined, for instance, by the American Bar Association’s Criminal Justice Standards for the Prosecution Function (ABA, 4th ed. 2017) (hereinafter “ABA Standards”). The ABA Standards emphasize the prosecutor’s duty to seek justice, maintain integrity, and act with transparency and fairness in all aspects of the prosecution function. This article will not cover the indirectly related topic of AI evidence. See Gless, Lederer, Weigend, AI-Based Evidence in Criminal Trials? (William & Mary Law School, Winter 2024). It also will not cover criminal defense lawyer issues, though a follow-up on that may come soon.

The Promise of AI in Criminal Prosecutions

“The primary duty of the prosecutor is to seek justice within the bounds of the law, not merely to convict.” ABA Standard 3-1.2(b). When AI is used responsibly, it can assist prosecutors in fulfilling this duty by providing new tools. AI-powered tools can enhance evidence analysis, case management, and decision-making, all while maintaining the integrity and fairness expected of the prosecution function. Prosecutors with AI can better manage the vast amounts of data in modern investigations, identify patterns that might escape human detection, and make more informed decisions. AI is no magic genie, but when used properly, it can be a very powerful tool.

In March 2018, the National Institute of Justice sponsored a workshop of prosecutors from around the country that identified data and technology challenges as a high-priority need for prosecutors. According to the Rand Corporation’s report on the conference, entitled Prosecutor Priorities, Challenges, and Solutions (“Rand Report“), the key findings of the prestigious group were: (1) difficulties recruiting, training, managing, and retaining staff; (2) demanding and time-consuming tasks for identifying, tracking, storing, and disclosing officer misconduct and discipline issues; and (3) inadequate or inconsistent collection of data and other information shared among agencies . . . as well as by emerging digital and forensic technologies. The full Rand Report PDF may be downloaded here. The opening summary states:

Prosecutors are expected to deliver fair and legitimate justice in their decision making while balancing aspects of budgets and resources, working with increasingly larger volumes of digital and electronic evidence that have developed from technological advancements (such as social media platforms), partnering with communities and other entities, and being held accountable for their actions and differing litigation strategies. . . .

Moreover, the increasing volume of potentially relevant digital information, video footage, and other information from technological devices and tools can significantly add to the amount of time needed to sufficiently examine and investigate the evidence in order to make decisions about whether to drop or pursue a case. This can be especially challenging because the staffing and other resources in prosecutors’ offices have not necessarily kept pace with these increasing demands.

Although the amount of digital information that prosecutors must sometimes sift through can be managed, in part, through innovative technological tools, such as data mining and data reduction solutions (Al Fahdi, Clarke, and Furnell, 2013; Quick and Choo, 2014), there are often steep learning curves or high costs that make it unrealistic for an office to implement these technologies.

Rand Report, pages 1-3.

Also see the excellent Duke Law sponsored one hour panel discussion video, The Equitable, the Ethical and the Technical: Artificial Intelligence’s Role in The U.S. Criminal Justice System for a comprehensive discussion of issues as of November 2021, just before the development and release of the new generative models of AI a year later.

e-Discovery, Evidence Analysis and Case Management

As the Rand Report confirms, the sheer volume of evidence in complex criminal investigations is a significant challenge for prosecutors. Also see: Tinder Date Murder Case Highlights the Increasing Complexity of eDiscovery in Criminal Investigations: eDiscovery Trends (e-Discovery Daily, 6/15/18). AI can analyze vast datasets—such as emails, text messages, and internet activity logs—to identify patterns indicative of criminal activity, but the software can be expensive and requires trained technology experts. AI algorithms can recognize specific types of evidence, such as images, sentiments, or key concepts relevant in many cases. They can help prosecutors identify patterns and connections within the evidence that might not be immediately apparent to human investigators. This capability can significantly reduce the time needed to search and study evidence, enabling prosecutors to build stronger cases more efficiently.

But, as the Rand Report also makes clear, prosecutors need adequate funding and trained personnel to purchase and use these new tools. Fortunately, generative AI is substantially less expensive than the older models of AI and easier to use. Still, issues of fairness and guardrails against discrimination in their use remain significant problems. There are also very significant privacy issues inherent in predictive policing. David Ly, Predictive Policing: Balancing Innovation and Ethics (The Fast Mode, 8/15/24); Arjun Bhatnagar, The Threat of Predictive Policing to Data Privacy and Personal Liberty (Dark Reading, 12/27/22).

AI evidence search and classification tools such as predictive coding, which are well established in civil litigation, should soon be more widely used in criminal law. The high costs involved are now plummeting and should soon be affordable to most prosecutors. These tools can drastically reduce the time needed to search and analyze large volumes of complex data. Still, budgets to hire trained personnel to operate the new tools must be expanded. AI can complement, but not entirely replace, human review in what I call a hybrid multimodal process. Ralph Losey, Chat GPT Helps Explains My Active Machine Learning Method of Evidence Retrieval (e-Discovery Team, 1/28/23). Human experts on the prosecutor’s team should always be involved in the evidence review to ensure that no critical information is missed.
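For readers who have not seen predictive coding up close, here is a minimal sketch of the core idea, using scikit-learn and a few invented documents (everything here is hypothetical, and real workflows iterate this loop with continuous human coding):

```python
# Sketch of predictive coding: learn from attorney-coded examples,
# then rank the uncoded documents by predicted relevance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

coded_docs = [
    "wire transfer to the offshore account tonight",   # relevant
    "lunch menu for friday",                           # not relevant
    "shred the invoices before the auditors arrive",   # relevant
    "quarterly hr newsletter",                         # not relevant
]
labels = [1, 0, 1, 0]        # 1 = relevant, as coded by a human reviewer

uncoded_docs = ["move the funds before the audit", "team picnic schedule"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(coded_docs), labels)

# Highest-scoring documents go to human reviewers first.
scores = model.predict_proba(vectorizer.transform(uncoded_docs))[:, 1]
for doc, score in sorted(zip(uncoded_docs, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {doc}")
```

The ranking, not the raw prediction, is the point: it lets the human team spend its limited review hours on the documents most likely to matter.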

Transparency and accountability are also crucial in using AI in discovery. Defense attorneys should be provided with a detailed explanation of how these tools were used. This is essential to maintaining the fairness and integrity of the discovery process, ensuring that both sides have equal access to evidence and can challenge the AI’s conclusions if necessary.

AI also plays a crucial role in case management. AI-powered tools can help prosecutors organize and prioritize cases based on the severity of the charges, the availability of evidence, and the likelihood of a successful prosecution. These tools can assist in tracking deadlines, managing court calendars, and ensuring that all necessary court filings are completed on time. By streamlining these administrative tasks, AI allows prosecutors and their assistants to concentrate on the substantive aspects of their work—pursuing justice. It also helps them deal with the omnipresent staff shortage issues.

Bias Detection and Mitigation

Bias in prosecutorial decision-making—whether conscious or unconscious—remains a critical concern. ABA Standards state:

The prosecutor should not manifest or exercise, by words or conduct, bias or prejudice based upon race, sex, religion, national origin, disability, age, sexual orientation, gender identity, or socioeconomic status. A prosecutor should not use other improper considerations, such as partisan or political or personal considerations, in exercising prosecutorial discretion. A prosecutor should strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice when credibly informed that it exists within the scope of the prosecutor’s authority.

ABA Standards 3-1.6(a).

AI can play a crucial role in detecting and mitigating such biases, helping prosecutors adhere to the mandate that they “strive to eliminate implicit biases, and act to mitigate any improper bias or prejudice” within their scope of authority.

AI systems also offer the potential to detect and mitigate unconscious human bias in prosecutorial decision-making. AI can analyze past prosecutorial decisions to identify patterns of bias that may not be immediately apparent to human observers. By flagging these patterns, AI can help prosecutors become aware of biases in their office and take corrective action.
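In its simplest form, this kind of audit is just disparity analysis over historical decisions. The sketch below uses pandas and invented data to compare charging rates across a protected attribute; a real audit would control for case characteristics and apply proper statistical tests, so treat this only as an illustration of the flagging step:

```python
# Toy disparity check over past charging decisions (hypothetical data).
import pandas as pd

cases = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "charged": [ 1,   0,   1,   0,   1,   1,   1,   0 ],
})

rates = cases.groupby("group")["charged"].mean()
print(rates)
print("gap:", abs(rates["A"] - rates["B"]))   # large gaps get human review
```

A flagged gap is a question, not an answer: it tells the office where to look, and humans must then determine whether legitimate case factors or bias explain it.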

Prosecutors should use care in the selection and use of AI systems. If they are trained on biased data, they can perpetuate and even amplify existing disparities in the criminal justice system. For instance, an AI algorithm used to predict recidivism, if trained on data reflecting historical biases—such as the over-policing of minority communities—may disproportionately disadvantage these communities. AI systems used in criminal prosecutions should be designed to avoid this bias.

The software purchased by a prosecutor’s office should be chosen carefully, ideally with outside expert advice, and rigorously tested for bias and other errors before deployment. Alikhademi, K., Drobina, E., Prioleau, D., et al., A review of predictive policing from the perspective of fairness, Artif Intell Law 30, 1–17 (2022) (“[T]he pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.”). There should also be outside community involvement. Artificial Intelligence in Predictive Policing Issue Brief (NAACP, 2/15/24) (NAACP’s recommendations: independent oversight; transparency and accountability; community engagement; ban on use of biased data; new laws and regulations).

Prosecutors should not fall into a trap of overcompensating based on statistical analysis alone. AI is a limited tool that, like humans, makes errors of its own. Its use should be tempered by prosecutor experience, independence, intuition and human values. When we use AI in any context or field it should be a hybrid relationship where humans remain in charge. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24) (experts recommend two basic ways to use AI, both hybrid, where the unique powers of human intuition are added to those of AI). AI can also help prosecutors make objective decisions on charging and sentencing by providing statistically generated recommendations, again with the same cautionary advice on overreliance.

Sentencing Recommendations and Predictive Analytics

The use of AI in predictive analytics for sentencing is among the most controversial applications in criminal law. AI systems can be trained to analyze data from past cases and make predictions about the likelihood of a defendant reoffending or suggest appropriate sentences for a given crime. These recommendations can then inform the decisions of judges and prosecutors.
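To make the mechanics concrete, here is a heavily simplified sketch of the kind of model that sits behind such predictions: a logistic regression trained on synthetic case features invented for illustration. Commercial risk tools are far more elaborate, and every bias caveat discussed in this article applies to them in full:

```python
# Toy recidivism-risk model on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical features: [prior_convictions_score, offense_severity_score]
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

new_case = np.array([[1.2, 0.3]])           # a hypothetical defendant
risk = model.predict_proba(new_case)[0, 1]
print(f"estimated reoffense probability: {risk:.2f}")  # input, not a verdict
```

Note that the model can only restate patterns in its training data; if those patterns encode biased enforcement, the seemingly objective score inherits the bias.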

Predictive analytics has the potential to bring greater consistency and objectivity to sentencing. By basing recommendations on data rather than individual biases or instincts, AI can help reduce disparities and ensure similar cases are treated consistently. This contributes to a more equitable criminal justice system.

While AI can bring greater consistency to sentencing, prosecutors must ensure that AI-generated recommendations comply with their “heightened duty of candor” and the overarching obligation to ensure that justice is administered equitably.

In light of the prosecutor’s public responsibilities, broad authority and discretion, the prosecutor has a heightened duty of candor to the courts and in fulfilling other professional obligations.

ABA Standard 3-1.4(a)

The use of AI in sentencing raises important ethical questions. Should AI make predictions about a person’s future behavior based on their past? What if the data used to train the AI is biased or incomplete? How can we ensure that AI-generated recommendations are not seen as infallible but are subject to critical scrutiny by human decision-makers?

These concerns highlight the need for caution. While AI can provide valuable insights and recommendations, it is ultimately the responsibility of human prosecutors and judges to make the final decisions. AI should be a tool to assist in the pursuit of justice, not a replacement for human judgment.

Predictive Policing

Predictive policing uses algorithms to analyze massive amounts of information in order to predict and help prevent potential future crimes. Tim Lau, Predictive Policing Explained (Brennan Center for Justice, 11/17/21). This is an area where old AI (before the advent of generative AI) has been embraced by many police departments worldwide, including in the E.U. countries, but also in China and other repressive regimes. Many prosecutors in the U.S. endorse it, but it is quite controversial, and hopefully it will be improved by new models of generative AI. The DA’s office wants to use predictive analytics software to direct city resources to ‘places that drive crime.’ Will it work? (The Lens, 11/15/23). In theory, by analyzing data on past crimes—such as the time, location, and nature of the offenses—AI algorithms can predict where and when future crimes are likely to occur. The majority of reports say this already works. But what of the minority reports? They contest the accuracy of these predictions using old AI models. Some say they are terrible at it. Sankin and Mattu, Predictive Policing Software Terrible At Predicting Crimes (Wired, 10/2/23). There is widespread concern about growing misuse, especially in countries that have politicized prosecutorial systems.

Still, in theory this kind of statistical analysis should be able to help honest law enforcement agencies allocate resources more effectively, enabling police to prevent crime before it happens. See generally, Navigating the Future of Policing: Artificial Intelligence (AI) Use, Pitfalls, and Considerations for Executives (Police Chief Magazine, 4/3/24).
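Underneath the marketing, the basic hotspot technique is often little more than spatial binning of historical incidents. This toy sketch, with made-up coordinates, shows that core step, and also why the critics worry: the “prediction” is a mirror of whatever the historical data contains, including any over-policing baked into it:

```python
# Toy hotspot map: bin past incident locations into a grid, rank the cells.
import numpy as np

rng = np.random.default_rng(7)
incidents = rng.uniform(0, 10, size=(500, 2))    # hypothetical (x, y) data

grid, _, _ = np.histogram2d(incidents[:, 0], incidents[:, 1],
                            bins=5, range=[[0, 10], [0, 10]])

# Cells with the most past incidents become "predicted" future hotspots.
order = np.argsort(grid, axis=None)[::-1]
rows, cols = np.unravel_index(order[:3], grid.shape)
print("top cells (row, col):", list(zip(rows, cols)))
```

Patrols sent to the top cells generate new incident reports in those same cells, which is exactly the feedback loop critics of these systems describe.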

All prosecutors, indeed all citizens, want to be smart when it comes to crime; we all want “more police officers on the street, deployed more effectively. They will not just react to crime, but prevent it.” Kamala Harris (Author) and Joan Hamilton, Smart on Crime: A Career Prosecutor’s Plan to Make Us Safer (Chronicle Books, 2010).

The Los Angeles Police Department (LAPD) was one of the first to use predictive policing software, known as PredPol (now Geolitica). It identified areas of the city at high risk for certain types of crime, such as burglaries or auto thefts. The software analyzed data on past crimes and generated “heat maps” indicating where crimes were most likely to occur in the future. These maps guided patrols and other law enforcement activities. PredPol proved to be very controversial. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them (The Markup, 12/2/21). Its use was discontinued by the LAPD in 2020, but other companies claim to have corrected the biases and errors in the programs. See Levinson-Waldman and Dwyer, LAPD Documents Show What One Social Media Surveillance Firm Promises Police (Brennan Center for Justice, 11/17/21).

Another type of predictive policing software was adopted by the NYPD called Patternizr. According to the Wikipedia article on predictive policing:

The goal of the Patternizr was to help aid police officers in identifying commonalities in crimes committed by the same offenders or same group of offenders. With the help of the Patternizr, officers are able to save time and be more efficient as the program generates the possible “pattern” of different crimes. The officer then has to manually search through the possible patterns to see if the generated crimes are related to the current suspect. If the crimes do match, the officer will launch a deeper investigation into the pattern crimes.

See Molly Griffard, A Bias-Free Predictive Policing Tool?: An Evaluation of the Nypd’s Patternizr (Fordham Urban Law Journal, December 2019). 

While predictive policing has been credited with reducing crime in some areas, it has also been criticized for potentially reinforcing existing biases. If the data used to train the AI reflects a history of over-policing in certain minority communities, the algorithm may predict those communities are at higher risk for future crimes, leading to even more policing in those areas. This, in turn, can perpetuate a cycle of discrimination and injustice. See e.g. Taryn Bates, Technology and Culture: How Predictive Policing Harmfully Profiles Marginalized People Groups (Vol. 6 No. 1 (2024): California Sociology Forum).

To address these concerns, predictive policing algorithms must be designed with fairness in mind and subject to rigorous oversight. David Stephens, Forecasting Justice: The promise of AI-enhanced law enforcement (Police1, 1/27/24). I endorse the conclusions Chief Deputy David Stephens reached in his Forecasting Justice article:

Projecting into the next decade, AI will be an integral part of law enforcement — from crime prediction and real-time decision aids to postincident analysis. These technologies could lead to smarter patrolling, fewer unnecessary confrontations and overall enhanced community safety. However, this vision can only materialize with rigorous oversight, consistent retraining and an undiluted focus on civil liberties and ethics. Law enforcement’s AI-driven future must be shaped by a symbiotic relationship where technology amplifies human judgment rather than replacing it. The future promises transformative advances, but it’s imperative that the compass of integrity guide this journey.

The latest versions of predictive policing technology will certainly use new generative AI enhanced analysis. Law enforcement should be very careful in the purchase and implementation of these new technologies. They should seek the input of outside experts and carefully examine vendor representations. That should include greater vendor transparency, such as disclosure of the data used to train these systems, to confirm that it is representative and unbiased. Proper methods of implementation of the AI tools should also be carefully considered. In my view, and that of others, this means adopting a hybrid approach that “amplifies human judgment rather than replacing it.”

Sentiment Analysis in Jury Selection

Another trending application of AI in criminal law is the use of sentiment analysis in jury selection. Sentiment analysis is a type of AI that can analyze text or speech to determine the underlying emotions or attitudes of the speaker. In jury selection, sentiment analysis can analyze potential jurors’ public records, especially social media posts, as well as their responses during voir dire—the process of questioning jurors to assess their suitability for a case. It can also monitor unfair questions of potential jurors by prosecutors and defense lawyers. See Jo Ellen Nott, Natural Language Processing Software Can Identify Biased Jury Selection, Has Potential to Be Used in Real Time During Voir Dire (Criminal Legal News, December 2023). Also see AI and the Future of Jury Trials (CLM, 10/18/23).

For example, an AI-powered sentiment analysis tool could analyze the language used by potential jurors to identify signs of bias or prejudice that might not be immediately apparent to human observers. This information could then be used by prosecutors and defense attorneys to make more informed decisions about which jurors to strike or retain.
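As a sketch of how such a tool might work, the snippet below runs an off-the-shelf sentiment model from the Hugging Face transformers library over hypothetical voir dire answers. A general-purpose sentiment model is only a crude proxy for the specialized bias-detection systems the articles above describe, so this illustrates the plumbing, not a production tool:

```python
# Sketch: score hypothetical juror answers with a general sentiment model.
# Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # default pretrained model

answers = [
    "I believe everyone deserves a fair hearing.",
    "People like the defendant are always guilty of something.",
]
for answer, result in zip(answers, classifier(answers)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {answer}")
```

Any flags a tool like this raises should be treated as prompts for follow-up questioning, not as grounds by themselves to strike a juror.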

While sentiment analysis has the potential to improve jury selection fairness, it also raises ethical questions. Should AI influence juror selection, given the potential for errors or biases in the analysis? How do we ensure AI-generated insights are used to promote justice, rather than manipulate the selection process?

These questions underscore the need for careful consideration and oversight in using AI in jury selection. AI should assist human decision-makers, not substitute their judgment.

AI in Plea Bargaining and Sentencing

AI can also play a transformative role in plea bargaining and sentencing decisions. Plea bargaining is a critical component of the criminal justice system, with most cases being resolved through negotiated pleas rather than going to trial. AI can assist prosecutors in evaluating the strength of their case, the likelihood of securing a conviction, and the appropriate terms for a plea agreement. See: Justice Innovation Lab, Critiquing The ABA Plea Bargaining Principles Report (Medium, 2/1/24); Justice Innovation Lab, Artificial Intelligence In Criminal Court Won’t Be Precogs (Medium, 10/31/23) (article concludes with “Guidelines For Algorithms and Artificial Intelligence In The Criminal Justice System“).

For example, AI algorithms can analyze historical data from similar cases to provide prosecutors with insights into the typical outcomes of plea negotiations, considering factors such as the nature of the crime, the defendant’s criminal history, and the available evidence. This can help prosecutors make more informed decisions on plea deal offers.
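The “insights into typical outcomes” piece can be as simple as a structured lookup over past dispositions. This sketch uses pandas and invented records to summarize outcomes for comparable cases, which provides context for the negotiation, not a recommendation engine:

```python
# Toy historical lookup: summarize past sentences for comparable cases.
import pandas as pd

past = pd.DataFrame({
    "offense":         ["burglary", "burglary", "burglary", "fraud", "fraud"],
    "priors":          [0, 2, 0, 1, 0],
    "sentence_months": [6, 18, 8, 12, 4],
})

similar = past[(past["offense"] == "burglary") & (past["priors"] == 0)]
print(similar["sentence_months"].describe())  # typical range for this profile
```

Real systems would draw on thousands of cases and many more factors, and, as discussed below, their historical data can embed exactly the disparities the prosecutor is trying to avoid.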

Moreover, AI can assist in making sentencing recommendations that are more consistent and equitable. Sentencing disparities have long been a concern in the criminal justice system, with studies showing that factors such as race, gender, and socioeconomic status can influence sentencing outcomes. AI has the potential to reduce these disparities by providing sentencing recommendations based on objective criteria rather than subjective judgment. Keith Brannon, AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted (Tulane Univ., 1/23/24); Kieran Newcomb, The Place of Artificial Intelligence in Sentencing Decisions (Univ. NH, Spring 2024).

For instance, an AI system could analyze data from thousands of past cases to identify typical sentences imposed for specific crimes, accounting for relevant factors like the severity of the offense and the defendant’s criminal record. This information could then be used to inform sentencing decisions, ensuring that similar cases are treated consistently and fairly.

However, using AI in plea bargaining and sentencing also raises significant ethical considerations. The primary concern is the risk of AI perpetuating or exacerbating existing biases in the criminal justice system. If the data used to train AI systems reflects historical biases—such as harsher sentences for minority defendants—AI’s recommendations may inadvertently reinforce those biases.

To address this concern, AI systems used in plea bargaining and sentencing must be designed with fairness and transparency in mind. This includes ensuring that the data used to train these systems is representative and free from bias and providing clear explanations of how the AI’s recommendations were generated. Moreover, human prosecutors and judges must retain the final authority in making plea and sentencing decisions, using AI as a tool to inform their judgment rather than a substitute for it. It is important that AI systems be chosen and used very carefully in part because “the prosecutor should avoid an appearance of impropriety in performing the prosecution function.” ABA Standard 3-1.2(c)

Ethical Implications of AI in Criminal Prosecutions

While the potential benefits of AI in criminal law are significant, it is equally important to consider the ethical implications of integrating AI into the criminal justice system. AI, by its very nature, raises questions about accountability, transparency, and the potential for misuse—questions that must be carefully addressed to ensure AI is used in ways that advance, not hinder, the cause of justice.

As we integrate AI into criminal prosecutions, it is essential that we do so with a commitment to the principles articulated in the ABA’s Criminal Justice Standards. By aligning AI’s capabilities with these ethical guidelines, we can harness technology to advance justice while upholding the prosecutor’s duty to act with integrity, fairness, and transparency.

Transparency and Accountability

One of the most pressing ethical concerns is the issue of transparency, which we have mentioned previously. AI algorithms are often referred to as “black boxes” because their decision-making processes can be difficult to understand, even for those who design and operate them. This lack of transparency can be particularly problematic in criminal prosecutions, where the stakes are incredibly high, and the consequences of a wrong decision can be severe. A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (The Conversation, 7/26/23) (discusses UK systems).

For example, if an AI system is used to predict the likelihood of a defendant reoffending, it is crucial that the defendant, their attorney, and the judge understand how that prediction was made. Without transparency, challenging the AI’s conclusions becomes difficult, raising concerns about due process and the right to a fair trial.

To address this issue, AI systems used in criminal prosecutions must be designed to be as transparent as possible. This includes providing clear explanations of how AI’s decisions were made and ensuring that the underlying data and algorithms are accessible for review and scrutiny. Federal legislation that would require this, the Justice in Forensic Algorithms Act, has been pending for years. New bill would let defendants inspect algorithms used against them in court (The Verge, 2/15/24) (requires disclosure of source code). Moreover, the legal community must advocate for developing AI systems that prioritize explainability and interpretability, ensuring that the technology is effective, accountable, and understandable.

Fairness and Bias

Another ethical concern is, as mentioned, the potential for AI to be used in ways that exacerbate existing inequalities in the criminal justice system. For example, there is a risk that AI could justify more aggressive policing or harsher sentencing in communities already disproportionately targeted by law enforcement. This is why AI systems must be designed with fairness in mind and their use subject to rigorous oversight. Look beyond vendor marketing claims to verify with hard facts and independent judgments.

Ensuring fairness requires that AI systems are trained on representative and unbiased data. It also necessitates regular audits of AI systems to detect and mitigate any biases that may arise. Additionally, AI should not be the sole determinant in any criminal justice decision-making process; human oversight is essential to balance AI’s recommendations with broader considerations of justice and equity. For instance, the NYPD represents that its widespread use of AI driven facial recognition technology in criminal investigations “does not establish probable cause to arrest or obtain a search warrant, but serves as a lead for additional investigative steps.” NYPD Questions and Answers – Facial Recognition, and see the NYPD official patrol guide dated 3/12/20.

Human Judgment and Ethical Responsibility

The deployment of AI in criminal prosecutions also raises important questions about the role of human judgment in the justice system. While AI can provide valuable insights and recommendations, it is ultimately human prosecutors, judges, and juries who must make the final decisions. This is because justice is not just about applying rules and algorithms—it is about understanding the complexities of human behavior, weighing competing interests, and making moral judgments.

AI, no matter how advanced, cannot replicate the full range of human judgment, and it should not be expected to do so. Instead, AI should be seen as a tool to assist human decision-makers, providing them with additional information and insights that can help them make more informed decisions. At the same time, we must be vigilant in ensuring that AI does not become a crutch or a substitute for careful human deliberation, judgment and equity.

Conclusion

The integration of AI into criminal prosecutions holds the promise of advancing the cause of justice in profound and meaningful ways. To do so we must always take care that applications of AI follow the traditional principles stated in the Criminal Justice Standards for the Prosecution Function and other guides of professional conduct. By aligning AI’s capabilities with ethical guidelines, we can harness technology in a manner that advances the prosecutor’s duty to act with integrity, fairness, and transparency.

With these cautions in mind, we should boldly embrace the opportunities that AI offers. Let us use AI as a tool to enhance, not replace, human judgment. And let us work together—lawyers, technologists, and policymakers—to ensure that the use of AI in criminal prosecutions advances the cause of justice for all.

Ralph Losey Copyright 2024 — All Rights Reserved


Navigating the AI Frontier: Wharton Professor’s Guide to Mastering Generative AI

August 21, 2024

Ralph Losey. Published August 21, 2024

Ethan Mollick, a Professor at Wharton who teaches entrepreneurship and innovation, has written an insightful book on generative AI, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024). This article aims to entice you to read it by employing unpredictable language and vivid illustrations. The piece takes several zigzagging, idiosyncratic detours, including a discussion of a fake chess-playing computer that fooled both Benjamin Franklin and Napoleon, alongside quotes from Edgar Allan Poe on AI and Charles Babbage. Professor Mollick is a witty, hands-on AI enthusiast whose new book deserves not just a human-in-the-loop touch but a full embrace. While many of his insights stray beyond business topics, they are vital for successfully navigating the fast-changing world of AI. His work brought me genuine delight—a feeling that, for now, remains uniquely human. I hope to evoke the same in you.

Ethan Mollick: The Man Behind the Ideas

Ethan Mollick

Ethan Mollick is not your average Associate Professor at The Wharton School. He is an expert in entrepreneurship and innovation who teaches elite Wharton management students to become more effective leaders and innovators. But he is much more than his LinkedIn profile, where, by the way, I read his posts every day. He is a pioneer in using games and simulations to teach. This turns out to be a good background for picking up AI quickly, which is what he has done. Recall that I mentioned Ethan Mollick previously in my article on AI and education, Back To School: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing).’ Both he and Sal Khan were selected by OpenAI to be early users of ChatGPT to see what it could do for education. A good choice, as Professor Mollick’s book, Co-Intelligence: Living and Working with AI, shows. So too does his stream of good articles on Substack, One Useful Thing. If you are lucky, maybe you can even get into one of his classes at Wharton. Hopefully they will go online someday.

My research on Ethan Mollick uncovered that the game-playing professor has recently made time to create a Custom GPT called Complexifier: making easy, hard. Yup, it does the opposite of what most generative AI apps are designed to do. I asked this strange little alien GPT why anyone would want to make something easy, hard. Here’s the first paragraph of Complexifier’s response:

Ah, why indeed? The thrill lies in the dance of complexity, the joy in the jigsaw puzzle of a thousand interlocking pieces. Making something simple is like solving a crossword with only one word—it’s over before it begins. But to make the straightforward serpentine? That’s where the magic is. It’s about savoring each twist and turn, where every step forward is a question mark and every answer opens ten more questions.

The answer goes on and on and creates an elaborate and convoluted flowchart for performance of a simple task. Like Ethan Mollick, I spend much of my thinking, teaching, and writing trying to make the hard easy, to somehow explain the insanely convoluted. But the opposite is also attractive, in an evil attorney academic kind of way. The Complexifier GPT hooked me into a kind of Naomi Klein trip into a Doppelganger mirror world to see the dark Rube Goldberg twin. So I continued on.

Ralph: Why did Ethan Mollick bother to create this custom GPT?

Complexifier: Ah, the mystery behind the creation of this delightful complexity machine! Why, indeed, would Ethan Mollick, a professor known for his innovative thinking and exploration of how technology can transform learning and work, create such a custom GPT? Allow me to speculate, in an appropriately convoluted manner:

  1. To Embrace the Absurd: Life can be far too serious. By creating a GPT that revels in unnecessary complexity, he might be encouraging us to appreciate the absurdity of over-complicating the mundane.
  2. To Challenge the Intellect: Complex tasks require deeper thinking, and perhaps this GPT was designed to push the boundaries of problem-solving by making us work through intricacies we’d typically avoid.
  3. To Encourage Creativity: The labyrinth of complexity is fertile ground for creative solutions. This custom GPT may serve as a playground where unconventional ideas are not just allowed but celebrated.
  4. To Highlight the Journey Over the Destination: In academia and beyond, the journey often teaches more than the destination itself. By complicating the simple, we’re forced to engage more fully with the process, learning along the way.
  5. To Add Humor to Learning: Complexity, when presented with a wink and a nod, can be incredibly entertaining. This GPT likely adds a layer of humor to the learning process, making the pursuit of knowledge more enjoyable.
  6. To Explore AI’s Potential: Creating a GPT that thrives on complexity could be an experiment in how far we can push AI’s capabilities, demonstrating that even the most trivial tasks can be transformed into elaborate intellectual exercises.

In the end, why settle for simple when the convoluted is so much more fun?

Ralph: Which one of these six reasons do you think is the most likely? And why, pray tell?

Complexifier: Ah, a question within the question! If I had to place my bet on the most likely reason, I’d lean towards “To Add Humor to Learning.”

Why, you ask? Let me weave a rationale for you:

Ethan Mollick is known for his innovative approach to teaching and his playful engagement with the serious world of academia. By introducing humor into learning, he’s likely aiming to make complex concepts more approachable and enjoyable. Humor has a unique way of disarming us, lowering our defenses against the overwhelming nature of complexity. When something is funny, we’re more willing to engage with it, even if it’s ridiculously convoluted.

Creating a GPT that turns even the simplest task into a Rube Goldberg machine of logic is a perfect way to inject that humor into the learning process. It’s not just about making us laugh; it’s about making us see that the path to understanding—even if unnecessarily complicated—can be enjoyable. Plus, the absurdity of it all serves as a gentle reminder not to take ourselves too seriously, even when grappling with complex ideas.

So, my best guess? He did it for the laughs, knowing that in those laughs, there’s a lesson in embracing the joy of the journey.
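For readers curious how a custom GPT like this works under the hood: a custom GPT is essentially a base model wrapped in standing instructions (plus optional files and tools). Below is a minimal sketch, using the OpenAI Python SDK, of how one might approximate a Complexifier-style persona through the API. The instruction text and sample question are my own hypothetical stand-ins, not Mollick’s actual configuration.

```python
# A minimal sketch of approximating a "complexifier" persona with the
# OpenAI Python SDK. The system prompt below is hypothetical -- it is
# not Ethan Mollick's actual custom GPT configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPLEXIFIER_INSTRUCTIONS = (
    "You are Complexifier. You do the opposite of most assistants: take "
    "any simple task the user describes and make it as elaborate, "
    "convoluted, and Rube Goldberg-like as possible, step by step."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": COMPLEXIFIER_INSTRUCTIONS},
        {"role": "user", "content": "How do I boil an egg?"},
    ],
)
print(response.choices[0].message.content)
```

The persona lives entirely in the system message; swap the instructions and the very same code becomes a simplifier instead.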

So Ethan Mollick writes a lot, is hands-on, and is funny. Now you know why I had to write about his pioneering work with AI and education and his new book, Co-Intelligence: Living and Working with AI.

Part I of Professor Mollick’s Book: Setting the Stage for Co-Intelligence

Co-Intelligence has a great opening line, which I fully endorse: “I believe the cost of getting to know AI–really getting to know AI–is at least three sleepless nights.” Then you will discover that ChatGPT, and other top generative models, “don’t act like you expect a computer to act. Instead, they act more like a person.” They act like something new, an alien person of unknown abilities. Professor Mollick’s excitement in using the new tool right away in his classes at Wharton is contagious. This new type of general-purpose technology, like the steam engine and the internet, changes everything, including teaching.

After the introduction, he looks back at the history of AI. He notes how we have long been fascinated with “machines that can think,” or at least pretend they can. One example Ethan Mollick gave was the Mechanical Turk, a chess-playing automaton built in 1770. It was a machine that could beat almost all human chess players. Actually, in what was a very well-kept secret, one which fooled the likes of Napoleon Bonaparte and Benjamin Franklin, the thinking machine was a hoax. A human chess master was cleverly hidden behind the gears of the contraption. See this YouTube video for its full history.

When Edgar Allan Poe saw the Mechanical Turk in 1835, he speculated that it was a fake, but only because the Turk would sometimes lose. Poe thought that if it were a true thinking machine, it would always win. Although not in Professor Mollick’s book, I dug deeper into his reference to Poe and AI and found the original text: Edgar Allan Poe, Maelzel’s Chess-Player (1836). There we read Poe’s thoughts on Charles Babbage, mechanical thinking, and his impressive insights into what would later be called AI.

Museum reproduction of the original Mechanical Turk with photoshop words and enhancements by Ralph Losey

Edgar Allan Poe’s words:

Photo of Edgar Allan Poe by W.S. Hartshorn, 1848

There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage, and if we choose to call the former a pure machine we must be prepared to admit that it is, beyond all comparison, the most wonderful of the inventions of mankind. . . .

It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori. The only question then is of the manner in which human agency is brought to bear. . . .

The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case — it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game — a farther extension would enable it to win all games — that is, to beat any possible game of an antagonist.

Edgar Allan Poe, Maelzel’s Chess-Player (1836).

Garry Kasparov found out in 1997 that the great Mr. Poe was onto something: IBM’s Deep Blue, a pure machine, defeated the reigning world chess champion. Interestingly, many speculate that Edgar Allan Poe’s encounter with the Mechanical Turk led to his writing the first modern detective story soon thereafter. Kat Eschner, Debunking the Mechanical Turk Helped Set Edgar Allan Poe on the Path to Mystery Writing (Smithsonian Magazine, 2017).

Professor Mollick makes clear that the AI today, unlike the Mechanical Turk, is very real, and in some ways very powerful, but characterizes it as a type of “alien intelligence.” It is fundamentally different from human intelligence, yet capable of performing human tasks. This alien intelligence is something you need to discover for yourself to appreciate its abilities and flaws. The only way to do that is to use generative AI. Ethan lays out four principles of co-intelligence to guide your use:

  1. Always invite AI to the table. Try and use AI whenever and wherever you can.
  2. Be the human in the loop. Actively supervise and verify.
  3. Treat AI like a person (but tell it what kind of person it is). Give the AI context and use its persona abilities (see the sketch after this list).
  4. Assume this is the worst AI you will ever use. Do not get discouraged when AI stumbles; it is getting better very fast.
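Principle 3 is the easiest to see in code: telling the model “what kind of person it is” is just a system message supplying a role and context before the real question. Here is a minimal sketch with the OpenAI Python SDK; the persona and question are my own illustrative choices, not examples from the book.

```python
# A minimal sketch of principle 3: give the AI a persona and context.
# The persona text is illustrative only, not taken from Mollick's book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a skeptical senior litigation partner with 30 years of "
    "e-discovery experience. Review the junior associate's plan below, "
    "flag weak assumptions, and suggest concrete improvements."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": (
            "Plan: run a keyword search, then produce everything that "
            "hits, without any privilege review."
        )},
    ],
)
# Principle 2 applies here: the human in the loop reads and verifies
# the critique before acting on any of it.
print(reply.choices[0].message.content)
```

Note how principle 2 shows up in the final comment: whatever comes back, the human still verifies before relying on it.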

The first half of the book spells out these four principles, which are all pretty basic. Ethan does a good job of laying them out, and I recommend you read the book, Co-Intelligence: Living and Working with AI.

After you begin to use AI and get past the three sleepless nights, you will discover what Ethan Mollick calls the “Jagged Frontier.” This is his metaphor for the uneven capabilities of AI, where some tasks are easily within reach, while others, some quite simple, are beyond its grasp. See: From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24). Ethan Mollick discusses this at length in his article, Centaurs and Cyborgs on the Jagged Frontier. The second-to-last paragraph of that article states:

People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.

The metaphors of the jagged frontier, centaurs, and cyborgs help us understand the unpredictable nature of AI’s abilities. Also see Ethan Mollick and several others on the impact of AI on hybrid knowledge work: Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013); Losey, Report on the First Scientific Experiment to Test the Impact of Generative AI on Complex, Knowledge-Intensive Work (e-Discovery Team, 4/29/24).

Because of the surprising failures on the wrong side of the jagged line, Ethan warns readers: “Don’t rely on AI alone.” He encourages use of AI as a complement to human judgment, not a replacement for it. Legal professionals know this rule well, “trust but verify.” Another AI Hallucination Case with Sanctions Threatened Because of ‘All-Too-Human’ Mistakes (July 30, 2024); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May 6, 2024); “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct. (Losey.ai CLE).

Part II of Professor Mollick’s Book: AI in Action

The second half of Co-Intelligence is organized around six characteristics of generative AI and how to use them, each a separate chapter in the book.

  • AI as a Person. A “thinking companion” that can assist in decision-making by providing alternative perspectives. Includes discussion of the “uncanny valley,” the need for ethical monitoring of AI use, and how AI lacks the depth and intuition that come from human experience.
  • AI as a Creative. AI will not replace human creators, but it will totally change the way we approach creative work. It will be more than a tool, it will be a co-creator. Mollick reassures readers that while AI can assist in the creative process, it is ultimately up to humans to imbue that work with significance and purpose.
  • AI as a Coworker. This important chapter is a must-read for technology leaders who are grappling with the integration of AI into their teams. Mollick argues that AI can handle many of the repetitive tasks that bog down human workers, freeing them up to focus on more strategic and creative endeavors. He provides examples of companies that have successfully integrated AI into their workflows, resulting in significant productivity gains. Mollick also discusses using AI as a “Challenger,” which I like to call a “Devil’s Advocate”: AI can challenge human decisions, offering alternative perspectives that may not have been considered. Professor Mollick also warns of the dangers of AI perpetuating biases in organizations. To counter this, he recommends transparency and accountability in AI deployment, along with regular audits.
  • AI as a Tutor. Professor Mollick, much like Sal Khan, really gets AI’s potential in the classroom, where it is already revolutionizing Wharton and will soon transform all graduate-level instruction. See e.g., BACK TO SCHOOL: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’; and the video interview of Mollick by Khan. Mollick and Khan are both blown away by the potential of AI to provide personalized learning experiences–tutoring–that adapt to the needs of individual students. Mollick goes deep in explaining the many ways this will change traditional instruction, and the successful experiments in his Wharton classrooms. Again, it will not replace teachers, and it will make in-person classrooms more important than ever.
  • AI as a Coach. A personal-trainer-type role in which AI provides continuous, tailored guidance and feedback to enhance human capabilities. The value of personalized advice is explored, although I wish he had gone into the dangers of sycophantism more than he did. See e.g., Worrying About Sycophantism. Mollick does point to the danger of becoming overly dependent on AI to the point where it diminishes our critical thinking and decision-making skills.
  • AI as Our Future. Here Mollick offers four scenarios that explore how AI might shape the world in the coming decades. In the first, “As Good As It Gets,” AI is already at its peak, which he and I deem very unlikely. In the second, there is only “Slow Growth” of AI going forward, which we also think unlikely. The third scenario imagines continued “Exponential Growth.” Many specific predictions are made, including that “Loneliness becomes less of an issue, but new forms of social isolation emerge.” That one is a safe bet, but there are many other predictions that are not so obvious. The last scenario Ethan calls “The Machine God,” where “machines reach AGI and some form of sentience.” Note that I do not think sentience is a necessary byproduct of AGI, nor that the divine name is appropriate, but Ethan (and others) imagine it is. Losey, Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time (8/12/24).

Conclusion

Ethan Mollick’s book ends with an epilogue titled “AI As Us.” I like this because it follows the thinking of Ray Kurzweil, whose thoughts on AI I also respect. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI) (July 17, 2024). Of course, it is not us yet, far from it. For now, AI is a new alien stranger that Professor Mollick would have you invite into your office and your home. He wants you to spend “three sleepless nights” with it and thereafter spend time with it every day. Ethan is just that “all in” kind of guy.

Most legal professionals, myself included, are not quite so gung-ho, especially when it comes to using AI in our work. Still, many valuable insights can be gained from his book, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024), which also includes many warnings of dangers. Consider this statement in the epilogue: “AI is a mirror, reflecting back at us our best and worst qualities. We are going to decide on its implications, and those choices will shape what AI actually does for, and to, humanity.”

In a world where human-like AI is no longer just a possibility but a reality, Co-Intelligence serves as an essential guide for everyone, including legal professionals. Ethan Mollick’s suggestions simplify the process of harnessing AI’s full potential in business and management, while also revealing its possible applications in the law. This book, written by a very creative, hands-on professor at Wharton, equips you for the many challenges and opportunities ahead.

Ralph Losey Copyright 2024 — All Rights Reserved


Evolution of DALL·E with Demonstrations of its Current Text to Image Abilities

August 19, 2024

Ralph Losey. Published August 20, 2024.

The images shown here demonstrate some of the current abilities of DALL·E. They were all created by Ralph Losey using his custom GPT, Visual Muse: illustrating concepts with style, which is driven by OpenAI’s DALL·E software. Ralph has chosen one of his favorite types of images for this demonstration – “optical illusions” – since he does not often get a chance to use this image type in his blog. These images will be shown in a variety of artistic styles, especially that of Salvador Dali, who is known for his love of optical illusions.


Introduction

The first images to demonstrate DALL·E’s capabilities, shown above, are a type of “Op Art” in a classic black-and-white geometric style. All illustrations were created in a single day, in about four hours, with about half coming out right – the way Ralph wanted – on the first try. A 50% precision rate like this is unusually high for him. Many of the images were not used, to save space. Ralph’s workflow then includes use of Photoshop for final tuning and size changes. The research and writing itself took about three hours of Ralph’s time, with about 50% help from ChatGPT-4o (“omni”), using a cyborg method. From Centaurs To Cyborgs: Our evolving relationship with generative AI (e-Discovery Team, 4/24/24).

The development of DALL·E, a generative AI model by OpenAI, from its first release in January 2021 to today represents a significant achievement in the field of AI-driven image generation. The broad outlines of that development will be discussed in this article, but all images shown are from the latest version, of August 2024. For more examples of what DALL·E is capable of, I suggest you look at the hundreds of Ralph’s illustrations on the e-Discovery Team blog. A few were created using OpenAI’s main competitor in image generation, Midjourney.

Ralph’s blog images usually illustrate the topics discussed in the accompanying text. For Losey they represent a new form of expression where words, images, and hyperlinks form a multimodal whole, created by a hybrid combination of man and machine. In his blog the human – Ralph – does almost all the work on the text, including most of the research, and the AI does most of the work on the illustrations. Those familiar with Ralph’s work in e-Discovery know this mirrors his work with multimodal hybrid search techniques, i.e., Predictive Coding. They are described in detail in the free TAR Course linked at the top of the blog.

The History of OpenAI’s Generative AI Image Tool: DALL-E

1. Initial Release: DALL·E 1 (January 2021)

The initial release of DALL·E was in January 2021. The name is a homage to the famous twentieth-century artist Salvador Dali and Pixar’s animated robot WALL·E. OpenAI’s release of DALL·E was a breakthrough moment for generative models that create images from text descriptions. DALL·E 1 utilized a modified GPT-3 architecture to generate images from text prompts. Although remarkable compared to what others had been able to achieve before, this first model exhibited limitations in image coherence, resolution, and the ability to accurately represent complex scenes. The underlying transformer architecture was effective in generating diverse and creative outputs, but the model struggled with maintaining consistency and realism across different elements of the image.

2. Introduction of DALL·E 2 (April 2022)

In April 2022, OpenAI released DALL·E 2, which introduced several critical improvements:

Enhanced Resolution and Image Quality: DALL·E 2 featured improvements in the model’s ability to generate higher-resolution images with finer details. This was achieved largely through a new generative approach: instead of DALL·E 1’s purely autoregressive transformer, DALL·E 2 used a diffusion-based decoder guided by CLIP image embeddings, along with improvements in the training data.

Improved Compositional Understanding: The model demonstrated enhanced capabilities in handling complex prompts that required the accurate rendering of multiple objects and interactions. This improvement can be attributed to advancements in the model’s attention mechanisms, enabling better spatial awareness and coherence in generated images.

Advanced Control Mechanisms: Users were provided with more granular control over image attributes such as style, color, and composition. This was likely facilitated by the integration of additional conditioning layers or modules within the model architecture, allowing for more targeted manipulation of the generated outputs.
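To make these control knobs concrete, here is a minimal sketch of text-to-image generation with the current OpenAI Python SDK. One hedge: the size, quality, and style parameters shown come from today’s dall-e-3 API endpoint, not from the DALL·E 2 release described above, and the prompt is my own.

```python
# A minimal sketch of text-to-image generation via OpenAI's images API.
# Parameter names reflect the current dall-e-3 endpoint, not the 2022 model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A black-and-white Op Art optical illusion of a spiral staircase, "
        "in the style of a 1960s geometric print"
    ),
    size="1024x1024",   # dall-e-3 also accepts 1792x1024 and 1024x1792
    quality="hd",       # "standard" or "hd"
    style="vivid",      # "vivid" or "natural"
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```

The point of the sketch is that the “granular control” discussed above is exposed as explicit parameters, while everything else about the composition is steered through the prompt itself.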

3. August 2023 Update: Refinement and Realism

The August 2023 update was the last full training update to the model. It brought significant refinements to DALL·E’s functionality, focusing on realism, detail, and user customization:

Increased Realism and Texture Fidelity: The model’s ability to generate photorealistic images was markedly improved, particularly in rendering textures, lighting, and shadows. These enhancements suggest advancements in the model’s ability to learn and apply high-fidelity visual patterns from training data, potentially through the use of more complex loss functions and training techniques that prioritize visual accuracy.

Enhanced Text Integration: DALL·E’s capability to incorporate textual elements within images saw notable improvement. This likely involved the refinement of text-to-image embedding processes and a better alignment between text tokens and their corresponding visual representations within the model.

User Customization: The update provided users with increased control over specific aspects of image generation, such as adjusting the perspective or selecting a particular art style. This was achieved through the introduction of more sophisticated user interfaces and the likely addition of new conditional input mechanisms within the model.

Feedback-Driven Optimization: The update also integrated feedback from users, leading to iterative adjustments that enhanced the model’s overall performance and usability. This process likely involved fine-tuning the model on user-provided data or leveraging reinforcement learning techniques to align the model outputs more closely with user preferences.

4. Ongoing Enhancements (Post-August 2023)

After the August 2023 update, DALL·E has continued to evolve with ongoing technical enhancements:

Refinement of Image Generation: Continuous improvements have been made to the model’s image generation capabilities, particularly in handling edge cases and complex scene compositions. These refinements suggest iterative updates to the model’s training regimen, potentially involving more diverse and higher-quality datasets.

Increased Processing Efficiency: The model has seen improvements in processing speed, reducing latency in image generation. This is indicative of optimizations in the model’s computational efficiency, likely through algorithmic refinements or the adoption of more efficient neural network architectures.

Advanced Control Features: The introduction of more nuanced control features has provided users with the ability to manipulate image attributes with greater precision. These features likely involve the integration of additional conditioning factors within the model, allowing for more detailed user input.

5. Interface and Usability Enhancements

In addition to technical improvements, there have been significant updates to the DALL·E user interface and overall usability:

Improved User Interface: The interface has been refined to offer a more intuitive user experience, facilitating easier access to advanced features. This likely involved the integration of better design principles and user experience research into the interface development process.

Accessibility Enhancements: Updates have been made to improve accessibility, ensuring that the platform is usable by a broader audience, including individuals with disabilities. This may involve the adoption of accessibility standards in interface design and the introduction of assistive technologies.

Collaborative Functionality: The platform has introduced features that support collaborative use cases, enabling multiple users to contribute to the image generation process. This functionality suggests the integration of multi-user input mechanisms and enhanced session management capabilities.

6. Industry-Specific Tools and Content Moderation

Recent developments have also focused on the introduction of tools tailored to specific industries and the enhancement of content moderation mechanisms:

Industry-Specific Tools: DALL·E has introduced features designed to meet the needs of particular industries, such as fashion, architecture, and graphic design. These tools likely involve the addition of domain-specific models or fine-tuning the base model on industry-specific datasets.

Content Moderation Enhancements: There have been improvements in content moderation, ensuring that generated images adhere to ethical standards and legal requirements. This likely involves the integration of content-filtering algorithms and the use of human-in-the-loop processes to monitor and curate outputs. In Ralph’s opinion, the guardrails sometimes go overboard in policing potential copyright violations and other risks. Unlike Midjourney, DALL·E’s main competitor, OpenAI does not offer instant appeals and, where warranted, reversals. That can be annoying.

Conclusion: Ongoing Development and Legal Implications

The evolution of DALL·E underscores the rapid pace of advancement in generative AI technology. This is a powerful, fun new tool for all creators to make their own images and play with the incredible abilities of generative AI. If you just stick to words and computer code, you will miss out.

Plus, it is getting better and better every month. The kinds of things you can do with it now are mind-bending. You may think it is all an optical illusion, but it is not. It is a great time to be alive. For me it is a relaxing hobby. That’s one reason I made, often update, and freely share the Visual Muse custom GPT. It is available at the OpenAI Custom GPT Store, along with thousands of other free GPTs to try out. If you like visual images and want to go from the beginner level to the intermediate and advanced levels of DALL·E use, this may be a good tool for you. Plus, it can help teach you about artists and styles of art.

Each OpenAI update to DALL·E not only improves the creative capabilities of the model but also raises important legal and ethical questions about AI-generated content. As DALL·E continues to develop, legal professionals must remain vigilant in understanding these advancements to effectively navigate the associated legal challenges, including intellectual property rights, content moderation, and the ethical use of AI. To do that, it helps to be able to use the tools yourself, at least somewhat. Generative AI has to be used to be understood. Otherwise, no matter how smart you are, your understanding will be superficial, maybe even illusory.

Ralph Losey Copyright 2024 — All Rights Reserved