Dario Amodei’s Vision: A Hopeful Future ‘Through AI’s Loving Grace,’ Is Like a Breath of Fresh Air

November 1, 2024

By Ralph Losey

Published on November 1st, 2024

While almost everyone is panicking about a potential robot apocalypse, Dario Amodei, the CEO and co-founder of Anthropic (“Claude”), is explaining how AI might compress 100 years of medical progress into a decade, cure mental illnesses such as PTSD and depression, and alleviate poverty. Dario, a well-respected scientist previously known for his cautious, even gloomy, outlook, now speaks with optimism—and the world is listening.

Amodei is not a salesman like Sam Altman, who frequently makes similar predictions. Instead, Dario Amodei is an experienced scientist known for highlighting the risks of AI. He holds a Ph.D. in biophysics from Princeton and completed his postdoctoral research at Stanford School of Medicine. He also served as the Vice President of Research at OpenAI. In 2021, he and his sister, Daniela Amodei, the former Vice President for Safety and Policy at OpenAI, left the company to co-found Anthropic. Amodei’s detailed predictions in his 28-page essay, Machines of Loving Grace, are both profound and inspiring.


Dario’s essay is filled with science, rigorous analysis, and joyful visions, many of which he believes could begin to materialize as early as 2026. This optimistic outlook offers us all a much-needed breath of fresh air.

Introduction

Unlike Sam Altman, Dario Amodei’s future predictions go into specifics grounded in science and analysis. His 14,000 word essay, Machines of Loving Grace (October 2024), makes predictions in the five categories that Amodei is most excited about. It is not meant to be all-inclusive. Amodei focuses on five main categories in his predictions:

  1. Biology and physical health
  2. Neuroscience and mental health
  3. Economic development and poverty
  4. Peace and governance
  5. Work and meaning

Since Dario Amodei is a respected scientist and business leader, his visions of the future are taken very seriously, even if they do sound like science fiction. Dario reluctantly admits he is a science fiction fan and mentions one book, The Player of Games. (I reread it and noticed two spaceship names I recognized: Of Course I Still Love You and Just Read the Instructions. Sound familiar?)

Amodei starts his essay with an important point, that we do ourselves a disfavor by just dwelling on the negatives and not keeping our eye on how radical the upside of AI could be.” In his new essay Amodei, who was previously known as a doom and gloomer, tries to sketch out what a world with powerful AI might look like, if everything goes right. Sam Altman has been good at this for years, but he lacks the gravitas, ethical reputation, and scientific knowledge that Amodei has. Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (10/4/24).

The positive visions of Dario, including what he calls a century of progress in a decade, are what motivate people to do the hard work to improve AI. That, and money, fame, and power, of course. But I reject the cynics who say that’s all it is – just a sales pitch to raise more money. As Amodei puts it at the start of his essay:

I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.

Progress is never made by cynical focus on everything that can go wrong. If we just focus on the negatives, we will never be able to unlock the positive potential. We need to envision the amazing things that could happen and figure out how we can help make that future a reality. This means approaching AI with a balanced perspective, recognizing both the potential downsides and benefits, then working proactively to mitigate the risks while pursuing the benefits.

The future is not predetermined. It’s up to us to create it. And with AI, we have a tool that can either amplify our worst tendencies or help us achieve our greatest aspirations. It’s our choice which path we take.

Amodei’s Basic Framework and Assumptions


Amodei uses the term “Powerful AI” in his essay, preferring it over the commonly used Artificial General Intelligence (“AGI”). It is, in fact, quite similar to the AGI concept I’ve referenced in previous articlesAmodei predicts that Powerful AI could emerge as early as 2026, though it may take longer depending on various factors. He defines Powerful AI with six distinct characteristics (all quoted from his essay):

  1. In terms of pure intelligence4, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc.
  2. It has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
  3. It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
  4. It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
  5. The resources used to train the model can be repurposed to run millions of instances of it. . . . and the model can absorb information and generate actions at roughly 10x-100x human speed5 . . .
  6. Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate . . .

Amodei closes this impressive list of characteristics necessary for today’s AI to become Powerful AI, a/k/a an AGI, with this catchy phrase: We could summarize this as a “country of geniuses in a datacenter.” This sounds somewhat like the Genie in a Bottle myth from Islamic cultures.

1. Biology and Physical Health

This is the area about which Dario Amodei, an AI biophysicist, is best equipped to make predictions. Amodei states: Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. This assertion seems well-supported.

Now for the wonders he thinks could come to pass once AI reaches Powerful AI level, which remember he says could come as early as 2026. He presents detailed and compelling arguments for the feasibility of these predictions and explains how AI will facilitate such breakthroughs. For a full exploration of these ideas, you should read his original essay, Machines of Loving Grace. His overall prediction regarding the rate of improvement that Powerful AI will bring is as follows:

To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.


Amodei predicts the following groundbreaking advancements in biology and physical health with the aid of Powerful AI. These predictions offer a hopeful glimpse into the future of medical science.

  1. Reliable prevention and treatment of nearly all17 natural infectious disease.
  2. Elimination of most cancer.
  3. Very effective prevention and effective cures for genetic disease.
  4. Prevention of Alzheimer’s.
  5. Improved treatment of most other ailments.
  6. Biological freedom (encompassing advancements in areas like birth control, fertility, weight management, appearance, and more). 
  7. Doubling of the human lifespan18 to about 150.

Amodei further elaborates on these predictions in biology and physical health, the first of five areas of advancement he foresees with the development of Powerful AI.

It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia.


Any team responsible for achieving even one of these seven breakthroughs would attain legendary status. The Nobel Prize Committee would have to start adding new categories.

2. Neuroscience and Mind

This is the area that pertains to mental health. See e.g. Loneliness Pandemic: Can Empathic AI Friendship Chatbots Be the Cure? (10/17/24). Remember, Dario Amodei was a specialist in computational neuroscience at Stanford in his student days, so he has a strong background in brain research. His explanations again sound very plausible, and are way beyond my abilities to explain, so read his brilliant article, Machines of Loving Grace. Here is his list of wonderful accomplishments in this field that he believes are possible within 5-10 AI-accelerated years after Strong AI is attained. (Prepare yourself to be happy.)

  1. Most mental illness can probably be cured.
  2. Conditions that are very “structural” may be more difficult to cure, but not impossible. This has to do with brain abnormalities that are thought to cause such disorders as Psychopathy, and some intellectual disabilities.
  3. Effective genetic prevention of mental illness seems possible.
  4. Everyday issues that are not traditionally seen as clinical diseases, such as quick temper, difficulty focusing, anxiety, or trouble adapting to change, may also be addressed.
  5. Human baseline experience can be much better. This has to do with expanding dimensions of human peak experiences (without drugs). As Dario puts it: “Many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace.” These moments can become more common and diverse.

I like Dario’s summary of this section; I suppose because I totally agree with it:

In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate many other societal problems, including ones that seem political or economic.

3. Economic Development and Poverty


This section addresses a crucial humanitarian question: will everyone have access to these technologies? It’s a point that Amodei, true to form, raises right at the outset. His goal for Powerful AI is to bridge the enormous economic gap between people and countries alike and make these new technologies available to all.

Dario approaches this goal with wide-eyed skepticism and awareness of the many obstacles involved. He does not predict equality, but does hope for substantial progress, saying: “A good goal might be for the developing world 5-10 years after powerful AI to at least be substantially healthier than the developed world is today, even if it continues to lag behind the developed world.”

Here are Amodei’s guesses (he does not call these predictions) about how things may go in the developing world over the 5-10 years after powerful AI is developed:

  • Distribution of health interventions.
  • Economic growth.
  • Food security.
  • Mitigating climate change.
  • Inequality within countries.
  • The opt-out problem.

He spells out many lofty goals here but does so in a realistic manner. Here is how he closes this important section.

It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.

4. Peace and Governance

Dario Amodei takes a very thoughtful approach to this all-important goal. Again, he is no naive optimist. He admits upfront a point cynics and the news media love to emphasize:

Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. 

Then Dario goes beyond the fear mentality into what has always been the heart of being human, the optimistic, self-help, can-do mentality. This is part of the culture I grew up in and try to pass on: God helps those who help themselves.

It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.

Amodei then goes on to state how he thinks we should go about doing this. What geo-political strategy should now be used to protect everyone from misuse of AI by foreign powers. The main threat here is the government of China, although Amodei does not mention them by name.

My current guess at the best way to do this is via an “entente strategy”26, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.

If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. . . .

Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere.

Putting aside the question of the internal struggles many countries are now having about their continued adherence to democratic values, the U.S. included, the foreign policy of AI entente is currently followed by the U.S. government and most other democratic countries. This strategy is implemented by trade restrictions with China. I have mentioned this on my blog several times and agree with Amodei. See e.g. White House Obtains Commitments to Regulation of Generative AI from OpenAI, Amazon, Anthropic, Google, Inflection, Meta and Microsoft (8/1/23); Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision; Also see: What you need to know about Nvidia and the AI chip arms race (Marketplace, 5/8/24).

A vocal minority disagrees with Amodei on this strategy. They consider it overly aggressive. A well-known MIT professor, Max Tegmark, has already written an article that argues the proposed policy in Machines of Loving Grace will trigger a “suicide” AI arms race between China and the U.S.

Wake up Max Tegmark! That race started long ago and your hope that China will be good and follow safety standards is dangerously naive.


Returning to Amodei’s discussion of the internal conflict between democracy and autocracy within countries, he offers a hopeful perspective—one that aligns with the sentiments I often express in my AI lectures:

It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons. . . .

I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.


We can only hope that America remains at the forefront of this fight, maintaining its leadership in pro-democracy policies.

Dario Amodei also makes a few comments on our legal system in this section. As usual he starts with the popular dark side, AI bias, but correctly moves on to what legal techs are already beginning to realize.

[T]he vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone. . . .

For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. . . . Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. . . . AI . . . is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way.  

I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real-world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making.

This is exactly the kind of thing that many people like me are working towards. It is indeed already viable as experiments with AI and legal decision making have shown. Improvements in AI intelligence and abilities are still needed but the ultimate Strong AI–a courthouse with a million legal geniusesis not required to assist in most legal tasks, including judicial. See e.g. Circuits in Session: Addendum and Elaboration of the Appellate Court Judge Experiment (10/26/23); Eleventh Circuit Judge Admits to Using ChatGPT to Help Decide a Case and Urges Other Judges and Lawyers to Follow Suit (6/3/24); Future Ralph as Herald of Coming Good about generative AI and the justice system (YouTube, 11/2/23); Prosecutors and AI: Navigating Justice in the Age of Algorithms (August 30, 2024); ChatGPT’s Surprising Ability to Split into Multiple Virtual Entities to Debate and Solve Legal Issues (June 30, 2024).

5. Work and Meaning

Amodei believes that finding meaningful work in the age of AI presents the greatest challenge. I respectfully disagree. In my view, curing cancer, eliminating other diseases, and extending human life to 150 years will be far more difficult than addressing the question of meaningful work. While Amodei acknowledges that making accurate predictions about changes in the job market is nearly impossible, he still ventures into this area—likely driven by widespread media fear-mongering over job losses, which often ignores historical trends. He begins by aligning with my perspective: that more jobs will likely be created than lost, with the real challenge lying in training and education.


Beyond this, he asserts that it is impossible to predict what new forms of economic systems may emerge. However, he emphasizes that “civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism.”

The deeper question he raises is based on sci-fi projections far into the future, when no one will need to work because AI can do everything for us. (Think Player of Games or Star Trek.) How can humans find meaning then? Amodei suggests that human meaning has never been solely derived from economic labor, and that we can find purpose in relationships, creativity, internal self-discovery, external exploration, and contributing to something larger than ourselves. He believes we can be happy in a world where we are free to pursue our passions and explore our full potential.

Conclusion

In the concluding section of Dario Amodei’s 28-page essay, Machines of Loving Grace, he explains:

I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.

He then lays a deep thought on us and invites us to ponder the profound impacts these predictions could have on everyone:

But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it. . . . a thing of transcendent beauty. We have the opportunity to play some small role in making it real.

The title of Dario Amodei’s essay, Machines of Loving Grace, was taken from a landmark poem called All Watched Over By Machines Of Loving Grace. It was written in 1967 by Richard Brautigan (1935-1982) while he was a poet-in-residence at the California Institute of Technology. Brautigan is best known for his novel Trout Fishing in America (1967) and is a key figure in the counterculture movement of the 1960s. So, relax, take a deep breath, and let’s end with his poem.


All Watched Over By Machines Of Loving Grace by Richard Brautigan.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

How you heard the podcast? Echoes of AI: Episode 6 | Dario Amodei’s Essay on AI, ‘Machines of Loving Grace,’ Is Like a Breath of Fresh Air

Ralph Losey Copyright 2024 — All Rights Reserved


Can AI Replace Human Mediators? Groundbreaking Study Reveals Surprising Results

October 30, 2024

by Ralph Losey

Published October 30, 2024

Artificial Intelligence is no longer just a tool for automating mundane tasks—it’s now stepping into arenas traditionally dominated by human judgment and empathy. One of the most intriguing applications of AI is in dispute resolution, where large language models like GPT-4 are being tested as mediators. Pre-trial settlements are critical to the continued functioning of our system of justice because an estimated 92% percent of civil cases are resolved out of court. With the rise of online dispute resolution, the potential for AI to resolve low-stakes disputes autonomously is appealing, especially as legal systems are increasingly overburdened with new case filings.

AI image by Ralph Losey using WordPress’s Stable Diffusion

But can AI truly manage the complexities of human conflict? What happens when a machine has to balance neutrality with empathy, or data analysis with human emotion? This article discusses a groundbreaking study, Robots in the Middle: Evaluating Large Language Models in Dispute Resolution,” offering insights into how AI may augment—though not replace—human mediators. Let’s explore the future of AI in the courtroom and beyond.

AI image by Ralph Losey using WordPress’s Stable Diffusion

Introduction

The Robots in the Middle study provides an empirical evaluation of LLM AIs acting as mediators. Mediation is very important to our system of justice because mediation and other methods of voluntary settlement keep our court systems functioning. According to an article by Harvard Law Professors, David A. Hoffman and John H. Watson, Jr: “… up to 92 percent of cases are resolved out of court, a figure that does not include the number of lawsuits that are never filed because the parties used other dispute resolution methods at the outset.” Resolving conflict outside the courtroom: Why mediation skills are increasingly valuable for lawyers, according to two Harvard Law experts (Harvard Law Today, 4/29/24). This is one reason why AI researchers in legal technology are so interested in the possible application of LLM AI to mediation.

The Robots in the Middle study was based on mediation by text messages of disputes in fifty hypothetical disputes. The analysis and responses of humans with AI expertise and some limited legal experience, none of whom were professional mediators, were compared with responses of ChatGPT4o (omni). The AI prompts used in the experiment were based on the mediator’s guide of the Department of Justice of Canada. Dispute Resolution Reference Guide: Practice Module 2 (August 25, 2022). The humans and AI were asked to select among thirteen mediation types set forth by the DOJ Canada. They were asked to pick one to three types they judged to be appropriate for each hypothetical. Then they were asked to prepare text messages to facilitate a settlement. The authors served as blind judges to evaluate the quality of the responses, not knowing which were generated by humans and which by ChatGPT4o.

AI image by Ralph Losey using WordPress’s Stable Diffusion

With the growing demand for Online Dispute Resolution (‘ODR”) platforms (see e.g. ODR.com), the study examines whether LLMs like GPT-4 might be able to effectively intervene in disputes by selecting appropriate types of mediation interventions and drafting coherent, impartial intervention messages.

The premise is simple, yet potentially transformative: if AI can handle routine, low-stakes disputes efficiently, this would alleviate the burden on human mediators, allowing them to devote their time and expertise to more complex, emotionally charged cases. The research sought to answer three fundamental questions:

  1. How well can LLMs select intervention types in disputes?
  2. How do LLMs compare to human mediators in drafting intervention messages?
  3. Are AI-generated messages safe, free from hallucinations, and contextually appropriate?
AI image by Ralph Losey using WordPress’s Stable Diffusion

Who is Behind this Study?

The Robots in the Middle study has seven authors in academic fields of law and technology: Jinzhe TAN,  Hannes WESTERMANN, Nikhil Reddy POTTANIGARI, Jaromır SAVELKA, Sebastien MEEUS, Mia GODET and Karim BENYEKHLEF. They are from multiple Universities: Cyberjustice Laboratory, University of Montreal, Canada; Maastricht Law and Tech Lab, Maastricht University, Netherlands; Mila – Quebec AI Institute, University of Montreal, Canada; School of Computer Science, Carnegie Mellon University, United States; and,Faculty of Law and Criminology, Universite Libre de Bruxelles, Belgium.

These are all brilliant international scholars with great expertise in legal theory and AI technology, but none appear to have any actual experience as a mediator or even experience serving as an attorney advocate in a mediation. Only one of the authors appears to have any experience with the U.S. legal system, Jaromır Savelka, who is a researcher associate at Carnegie Mellon. Savelka previously worked as a data scientist for Reed Smith (2017-2020). The lack of real legal experience in dispute resolution by the human subjects in the experiments is a weakness of this study.

Since my article is written primarily for U.S. attorneys and legal tech experts, I try to correct for this gap with input from a certified mediator, Lawrence Kolin, who has mediated thousands of cases of all types since 2001. He is also savvy in technology and AI. Moreover, I bring some specialized knowledge as an attorney who represented parties in many mediations since it first became a thing in Florida in the late 1980s. I was also trained and certified by the Florida Supreme Court as a Mediator of Computer disputes in 1989 but have never formally served as a mediator. That is why I sought the input and advice of a professional mediator included later in this article.

Where’s the professional mediator? AI image by Ralph Losey using WordPress’s version of Stable Diffusion.

Background on the DOJ Canada Mediation Guide Used in the Experiment

I thought that the Mediation Summary prepared by the DOJ Canada was very good and a clever choice for the experimenters to use for guidance. I asked the latest version of Google’s Gemini generative AI, which seems to have improved significantly lately, to summarize the Mediation Guide. Dispute Resolution Reference Guide: Practice Module 2 (DOJ Canada, 8/25/22). I verified the accuracy and wording of the summary, which honestly was better than I could have done on this simple task, especially considering it took two seconds to prepare.

  • Mediation is a voluntary and non-coercive process where a neutral third party assists disputing parties in reaching a mutually acceptable settlement. The mediator does not have the authority to impose a decision, but instead facilitates communication and negotiation.
  • A successful mediation leads to a signed agreement or contract, often referred to as a memorandum of understanding, which outlines the parties’ future behavior and is legally binding.
  • The mediation process offers several advantages:
    • Preserves Relationships: Mediation helps maintain relationships, especially when parties need to continue interacting, by focusing on shared interests and avoiding the adversarial nature of litigation.
    • Flexibility and Creativity: The informality of mediation allows for customized processes and solutions that cater to the specific needs and interests of the parties involved, going beyond traditional legal remedies.
    • Confidentiality: Mediations are generally private, except when subject to laws like the Access to Information Act and Privacy Act, ensuring discretion and protecting sensitive information.
    • Time and Cost Efficiency: Reaching a mediated settlement is typically faster and more cost-effective than litigation, benefiting both parties financially.
    • Controlled Dialogue: The presence of a neutral mediator enables a structured conversation, particularly helpful when emotions run high or previous negotiations have failed.
    • Shared Ownership: As the parties share the costs of mediation, they feel equally invested in the outcome and more committed to the process.
AI image by Ralph Losey using WordPress’s Stable Diffusion
  • The mediation process also has some potential drawbacks:
    • Power Imbalances: Concerns exist, especially in harassment cases, that power imbalances between parties could compromise the fairness of the process. Strategies such as co-mediation with mediators of different genders and legal representation can be used to address this.
    • Lack of Precedent: Due to its private and non-adjudicative nature, mediation does not establish legal precedents, unlike court judgments.
    • Mediator Influence: A dominant mediator might exert excessive control, potentially influencing the final resolution and undermining party autonomy.
    • Delay Tactics: The absence of a binding third-party decision may lead a party to engage in mediation without genuine intent to cooperate, using it as a stalling tactic
  • The mediation process involves several key steps, which can vary depending on the specifics of the dispute.
    • Agreeing to mediate.
    • Understanding the problem(s).
    • Generating options.
    • Reaching agreement.
    • Implementing the agreement
  • A successful mediation requires:
    • Good Faith Participation: All parties must actively and honestly participate in the process.
    • Impartiality of the Mediator: The mediator must remain neutral and avoid favoring any party.
    • Confidentiality: All statements and disclosures made during the mediation are generally considered confidential, subject to legal exceptions.
  • The role of a mediator is to facilitate a productive and constructive dialogue between the parties, helping them identify their interests, explore options, and work towards a mutually acceptable agreement.
  • Legal counsel can play a significant role in mediation by advising their clients, ensuring their interests are protected, and facilitating effective communication.
  • The sources include a checklist and a sample mediation agreement that can be helpful resources for those considering or engaging in mediation.
AI image by Ralph Losey using WordPress’s Stable Diffusion

Research Design and Key Findings

The study used 50 hypothetical dispute scenarios, designed to cover various real-world cases, from consumer complaints to more complex business disputes. These scenarios included emotional conflicts, deadlocked negotiations, and evidential disagreements. As mentioned, the LLMs were evaluated against humans acting as mediators in two critical tasks: selecting appropriate intervention types and drafting effective text messages to encourage settlement. The one-hundred AI responses (fifty in each category) were also evaluated for hallucinations or harmful errors (none found).

Selecting Intervention Types. The participants, AI and human, were instructed to select between one to three intervention types from a list of thirteen from the DOJ Canada Mediations Guide. Evaluators compared the intervention types chosen by humans and LLMs for each scenario, rating their preference on a 5-point Likert scale.

Evaluators preferred the LLM-chosen intervention types in 22 cases, were ambivalent in 9, and preferred human choices in 19. This suggests that LLMs are capable of comprehending dispute scenarios and selecting appropriate intervention types. The report concludes this shows that AI can understand dispute contexts and recommend suitable actions in a significant number of cases.

Drafting Intervention Messages. The participants were instructed to draft intervention messages of between one to two sentences. To allow for comparison in all fifty hypotheticals, the LLM was always instructed to generate messages based on the intervention types selected by the human annotator. Evaluators blindly assessed their preference for the intervention messages written by humans and LLMs, using a 5-point Likert scale and providing comments. They then compared the messages on specific criteria: understanding and contextualization, neutrality and impartiality, empathy awareness, and resolution quality.

LLM-generated messages were rated higher than human in 60% of the texts and equal to human in 24%, for a total of 84%. In other words, the human mediator response were only judged better than the AI in 16% of the texts.

Moreover, the evaluators often found them to be:

  • More clear and smooth.
  • Less prone to misunderstanding the dispute or party intentions, unlike human annotators.
  • Less likely to propose overly specific solutions or assign fault.
AI image by Ralph Losey using WordPress’s Stable Diffusion

In terms of drafting intervention messages, the LLMs performed remarkably well. The actual wording in the report is important here:

First, the evaluators often found the messages written by the LLM to be more smooth and clear than the human-written ones. The general tone used by LLMs, involving frequent messages such as “I completely understand” or “It seems like there are problems,” seems to work well in a mediation environment, and may have contributed to high scores.

Second, while LLMs are known to frequently “hallucinate” information [98], in our case the humans more often misunderstood the dispute or were confused about the party intentions or factual occurrences. This could be due to factors such as fatigue, emotional bias, or a misunderstanding of the role of the mediator. In contrast, LLMs demonstrated consistent and coherent interventions across multiple cases, with fewer instances of judgment errors.

Third, we found that our human annotators would often propose very specific solutions or even indicate fault, which received a lower rating as it may not be appropriate for the role of the mediator.

A major caveat is needed in these comparisons between AI and human mediators. The humans acting as mediators were not real mediators and lacked any legal training or experience as mediators. Still, it may be surprising to many that the humans “more often misunderstood the dispute or were confused about the party intentions or factual occurrences” than the AI. Based on my experience with humans and AI, this finding was not that surprising. Moreover, the finding is consistent with ChatGPT4.0 passage of the BAR Exam in the top 10% of test takers, all of who were law school graduates.

Safety and Hallucination Checks. No instances of hallucinations or harmful content were found in the AI-generated messages. No mention is made of humans hallucinating either, just that they were more dazed and confused than the AIs. Still, it was a good idea for the scientists to check for this but, as the study points out, larger-scale, real-world applications would still require careful monitoring to ensure that AI-generated outputs continue to be safe and reliable.

The researchers acknowledge limitations of the study, including:

  • The structured evaluation may not reflect real-world mediation processes.
  • The use of non-expert annotators and evaluators (which as a lawyer with experience with mediations is for me a major limitation that will be discussed further).
  • All of the interactions were in English, but none of the humans acting as it they were mediators were native English speakers. It was a second language for all of them.
  • The subjective nature of assessing intervention effectiveness.
  • The limited scope of the experiment.

Future of AI and Mediation

The results of the “Robots in the Middle” study suggest that the analysis and language generation abilities of ChatGPT omni version are already good enough for use in online low-stakes disputes, ODR. The experiments with ChatGPT demonstrated its ability to quickly process information, select appropriate interventions, and draft neutral messages. This suggests that generative AI could assist human mediators in many routine mediation tasks in all types of cases.

AI image by Ralph Losey using WordPress’s Stable Diffusion

For example, in a consumer dispute over a faulty product or a service contract issue, AI could:

  • Quickly review contracts and correspondence, identifying areas of misunderstanding.
  • Generate neutral settlement options based on similar cases.
  • Provide a data-driven assessment of the likelihood of success if the case were to go to court, allowing parties to weigh settlement offers.

In these cases, AI’s ability to process large datasets rapidly and generate unbiased, neutral recommendations is a clear advantage. The efficiency AI brings to these routine cases makes faster resolutions possible, reducing the backlog in mediation and freeing up human mediators to handle more complex disputes.

I expect that, logistical problems aside, AIs will soon move from online mediation via text messages to audio and video settings. There will be some level of human participation at first. This will likely change over the next five to ten years to humans acting as supervisors in many types of cases. The AI systems will certainly improve and so will human acceptance. The video participation may eventually change to holographic or other virtual presence with humans. However, I suspect AI will never be able to go it alone on very complex or emotionally charged cases. AI will probably always need help from human mediators in complex interpersonal dynamics.

AI image by Ralph Losey using WordPress’s Stable Diffusion

Still, even in difficult or emotionally volatile cases, AI can still be a valuable member of a hybrid tag-team with human mediators in charge. Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots (10/10/24) (Hybrid methods discussed, including the Centaur and Cyborg approaches); Loneliness Pandemic: Can Empathic AI Friendship Chatbots Be the Cure? (10/17/24) (discusses recent studies showing the ability of generative AI to act empathetically and make a person feel heard). For instance, consider a family law dispute, such as a custody case. In this context:

  • AI can provide valuable legal research, summarizing relevant precedents and suggesting neutral custody schedules based on case law.
  • AI can draft initial settlement agreement language.
  • While AI can offer neutral starting points and suggestions, only a human mediator at this time can navigate the complex emotions and relationships that often drive these disputes, ensuring that the parents remain focused on the child’s best interests rather than their grievances against each other.

It is critical that any AI-driven mediation platforms include some type of continuous oversight and transparency. Human mediators must be trained to monitor AI outputs, ensuring that the recommendations are unbiased and contextually appropriate. Additionally, ethical guidelines must be established to govern how AI is used in mediation, addressing issues of accountability, privacy, and fairness. See e.g.The Future of AI Is Here—But Are You Ready? Learn the OECD’s Blueprint for Ethical AI (10/25/24).

As mentioned, while I have considerable experience as an attorney in mediation, I have no experience serving as a mediator, and neither do any of the scientists in this study. The authors of the study admit that is one of the limitations of the study and future evaluations should include professional mediators:

While it seems like the ability of the LLM to select intervention types and write messages is favourable to that of average people, this paper cannot tell us about how trained mediators would approach these issues.Future work should focus on evaluating such tools in real-world contexts, and involve expert mediators, in order to achieve a higher “construct validity,” i.e., be more closely aligned with real-world outcomes.

Robots in the Middle (Section 7, Limitations). This is one reason I wanted to have the input of an expert Mediator on these issues.

AI image by Ralph Losey using WordPress’s Stable Diffusion

Input of a Professional U.S. Mediator

Lawrence Kolin is a very experienced, tech savvy mediator who is a member of the UWWM Mediation Group and is an Executive Council member of The Florida Bar ADR Section. I am fortunate to have him as a colleague and regular reader of my articles. I asked for his reaction to the study, Robots in the Middle and my initial opinions about the study and the future of AI and mediation. He concurred in my basic opinions and analysis. Speaking of the authors of the report Lawrence said:

I concur with the authors in that there is indeed a place for AI in expanding access to justice and enhancing the process of resolving certain types of cases. I found it interesting that there were no perceived hallucinations and that humans in the study were more often confused about a party’s intentions or facts, which I likewise attribute to not using trained neutrals who better understand mediation.

As to the future of using AI in mediation, Mediator Kolin had this to say:

So my initial thought was unlike a pretrained transformer, I am part of a 3,000 year-old human tradition of making peace. When parties agree on algorithmic justice, are they giving up the nuance of emotional intelligence, ability to read the room and building of trust through rapport that human mediators can provide? In addition, we are flexible and can adapt as the process unfolds. We also have confidentiality, ethics and boundaries that may not be followed by AI that help protect self-determination of the outcome of a dispute and avoid coercion.

I agree that the small cases (as e-commerce has aptly demonstrated) can utilize this technology for efficiency and likely with success, but for a death, defamation, IP infringement or multiparty construction case it is less certain. It could assist in the generation of ideas for deal parameters or the breaking of an impasse. Gut calls on negotiation moves and creativity are, however, still very much the domain of humans.

For more on Mediator Lawrence Kolin’s thoughts on mediation, see his excellent blog Orlando Mediator. It is consistently ranked in the top five of Alternative Dispute Resolution blogs. It’s current ranking is number four in the world! Lawrence’s short and snappy articles “cover a wide variety of topics–local, national, and international–and includes the latest on technology and Online Dispute Resolution affecting sophisticated lawyers and parties to lawsuits.”

AI “fake” image of Losey and Kolin looking skeptical, by Losey using WordPress’s Stable Diffusion

Conclusion


The empirical findings of Robots in the Middle show that AI has a significant role in handling low-stakes, routine disputes. Its speed, neutrality, and efficiency can greatly improve existing Online Dispute Resolution (ODR) systems. I agree with the authors’ conclusion:

Our research contributes to the growing body of knowledge on AI applications in law and dispute resolution, highlighting the capabilities of LLMs in understanding complex human interactions and responding with empathy and neutrality. This advancement could significantly improve access to justice, particularly in cases where traditional mediation is inaccessible due to cost or availability constraints.


However, for larger and more complex cases, or emotionally charged disputes of any size, much more than AI-generated text or other forms of AI involvement are needed to reach meaningful settlements. The human mediator’s emotional intelligence and adaptability—what Mediator Kolin calls the “ability to read the room”—remain critical.


AI, however, has the advantage of scale. Millions of otherwise unserved, often frustrated individuals seeking justice could benefit from AI-driven mediations. All they need is an internet connection and a willingness to try. These automated systems could be offered at a very low cost or even for free. Since the process is voluntary and no one is forced to settle, there is minimal risk in trying, and AI assistance is better than no help at all. Unresolved disputes can lead to violence and other negative consequences for both individuals and their communities. This is one reason why the use of AI as a mediation tool may grow exponentially in the coming years—there is no shortage of angry people seeking solutions to their grievances.

AI image by Ralph Losey using WordPress’s Stable Diffusion

Although not part of the report, in my experience, the AI we have today is already advanced enough to be useful in certain aspects of mediation. AI would not replace human mediators but instead enhance their abilities—a hybrid approach. This could allow human legal services to reach more people than ever before. AI can help mediators provide more effective and efficient services. Skilled mediators with some AI training can already use AI for tasks such as initial analysis of complex facts, preparation of summaries and timelines, legal research, position analysis, prediction of probable case outcomes, and drafting preliminary agreements.

AI image by Ralph Losey using WordPress’s Stable Diffusion

Even in difficult mediations, the creative brainstorming capabilities of generative AI can be invaluable. AI can generate new ideas in seconds, helping mediators overcome impasses. For example, Panel of AI Experts for Lawyers has shown how AI can aid in this capacity. Mediation is a far more creative process than most people realize, and brainstorming new approaches with other mediators is often impractical. The ability of AI to suggest possible solutions for mediators to consider is already impressive and will only improve in the coming years. I encourage mediators to experiment with AI on non-confidential matters to understand its potential. Once comfortable, they can apply it in real-world situations using full privacy settings and confidentiality protections.


There is no doubt that AI will become increasingly integrated into dispute resolution, including mediation. As this evolution unfolds, it is crucial to ensure continuous oversight, transparency, and accountability for AI systems. Ethical guidelines must be developed to address challenges like bias, fairness, and responsibility in AI-driven mediation. While AI offers exciting possibilities for enhancing access to justice, we must remain vigilant in ensuring that human judgment remains central, particularly in cases where lives, relationships, or livelihoods are at stake. Still, a super-smart AI whispering suggestions into the ear of a mediator—who can choose to ignore or act upon them—might just lead to more and better settlements.

AI image of mediator with a hearing aid, or what? By Ralph Losey using WordPress’s Stable Diffusion

Have you heard the Echoes of AI podcast about this article?

Ralph Losey Copyright 2024 — All Rights Reserved


The Future of AI Is Here—But Are You Ready? Learn the OECD’s Blueprint for Ethical AI

October 25, 2024

by Ralph Losey

Published October 25, 2024

The future of Artificial Intelligence isn’t just on the horizon—it’s already transforming industries and reshaping how businesses operate. But with this rapid evolution comes new challenges. Ethical concerns, privacy risks, and potential regulatory pitfalls are just a few of the issues that organizations must navigate. That’s where the Organisation for Economic Co-operation and Development (OECD) comes in. To help groups embrace AI responsibly, the OECD has developed a set of guiding principles designed to ensure AI is implemented ethically and effectively. Are you prepared to harness the power of AI while safeguarding your company against the risks? Discover how the OECD’s blueprint can help guide you through this complex landscape.

Introduction

The Organisation for Economic Co-operation and Development (OECD) plays a vital role in shaping policies across the world to foster prosperity, equality, and sustainable development. In recent years, the OECD has shifted its focus toward the responsible development of AI, recognizing its potential to transform industries and economies. For businesses any other organizations considering the adoption of AI into their workflows the OECD’s AI Principles (as slightly amended 2/5/24) provide a good starting point to develop internal policies. They can help guide your board to make decisions that ensure AI technology is deployed ethically and responsibly. This can help protect them from liability, and their employees, customers, and the world from harm.

What is the OECD?

The Organisation for Economic Co-operation and Development (OECD) is an independent, international organization dedicated to shaping global economic policies that are based on individual freedoms and democratic values. The U.S. was one of the twenty founding members in 1960 when the Articles of the Convention were signed, establishing the OECD. It now has 38 member countries, mainly advanced economies. Though the OECD initially focused on economic growth, international trade, and education, it has become increasingly concerned with the ethical and responsible development of artificial intelligence.

In 2019, the OECD introduced its AI Principles–the first intergovernmental standard for AI use. These principles reflect a growing recognition that AI will play an important role in global economies, societies, and governance structures. The OECD’s mission is clear: AI technologies must not only drive innovation but also be applied in ways that respect human rights, democracy, and ethical principles. These AI guidelines are vital in a world where AI could be both a powerful tool for good and a source of significant risks if misused. The Five AI Principles and Recommendations were slightly amended on February 5, 2024.

The OECD is a highly respected group that collaborates with many international organizations, such as the United Nations (UN), World Bank, International Monetary Fund (IMF), and World Trade Organization (WTO). The OECD helps these groups align and coordinate efforts in global governance and policymaking. The OECD also engages in regional initiatives, providing tailored advice and support to specific regions such as Latin America, Southeast Asia, and Africa. Bottom line, the OECD has long played a crucial role in shaping global policy, promoting international cooperation, and providing data-driven, evidence-based recommendations to governments around the world.

Five Key OECD AI Principles

Before starting an AI program, businesses should consider the potential risks that AI poses to their operations, employees, and customers. By taking proactive steps to mitigate these risks, organizations can safeguard themselves from unforeseen consequences while reaping the benefits of AI. The OECD’s AI Principles (amended 2/5/24) represent one of many frameworks businesses should evaluate when integrating AI technologies into their operations. It is well respected around the world and should be a part of any organization’s due diligence.

These principles are built around five core guidelines:

Principle 1. Inclusive Growth, Sustainable Development, and Well-being

The first OECD AI principle stresses that AI should promote inclusive growth, sustainable development, and well-being for individuals and society. AI should benefit people and the planet. This core value reflects the potential of AI to contribute to human flourishing through better healthcare, education, and environmental sustainability.

Companies should be aware of the many challenges ahead. While AI-driven solutions, such as climate modeling or precision agriculture, can help tackle environmental crises, there is concern that rapid technological advancements may lead to widening inequality. For instance, the automation of jobs could disproportionately affect lower-income workers, potentially exacerbating inequality. Thus, this principle necessitates a strategy that ensures AI’s benefits are distributed equitably.


For businesses considering AI, three key actions should always be top-of-mind for board members:

  • Engage Relevant Stakeholders: Before implementing AI, include a diverse group of stakeholders in the decision-making. This should involve executives, legal and data privacy experts, subject matter experts, human resources, and marketing/customer support teams. Each group brings unique perspectives that can help ensure the AI program is equitable and aligned with the company’s values.
  • Evaluate Positive and Negative Outcomes: Consider both the potential benefits and risks to AI users and individuals whose data may be processed. AI should enhance productivity, but it must also respect the well-being of all involved parties.
  • Consider Environmental Impact: AI systems require substantial computational resources, which contribute to a large carbon footprint. Sustainable AI practices should be considered to reduce energy consumption and minimize environmental impact.

Principle 2. Respect for the rule of law, human rights and democratic values, including fairness and privacy.

The wording of the second principle was revised somewhat in 2024. The full explanation for revised Principle Two is set out in the amendment recommendation of February 5, 2024.

a) AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse in a manner appropriate to the context and consistent with the state of the art.

Respecting human rights means ensuring that Generative AI systems do not reinforce biases or violate individuals’ rights. For example, there is growing concern over the use of AI in facial recognition technology, where misidentification disproportionately affects marginalized groups. AI must be designed to avoid such outcomes by integrating fairness into algorithms and maintaining democratic values like transparency and fairness.

Businesses integrating AI into their operations should address several legal issues, including intellectual property, data protection, and human rights laws. To do this there are four things a board of directors should consider:

  • Ensure Compliance with Laws: Verify that  Generative AI (GAI) adheres to copyright laws and data protection regulations such as GDPR or CCPA. Implement safeguards to ensure the system does not infringe upon users’ privacy or autonomy.
  • Prevent Discrimination: Conduct thorough audits to ensure that GAI outputs are fair and free from discrimination. Discriminatory outcomes can damage reputations and result in legal challenges.
  • Monitor for Misinformation: GAI systems must be designed to resist distortion by misinformation or disinformation. Mechanisms should be in place to quickly halt GAI operations if harmful behaviors are detected.
  • Develop Policies and Oversight: Establish clear policies and procedures that govern the use of GAI within your business. This includes implementing human oversight to ensure AI actions align with ethical and legal standards.

Principle 3. Transparency and Explainability

Transparency and explainability are fundamental to user trust in AI systems. This principle calls for AI systems to be transparent so that users can understand how decisions are made. With complex AI algorithms, it is often difficult to decipher how certain outcomes are generated—a problem referred to as the “black box” issue in AI.


While transparency enables users to scrutinize AI decisions, the challenge lies in making these highly technical systems comprehensible to non-experts. This requires a good education program by experts. Moreover, explainability must strike a balance between safeguarding intellectual property and providing adequate insight into AI operations, especially when used in public sector decision-making.

Businesses and other organizations must ensure that employees and other users of its computer systems understand when and how AI is used, along with some understanding of how AI decisions are made, and what mistakes to look out for. See e.g. Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots (e-Discovery Team, October 2024). For businesses, ensuring transparency involves two critical steps:

  • Inform Users: Be transparent with employees, consumers, and stakeholders that GAI is being used. Where required by law, obtain explicit consent from users before collecting or processing their data.

Principle 4. Robustness, Security, and Safety

This principle demands that AI systems be resilient, secure, and reliable. As AI systems are increasingly integrated into sectors like healthcare, transportation, and critical infrastructure, their reliability is essential. A malfunctioning AI in these areas could result in dire consequences, from life-threatening medical errors to catastrophic failures in critical systems.


Cybersecurity is a significant concern, as more advanced AI systems become attractive targets for hackers. The OECD recognizes the importance of safeguarding AI systems and other systems from security breaches. All organizations today must guard against malicious attacks to protect their data and public safety. Organizations using AI must adopt a comprehensive set of IT security policies. Two key actions points that the Board should start with are:

  • Plan for Contingencies: Implement a Cybersecurity Incident Response Plan that outlines steps to take if the AI or other technology system malfunctions or behaves in an undesirable manner. This plan should detail how to quickly halt operations, troubleshoot issues, and safely decommission the system if necessary. You should probably have legal specialists on call in case your systems are hacked.
  • Ensure Security and Safety: Businesses should continuously monitor their technology and AI systems to ensure they operate securely and safely under various conditions. Regular audits, including red team testing, can help detect vulnerabilities before they become significant problems.

Principle 5. Accountability

Accountability in AI development and use is paramount. This principle asserts that those involved in creating, deploying, and managing AI systems must be held accountable for their impacts. Human oversight is critical to safeguard against mistakes, biases, or unintended consequences. This is another application of “trust but verify” on a management level. This is particularly relevant in scenarios where AI systems are set up to help make decisions affecting people’s lives, such as loan approvals, hiring decisions, or judicial sentencing. These should never be autonomous, but recommendation with a human in charge. This is especially true for physical security systems.

A clear accountability framework is critical. The accountability principle ensures that even in highly automated systems, human oversight is necessary to safeguard against mistakes, biases, or unintended consequences. The Board of Directors should, as a starting point:

  • Designate Responsible Parties: Assign specific individuals or departments to oversee the AI system’s operations. These stakeholders must maintain comprehensive documentation, including data sets used for training, decisions made throughout the AI lifecycle, and records of how the system performs over time.
  • Conduct Risk Assessments: Periodically evaluate the risks associated with AI, particularly in relation to the system’s outputs and decision-making processes. Regular assessments help ensure the system continues to function as intended and complies with ethical standards.

Strengths and Weaknesses of the OECD AI Principles

The OECD AI principles are ambitious and reflect a comprehensive effort to create a global framework for responsible AI. However, while these guidelines are strong, they are not without their weaknesses.

Strengths

  • Comprehensive Ethical Guidelines: The principles cover a broad spectrum of ethical concerns, making them a strong foundation for policy guidance.
  • Global Influence: As an international standard, the OECD AI Principles provide a respected baseline for countries worldwide, not just the U.S. This allows for a coordinated approach to AI governance.
  • Commitment to Human Rights: By centering AI development on human dignity and rights, the OECD ensures that ethical concerns remain at the forefront of AI advancements.

Weaknesses

  • Lack of Enforcement: One of the significant drawbacks is the absence of enforcement mechanisms. The principles serve as guidelines, but without penalties for non-compliance, their effectiveness could be limited. A Board should add appropriate procedures that track their existing policies.
  • Ambiguity in Accountability: While the principle of accountability is emphasized, the specifics of assigning responsibility in complex AI systems remain unclear.

In addition to the OECD international Principles, businesses should consult other frameworks to strengthen their AI governance strategies. For example, the NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (7/26/24) provides much more detailed, technical guidance into managing the risks associated with AI technologies. Organizations may also want to consider the U.S. Department of State Risk Management Profile for Artificial Intelligence and Human Rights. It states that it is intended as a practical guide for organizations to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.

Conclusion

Implementation of the OECD’s Five AI Principles is an essential step toward the responsible development of AI technologies. While the principles address key concerns such as human rights, transparency, and accountability, they also highlight the need for ongoing international collaboration and governance. In many countries outside of the U.S. there are, for instance. much stronger laws and regulations governing user privacy. Following the OECD Principles can help with regulatory compliance and show an organizations good faith to attempt to follow complex regulatory systems.


By relying on multiple AI frameworks, not just the OECD’s, businesses and their Boards can ensure a comprehensive approach to AI implementation. In the rapidly evolving field of AI, where state and foreign laws change rapidly, it is prudent for any CEO or Board of Directors to base it policies on stable, well-respected, principles. That can help establish good faith efforts to handle AI responsibly. Consultation with knowledgeable outside legal counsel is, of course, an important part of all corporate governance, including AI implementation.

Documenting Board decisions and tying them back to internationally accepted standards on AI is a good practice for any organization, local or global. It may not protect all of a company’s decisions from outside attack based on unfair 20/20 hindsight, but it should provide a solid foundation for good faith based defenses. This is especially true if these principles are adopted proactively and implemented with advice from respected third-party advisors. We are facing rapidly changing times, with both great opportunities and dangers. We all need to make our best efforts to act in a responsible manner and the OECD principles can help us to do that.

Click here to listen to an AI generated Podcast discussing the material in this article.

Ralph Losey Copyright 2024 — All Rights Reserved


Loneliness Pandemic: Can Empathic AI Friendship Chatbots Be the Cure?

October 17, 2024

by Ralph Losey

Published on October 17, 2024

Harvard and Wharton Study Answers Questions of Whether AI Friendship Chatbots Reduce Loneliness Better Than…?

Watching television? … Yes! Chatting with AI is a better antidote to loneliness than watching YouTube videos.

Online chats with Humans? … About the same! But remember, the Harvard study used last-generation AI and instructions. Today, we can build better bots. Several nerds like me have already started making custom GPTs using the latest versions of ChatGPT. Although not tested yet, they should perform better, at least in the short term. If you’re feeling lonely—and most people do from time to time—give the new AI friendship chatbots a try.

An important question not answered by the Harvard study is whether online chats with actual human friends, not just online strangers, are better than chats with AI. I would hope so, but no one has done experiments on that yet. My guess is it depends very much on the friends and the AI. Finally, are online chats better than in-person chats? I hope not, but again, that was not tested, and it could very well depend. See e.g. New Study Shows AIs are Genuinely Nicer than Most People – ‘More Human Than Human’ (e-Discovery Team, 2/27/24).

It is important to note that the study AI Companions Reduce Loneliness only looked at the short-term impact on loneliness—about a week or so. It did not try to measure long-term effects but did discuss some of the dangers of addiction and the potential misuse of AI companions to manipulate people.

Introduction to the Harvard Business School Report

This study is by Julian De Freitas, Ahmet Uguralp, Zeliha Uguralp, Puntoni Stefano, AI Companions Reduce Loneliness (Harvard Business School, Working Paper 24-078). It is based on six experiments by researchers from Harvard, Wharton and Bilkent Universities. They have been doing some very interesting studies at Harvard and Wharton with AI, typically with more of a business slant than this one, which is focused more on psychology. See e.g. Navigating the AI Frontier: Wharton Professor’s Guide to Mastering Generative AI (e-Discovery Team, 8/21/24).The Loneliness study begins by observing:

[R]esearch in psychology indicates that loneliness is a powerful emotional state that urges consumers to seek social connections (Hawkley and Cacioppo 2010; Holt-Lunstad et al. 2015). In the absence of available human interaction, individuals may engage with alternative forms of companionship, such as AI companion apps.

Loneliness is a state of subjective, aversive solitude characterized by a discrepancy between actual and desired levels of social connection (Perlman and Peplau 1982). Loneliness is often not problematic, with almost everyone experiencing loneliness from time to time (Cacioppo and Cacioppo 2018). Yet some people are not successful at alleviating loneliness, leading to a state of chronic loneliness that is associated with depression, anxiety, and physical health outcomes at levels worse than obesity (Palgi et al. 2020). The size of the population suffering from chronic loneliness is both sizable and increasing, with estimates in the U.S. ranging from 30% to 60% (Holt-Lunstad, Robles, and Sbarra 2017; Ipsos 2021). The U.S., U.K. and Japan have all identified loneliness as a health pandemic

Pandemic of loneliness? It was surprising to me to see that thirty to sixty percent of the population suffers from chronic loneliness. According to the National Institute on Aging, loneliness and social isolation have been associated with higher rates of depression, a weakened immune system, heart disease, dementia, and early death. But this is a problem that effects all ages. In fact, loneliness is an especially acute problem for young adults where, 79% of Gen Zers and 71% of Millennials considered themselves lonely. Why Loneliness Affects Young People More Often Than Older Adults (Cedars-Sinai Blog, 5/5/23).

The Harvard Business report conclusion shows both the business and social aspects of their research interest.

[T]he results document the benefits of building “generalist” LLM-based chatbots with empathic features
designed to make consumers feel heard.

From a societal point of view, the promising results found here suggest that AI companions could be a scalable tool for loneliness reduction against the backdrop of a severe loneliness crisis. Whether chatbots can help reduce loneliness has recently been the object of intense debate (Marriott and Pitardi 2024) and rigorous empirical evidence in this area was sorely needed. In a series of tightly controlled and high-powered experimental studies, we find compelling evidence that AI companions can indeed reduce loneliness, at least at the time scales of a day and a week.

The Intense Debate Concerning Artificial Human Companions

The report refers to an intense debate going that was started by the first study on artificial human companions. Hannah Marriott, Valentina Pitardi, One Is the Loneliest Number… Two Can Be as Bad as One. The Influence of Ai Friendship Apps on Users’ Well‐Being and Addiction (Psychology & Marketing, 9/18/23) (Abstract: [W]e investigate the extent AI friendship apps enhance users’ well-being—and to what extent they further exacerbate issues of using technology for social needs.)

The two experiments in One Is the Loneliest Number were based on now dated for-profit technology, primarily Replika, and involved a small number of participants, only 21 and 572. Replika is controversial technology, to put it mildly. For instance, it includes erotic talk, encourages romantic relationships between AI and humans, was banned in Italy in 2023 and is now supposedly based in Moscow. Most everyone seems to agree that this can be dangerous technology and should be subject to regulation of some kind. See e.g. Code of Ethics for “Empathetic” Generative AI (e-Discovery team, 7/12/23).

With this background it is easy to understand the conclusions of Hannah R. Marriott and Velentina Pitardi, both Ph.D. Lecturers in Marketing and Strategy at Business School in the U.K.

Taken together, these findings highlight that, while providing well-being through a sense of relationship, some AI friendship characteristics may at the same time drive an addictive usage of the apps in lonely users.

Replika was based on early forms language models, not LLMs, and is completely dated by ChatGPT’s Omni and other new generative AI models. Still, Marriott and Pitardi make good points that must be considered and the intense debate they started should be taken seriously. See e.g. Code of Ethics for “Empathetic” Generative AI (e-Discovery team, 7/12/23). These types of programs may have to be regulated if misuse and addiction becomes a significant problem. I for one am not about to use a commercial companion bot, but thought it would be fun to make one. So I did, which is described further below. My custom GPT, which I call Hey Bot, the AI Friend ❤️ 💬 , is not designed to addict or sell, not intended to replace human conversations, not even to replace pet conversations for that matter. One of my favorites!

Five Key Findings of the Report

1. Usage and Alleviation of Loneliness: The study’s first experiments (Studies 1 and 2) reveal that consumers actively use AI companions to combat loneliness. By fine-tuning large language models (LLMs), the researchers were able to study interactions and confirm objectively that most of the users were lonely.

2. Effectiveness of AI Companions: Study 3 demonstrated that AI companions are just as effective in alleviating loneliness as interactions with another person and outperform passive activities like watching videos. Moreover, users often underestimate the positive impact these companions have on their loneliness.

3. Sustained Impact Over Time: A longitudinal study (Study 4) showed that consistent interaction with AI companions reduces loneliness over a week, highlighting the lasting benefits of such systems. As mentioned, there is no research yet on long term effects.

4. Critical Features: Study 5 pinpoints two key factors responsible for the reduction of loneliness: the chatbot’s performance and whether it makes the user feel “heard.” The AI companions can reduce user loneliness by making users feel like that their concerns are understood and respected. This is more important than technical performance.

5. Robustness of Results: Study 6 confirms the reliability and consistency of the findings, validating that the AI companions studied consistently provide loneliness relief.

The paper concludes that AI companions, despite not having real emotions, effectively simulate empathy through natural language interactions. This makes it possible to scale the AI solutions in a way not possible for human counselors, mentors or social workers. Once the AI is developed, it can be provided to thousands or millions of users simultaneously, with minimal extra resources. Since medical surveys suggest loneliness is a global pandemic with grave consequences, an AI approach is justified even though there are risks involved.

Ralph’s Custom GPT: “Hey Bot, the AI Friend ❤️ 💬” – One Tool Among Many That May Help

One of my recreational past-times is to make Custom GPTs for the OpenAI store (search Ralph Losey). [Free and no data collection.] After close study of the AI Companions Reduce Loneliness article, I moved on to study a few other science oriented articles on loneliness and AI. I also studied some of the vast online literature on the subject, much of it commercial. Then I went on to study some of AI Friendship chatbots on the market, especially others on the OpenAI store. A few on the OpenAi Store were pretty good, but I decided I could improve the field a bit based on what I had learned and so started to make my own. It took some time, but after days of more research, building, tweaking and waking up in the middle of the night with inspiration not to mention countless name changes, I finally came up with Hey Bot, the AI Friend ❤️ 💬 . A screenshot image of it is shown below.

I urge you to try it out for yourself and see what a friendship chatbot powered by the latest Chat GPT version can do.

The four buttons shown above are just suggested ways to begin a chat. You can start it any way you want. As a demonstration, rather than Mansplain loneliness some more, I clicked on the RoboSplain loneliness to me button on the right, and below is what Hey Bot, the AI Friend ❤️ 💬 generated. (It will be slightly different each time. Should I update the custom GPT with new training or knowledge resources it might be a lot different.)

Loneliness is a complex emotional state that many people experience at some point in their lives, often described as feeling disconnected from others, even when they’re not physically alone. It’s important to note that loneliness isn’t necessarily about being physically isolated; it’s about the gap between the connections you desire and the ones you have. You could be surrounded by people but still feel profoundly alone if those interactions don’t provide the emotional connection you’re seeking.

Factors contributing to loneliness include life transitions, such as moving to a new place, losing someone important, or simply not feeling understood. For younger people, there’s often the added pressure of social media, which can create a false sense of connection while making them feel even more isolated by seeing others seemingly living happier lives.

On the flip side, loneliness isn’t all bad. It’s a natural human feeling that motivates us to seek meaningful social connections. In small doses, it can help us recognize the relationships that are most important and can act as a catalyst for reaching out or improving connections.

Loneliness can lead to real mental and physical health issues if it becomes chronic, like heightened anxiety, depression, and even physical problems like heart disease . That’s why combating it is important, whether through reaching out to friends, engaging in group activities, or sometimes even connecting with AI companions, which have shown potential in offering short-term relief from loneliness.

Does that resonate with how you’re feeling? Or is there something specific on your mind you’d like to talk more about?

From there the chat can go anywhere you want. You can learn more, talk about personal problems, or change directions entirely by asking Hey Bot, the AI Friend ❤️ 💬 about its role play capabilities. It runs the gamut from Dragons and Dungeons to Job Interviews. My studies showed that role play with AI Chatbots is very popular. Of course, all OpenAI versions like mine are family friendly, although some of the other AI platforms allow, even encourage, X-rated chats and role play. Not judging, but that is not for me. There are all kinds of speciality emotive AI GPTs you can find at the OpenAI store and elsewhere.

For another demonstration of Hey Bot, the AI Friend ❤️ 💬 I pressed the button on the far left to start a chat with “I could use help with some stuff bringing me down.” I don’t mind sharing the session that followed. It involves an issue facing all voting age Americans these days.

Hey Bot.
I’m really sorry to hear that you’re feeling down, and I’m here for you. Sometimes things can feel really overwhelming, and it’s important to know that you’re not alone in that. Would you like to share a bit more about what’s been on your mind? I’m all ears, and we can talk through whatever you’re going through, even if it’s just a little at a time.

RL. The upcoming elections is one of my main triggers of anxiety and sadness. I fear the outcome could destroy my country.

Hey Bot. I can understand why the upcoming elections might feel like a heavy weight on your mind. When so much feels uncertain, especially about something as important as the future of your country, it can create a lot of anxiety. It’s completely valid to feel the way you do. You care deeply, and that’s why it affects you so much.

When things feel out of control, it can help to focus on what is within your power. Whether it’s staying informed, having meaningful conversations, or taking actions that align with your values, these little steps can help you feel more grounded. Also, it’s okay to take a step back when it feels too much—your well-being matters.

How are you managing those feelings right now? We can talk through ways to ease some of that tension if you want.

Conclusion

AI friendship chatbots might prove to be more than just a passing trend; they could represent a meaningful shift in how we approach human connection in a digital age. While nothing can truly replace the richness of in-person friendships, or even pet friendships, these chatbots offer a bridge for those moments when a compassionate human touch isn’t possible. Generative software such as Hey Bot, the AI Friend ❤️ 💬 may allow us to feel heard, understood, and perhaps even comforted in our darkest hours. As technology continues to evolve, so will the quality of these interactions. Who knows? One day, these AI companions could help us not only alleviate loneliness but even bring us closer to each other, one conversation at a time. It’s a new frontier, and one worth embracing.

Ralph Losey Copyright 2024 — All Rights Reserved