Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page collecting the Draft Principles under development by other AI Ethics groups around the world. We added a new one that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). The article lists six proposed principles from the talented Nicolas Economou, the CEO of electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles in conferences that it will sponsor in the future. This is explained in the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber, and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists, and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See, e.g., AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, the CEO of electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific, legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See, e.g., TARcourse.com. The principles have obvious applications in all aspects of society, not just the Law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …
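In legal search, measurability usually comes down to familiar retrieval metrics such as recall and precision, estimated from a sample of reviewed documents. Here is a minimal sketch of that arithmetic; the function name and the sample counts are hypothetical, for illustration only, and are not drawn from Economou's article:

```python
def recall_precision(true_positives, false_negatives, false_positives):
    """Compute recall and precision from document review counts.

    recall    = TP / (TP + FN): share of the responsive documents actually found
    precision = TP / (TP + FP): share of the retrieved documents that are responsive
    """
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return recall, precision

# Hypothetical review sample: 80 responsive documents found,
# 20 responsive documents missed, 40 non-responsive false hits.
r, p = recall_precision(80, 20, 40)
print(f"recall={r:.2f}, precision={p:.2f}")  # recall=0.80, precision=0.67
```

Numbers like these give both experts and ordinary citizens a concrete way to gauge whether a predictive coding process is meeting its objectives, which is the point of the principle.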

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and the article’s mention of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of physicist David Bohm and philosopher Martin Buber. That is one reason that we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____

More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new blogs to the AI-Ethics website. The first is a report of the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled, Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal on how we propose to do that, namely by hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of base consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens in order to understand the other side. If they hear at all, it is only to analyze and respond, to strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule on and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google’s head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google’s AI Boss Blasts Musk’s Scare Tactics on Machine Takeover (Bloomberg 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They only exacerbate the problem, and already have. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turnout we could easily have dozens of breakout sessions, with professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some levels of beginning consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move onto agreement and action.

Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute is from none other than Elon Musk. The 2017 conference was held in Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months as they are the most detailed list of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to press as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As we are mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma alright and are ready to go, with or without Musk’s millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page in AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __, 189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering Professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield’s work here in the coming months. He is the rare engineer who also understands the law.
