Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page collecting the Draft Principles being developed by other AI Ethics groups around the world. We added a new one that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). It lists six proposed principles from the talented Nicolas Economou, the CEO of the electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles in conferences that it will sponsor in the future. This is explained on the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See, e.g., AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, the CEO of the electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific and legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles, as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See, e.g., TARcourse.com. The principles have obvious applications in all aspects of society, not just the law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …
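Transparency of this kind can be made concrete in a predictive coding workflow by logging the provenance of every training decision. Below is a minimal sketch of such an audit trail; the field names and helper function are our own illustration, not part of Economou's article or of any e-discovery product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrainingEvent:
    """One auditable step in building a predictive coding model."""
    doc_id: str
    label: str       # "relevant" or "not relevant"
    coded_by: str    # which expert made the call
    rationale: str   # why this document was chosen for training
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log = []  # the traceable record of cause and effect Principle 2 calls for

def record_training_call(doc_id, label, coded_by, rationale):
    """Append one training decision to the audit trail."""
    event = TrainingEvent(doc_id, label, coded_by, rationale)
    audit_log.append(event)
    return event

record_training_call("DOC-00412", "relevant", "reviewer-1",
                     "seed-set search hit, confirmed by subject-matter expert")
for event in audit_log:
    print(event)
```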

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …
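In legal search, measurability of this kind is commonly operationalized by sampling: a human expert codes a random control set, and the machine review's recall and precision are then estimated against it. Here is a minimal sketch of that calculation; the toy data and function are ours, offered only to show the arithmetic behind the principle.

```python
def estimate_recall_precision(control_set, ai_relevant):
    """Estimate recall and precision of an AI review against a
    human-coded random control set.

    control_set: dict of doc_id -> True/False (human relevance call)
    ai_relevant: set of doc_ids the AI marked relevant
    """
    truly_relevant = {d for d, rel in control_set.items() if rel}
    found = truly_relevant & ai_relevant
    false_hits = {d for d in ai_relevant
                  if d in control_set and not control_set[d]}

    recall = len(found) / len(truly_relevant) if truly_relevant else 0.0
    flagged = len(found) + len(false_hits)
    precision = len(found) / flagged if flagged else 0.0
    return recall, precision

# Toy control set: the human expert says docs 0, 3, 6 and 9 are relevant.
control = {f"doc{i}": (i % 3 == 0) for i in range(10)}
ai_calls = {"doc0", "doc3", "doc4", "doc9"}  # AI finds 3 of 4, 1 false hit
recall, precision = estimate_recall_precision(control, ai_calls)
print(f"estimated recall {recall:.0%}, precision {precision:.0%}")  # 75%, 75%
```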

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and the article’s mention of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of physicist David Bohm and philosopher Martin Buber. That is one reason we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____


More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new pages to the AI-Ethics website. The first is a report on the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal for how to do that: hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of base consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens in order to understand the other side; if they hear at all, it is only to analyze, respond and strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google’s head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google’s AI Boss Blasts Musk’s Scare Tactics on Machine Takeover (Bloomberg 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They will, and already have, only exacerbated the problem. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turnout we could easily have dozens of breakout sessions, with professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some beginning levels of consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move on to agreement and action.


Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute is from none other than Elon Musk. The 2017 conference was held in Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months as they are the most detailed list of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to press as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma alright and are ready to go, with or without Musk’s millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page on AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __, 189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield’s work here in the coming months. He is the rare engineer who also understands the law.


New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. This institute, called AI2, was founded by Paul G. Allen in 2014. The mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the now-billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?
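As it happens, that question is now easy to test. Here is a minimal sketch using the open-source face_recognition Python library; the photo file names are placeholders, and 0.6 is the library's conventional match threshold.

```python
# pip install face_recognition
import face_recognition

# Placeholder file names for the 1981 and 2013 photos.
img_1981 = face_recognition.load_image_file("gates_allen_1981.jpg")
img_2013 = face_recognition.load_image_file("gates_allen_2013.jpg")

# One 128-dimensional encoding per face found in each image.
faces_1981 = face_recognition.face_encodings(img_1981)
faces_2013 = face_recognition.face_encodings(img_2013)

# Compare every face in the old photo to every face in the new one.
for i, old_face in enumerate(faces_1981):
    distances = face_recognition.face_distance(faces_2013, old_face)
    for j, dist in enumerate(distances):
        verdict = "same person" if dist < 0.6 else "different people"
        print(f"1981 face {i} vs. 2013 face {j}: "
              f"distance {dist:.2f} ({verdict})")
```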

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical minded (he is on the No-Fear side of the Superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears no similarity to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of computer science and CEO of the Allen Institute for Artificial Intelligence, has created the three draft principles of AI Ethics shown below. He first announced them in a New York Times editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See his TED Talk Artificial Intelligence will empower us, not exterminate us (TEDx Seattle; November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule on the duty to disclose. It involved a teacher at Georgia Tech named Jill Watson, who served as a teaching assistant in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not suspect that a teacher named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.

This story was confirmed by a later reply to the editorial from Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted as not adding much, and that the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but all humans, especially those using social media. Many humans are fooled every day by bots that pose as real people and create fake news to manipulate them. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You’re Not a Robot (NYT July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).
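Technically, the disclosure rule is trivial to implement; the obstacle is will, not code. As a toy sketch of what bot-side compliance and platform-side enforcement could look like (entirely our own invention, not any platform's actual API):

```python
DISCLOSURE = "[automated account: this message was generated by an AI, not a human]"

class UndisclosedBotError(Exception):
    """Raised when a bot tries to post without identifying itself."""

def send_bot_message(post, text):
    """Bot-side: append the required disclosure before posting."""
    post(f"{text}\n{DISCLOSURE}")

def platform_gate(text):
    """Platform-side: reject registered-bot traffic lacking the disclosure."""
    if DISCLOSURE not in text:
        raise UndisclosedBotError("bot message lacks required disclosure")
    print("posted:", text)

# A compliant bot passes the gate; a deceptive one raises an error.
send_bot_message(platform_gate, "Reminder: assignment 3 is due Friday.")
```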

In the concluding section of their 2017 scholarly paper Computational Propaganda, titled The Rise of Bots: Implications for Politics, Policy, and Method, Woolley and Guilbeault state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.
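Detection need not wait for regulation. Even crude heuristics catch much of this traffic; the sketch below scores accounts on volume, age and repetitiveness. The thresholds are purely illustrative and are not drawn from the Oxford study quoted above.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float
    account_age_days: int
    duplicate_ratio: float  # share of posts repeating earlier posts verbatim

def bot_likelihood(acct):
    """Toy 0-to-1 score: high-volume, young, repetitive accounts look bot-like."""
    score = 0.0
    if acct.tweets_per_day > 50:     # sustained superhuman posting volume
        score += 0.4
    if acct.account_age_days < 30:   # created just before the campaign
        score += 0.3
    if acct.duplicate_ratio > 0.5:   # mostly copy-and-paste propaganda
        score += 0.3
    return score

suspect = Account("@definitely_human", tweets_per_day=400,
                  account_age_days=12, duplicate_ratio=0.8)
print(bot_likelihood(suspect))  # 1.0 -- almost certainly automated
```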

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be enforced as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term, Oren’s first point about the application of laws is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. As mostly lawyers around here at this point, we strongly agree. We also agree with his recommendation in the NYT editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. No need to amend; clarify would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.


New Homework Added to the TAR Course and a New Video Added to AI-Ethics

September 3, 2017

We have added a homework assignment to Class Sixteen of the TAR Course. This is the next-to-last class in the course. Here we cover the eighth step of our eight-step routine, Phased Production. I share the full homework assignment below for those not yet familiar with our instructional methods, especially our take on homework. Learning is, or should be, a lifelong process.

But before we get to that, I want to share the new video added to the AI-Ethics.com website at the end of the Intro/Mission page. Here I articulate the opinion of many in the AI world that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. This team approach has worked well for electronic discovery, and Losey is convinced it will work for AI Law as well. AI Ethics is one of the most important issues facing humanity today. It is way too important to leave to lawyers and government regulators alone. It is also way too important to leave to AI coders and professors to improvise on their own. We have to engage in true dialogue and collaborate.

______

Now back to the more mundane world of homework and learning the Team’s latest process for the application of machine learning to find evidence for trial. Here is the new homework assignment for Class Sixteen of the TAR Course.

____

Go on to the Seventeenth and last class, or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: It is important to have a good understanding of privilege and work-product protection. The basic U.S. Supreme Court case in this area is Hickman v. Taylor, 329 U.S. 495 (1947). Another key case to know is Upjohn Co. v. United States, 449 U.S. 383 (1981). For an authoritative digest of case law on the subject with an e-discovery perspective, download and study The Sedona Conference Commentary on Protection of Privileged ESI (Dec. 2015).

EXERCISES: Study Judge Andrew Peck’s form 502(d) order. You can find it here. His form order started off as just two sentences, but he later added a third sentence at the end:

The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

Do you know the purpose of this additional sentence? Why might someone oppose a 502(d) Order? What does that tell you about them? What does that tell the judge about them? My law firm has been opposed a few times, but we have never failed. Well, there was that one time where both sides agreed and the judge would not enter the stipulated order, saying it was not necessary because he would provide such protection anyway. So, mission accomplished.

Do you think it is overly hyper for us to recommend that a 502(d) Order be entered in every case where there is ESI review and production? Think that some cases are too small and too easy to bother with that? That it is o.k. to just have a claw-back agreement? Well, take a look at this opinion and you may well change your mind. Irth Solutions, LLC v. Windstream Communications, LLC (S.D. Ohio, E. Div., 8/2/17). Do you think this was a fair decision? What do you think about the partner putting all of the blame on the senior associate (a seven-year) for the mistaken production of privileged ESI? What do you think of the senior associate who in turn blamed the junior associate (a two-year)? The opinion does not state who signed the Rule 26(g) response to the request to produce. Do you think that should matter? By the way, having been a partner in a law firm since at least 1984, I think this kind of blame-game behavior was reprehensible!

Students are invited to leave a public comment below. Insights that might help other students are especially welcome. Let’s collaborate!
