Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page with the collection of Draft Principles underway by other AI Ethics groups around the world. We added a new one that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). It lists six proposed principles from the talented Nicolas Economou, the CEO of the electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles in conferences that it will sponsor in the future. This is explained on the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See, e.g., AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, the CEO of the electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific, legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles, as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See, e.g., TARcourse.com. The principles have obvious applications in all aspects of society, not just the Law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and the article’s mention of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of physicist David Bohm and philosopher Martin Buber. That is one reason that we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____



Good New 33-Point e-Discovery Checklist From Miami

October 1, 2017

The United States District Court for the Southern District of Florida is now revising its Local Rule 16.1 on Pretrial Procedure in Civil Actions. (In the interests of full disclosure, I am a member of that Court, but am not on the Committee that prepared the proposed revisions.) The revisions pertain to Rule 16.1(b), Scheduling Conference and Order. The amendments will go into effect on December 1, 2017. These amendments include an excellent new 33-point e-discovery checklist.

The main revision in the local rules is the addition of a new subsection (K) under 16.1(b)(2) Conference Report that lists what must be included in the attorneys’ report:

(K) any issues about: (i) disclosure, discovery, or preservation of electronically stored information, including the form or forms in which it should be produced; (ii) claims of privilege or of protection as trial-preparation materials, including — if the parties agree on a procedure to assert those claims after production — whether to ask the court to include their agreement in an order under Federal Rule of Evidence 502; and (iii) when the parties have agreed to use the ESI Checklist available on the Court’s website (www.flsd.uscourts.gov), matters enumerated on the ESI Checklist;

This rule revision and checklist are a fine addition to the local rules. My congratulations to the ad hoc committee that prepared them. My only criticism of the rule change is that it does not go far enough on Federal Rule of Evidence 502. A 502(d) order should be entered in every case where there is a production of ESI. It should be a standing order and follow the standard language used by Judge Andrew Peck and many others, including my law firm:

1. The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
2. Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

My only criticism of the ESI Checklist itself is its use of vague bullet points instead of numbering. With that one exception, other courts around the country should consider using the 33-point ESI Checklist for their own local rules. Many already have their own similar checklists, of course, but this is the latest and one of the best. It is complete, but not overly long and complicated.

Checklist Use Is Discretionary

The first thing to note about this new local rule 16.1(b)(2)(K) is that it does not require attorneys to use or follow the ESI Checklist in their discovery plan discussions. Perhaps future versions of the rule will require its use, but I agree with the Ad Hoc Committee’s thinking here to start with discretionary use. There are still plenty of Milton-type lawyers in Florida, and elsewhere, who only think of discovery as a near-endless, mind-numbing exercise of looking at boxes of paper. IMO there are way too many of these guys, young and old, but the clients who retain attorneys seem to love them, so what can you do? They do often seem to win in the end, as all Office Space fans know. I once knew a multi-zillionaire attorney in Miami whose office held so much paper that you had to clear a path just to walk to his desk.

If, however, the parties are cool and do agree to use the ESI Checklist, then they are required by the new local rule to include the Checklist points in their Conference Report. It is unclear whether they must include all 33 items in their Report, which, by the way, is supposed to be a Joint Report, but I predict that most will. The Checklist does, however, include an introductory sentence that justifies partial use: “The usefulness of any particular topic may depend on the nature and complexity of the matter.”

I also predict that some judges will strongly encourage the use of the Checklist, the way that only judges can do. It may even become an Order when the failure to use it causes time-consuming disputes and other issues that could have been avoided by timely discussion of the checklist points. In most complex cases especially, attorneys would be well advised to agree to this list and not hide their heads in the sand of wishful thinking. Better to be realistic and spend the time necessary for the proper use of the ESI List. The List is an excellent way to timely and efficiently comply with the rules.

Preparing for 26(f) conferences and talking about all of the items on the list may increase the costs somewhat upfront, but this expense will almost certainly pay substantial cost-saving dividends down the road. Attorneys and their clients should not be penny wise and pound foolish. You can have your cake and eat it too. Case preparation does not drive up the costs of litigation. It allows you to win, even in the close cases, or at least to mitigate damages. The failure to prepare is not only a sure way to lose, but also a sure way to generate high fees from motion practice over discovery. Better to avoid and contain the disputes than to hope they will never happen. Hoping for the best, including incompetence by opposing counsel, is not what lawyers are paid to do.

ESI Checklist

This blog will next explore and comment on the 33-point checklist. I begin by reproducing the checklist itself below in somewhat altered form. I have not revised any of the words in the checklist, but I have added numbering not found in the original to facilitate discussion (Roman numerals for the sections and letters for the items). So it is fair to say my revisions are not of actual content, but of metadata only. I also add after each item a personal comment, put in parentheses, italicized and in blue font, so as to be very clear on what is Losey and what is not.

If you want to see the original, bullet points and all, the PDF version of the Checklist and Rules is published on the Court’s website. Go to the end of the document (currently pages 78-79) to find the ESI Checklist.

United States District Court
Southern District of Florida

Checklist for Rule 26(f) Conference
Regarding Electronically Stored Information (“ESI”)
(Original bullet points changed to letters; Losey comments, put in parentheses after each list item, are shown in italics and blue font)

In connection with the Federal Rule of Civil Procedure 26(f) conference and in preparing the Local Rule 16.1(b)(2) conference report, the Court encourages the use of the following checklist. The usefulness of any particular topic may depend on the nature and complexity of the matter.

I. Preservation

A. The ranges of creation or receipt dates for any ESI to be preserved. (In almost every case there is a date before which the ESI is not relevant. In many cases there is also an end date after which it is not relevant. Disagreement between parties on date range should be resolved by phased discovery and a reservation of rights to object after the first phase is completed.)

B. The description of ESI from sources that are not reasonably accessible because of undue burden or cost and that will not be reviewed for responsiveness or produced, but that will be preserved in accordance with Federal Rule of Civil Procedure 26(b)(2)(B). (Backup ESI is almost always so protected, unless it has the only copy of important information.)

C. The description of ESI from sources that: (a) the party believes could contain relevant information; but (b) has determined, under the proportionality factors, is not discoverable and should not be preserved. (The keyword here is “could.” Maybe it has relevant information, maybe it does not. Also important in determining discoverability under governing proportionality rules is the “importance” of the information to material issues of fact in dispute. You must consider probative value. In my experience with big data, most “merely relevant” information is a waste of time. There is too little probative value to most of it to even try to capture it all.)

D. Whether to continue any interdiction of any document-destruction program, such as ongoing erasures of e-mails, voicemails, and other electronically recorded material. (Typically the key custodians identified should have their email and voicemail auto-delete functions turned off, but as to them only, not the whole enterprise. Plus, I cannot recall voicemail ever making a difference in a case. It typically has low probative value.)

E. The number and names or general job titles or descriptions of custodians for whom ESI will be preserved (e.g., “HR head,” “scientist,” “marketing manager”). (This is the broad list of key custodians. They are often divided into classes by the probable importance of their ESI to the outcome of the case. Although all classes may be preserved, only the most important are actually reviewed, at least at first.)

F. The list of systems, if any, that contain ESI not associated with individual custodians and that will be preserved, such as enterprise databases. (A list not associated with custodians usually refers to department type servers where a number of people in the department could store documents, to document management systems, or to general databases, such as payroll.)

G. Any disputes related to scope or manner of preservation. (You should get these issues resolved as soon as possible. Typically you would want to preserve until the issue is resolved, unless the expense is too great or the other side’s position is too unreasonable. But even then you run some risk, and so quick adjudication of issues like this is important.)

II. Liaison

A. The identity of each party’s e-discovery liaison, who will be knowledgeable about and responsible for each party’s ESI. (I always like to see the role and name that I invented back in 2006 – “e-discovery liaison” – used by a court. One of my first e-Discovery “Liaisons” is now a U.S. Magistrate Judge in the Southern District, and a very good one at that, especially in e-discovery.)

III. Informal Discovery About Location and Types of Systems

A. Identification of systems from which discovery will be prioritized (e.g., e-mail, finance, HR systems). (Typically the communications between people, the contemporaneous writings, are the ESI with the highest probative value.)

B.  Descriptions and location of systems in which potentially discoverable information is stored. (Typically this means a description of all IT systems where relevant ESI might be stored, and not just the high value targets like communications. Document management systems and network drives might also be listed here.)

C.  How potentially discoverable information is stored. (This is a follow-up on the prior checklist item that describes how the ESI is stored. Usually it is stored manually at the discretion of listed custodians. They either save the documents or email or not. Where they save it may also be within their control. They may save it on personal thumb drives, or they may print it out to store. You have to interview the custodians to find out how they stored it. Sometimes the potentially discoverable information is stored automatically by other software systems, such as payroll systems, and sometimes the location is predetermined.)

D.  How discoverable information can be collected from systems and media in which it is stored. (Usually it is collected by copying. That needs to be done carefully so that metadata is not changed. Not hard to do, but IT expertise is usually required to do it correctly. Forensic collection is usually not necessary, especially collection of double-deleted files and unallocated space, as such ESI is usually protected under 26(b)(2)(B).)
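To show what careful copying can mean in practice, here is a minimal Python sketch, assuming a simple file-level collection. The function names are my own invention, and real collection tools do much more (system metadata capture, logging, chain of custody), so treat this as illustration only. It copies a file while preserving its filesystem timestamps and then confirms by hash that the copy is bit-for-bit identical to the original.

```python
import hashlib
import shutil
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_copy(source: Path, dest: Path) -> str:
    """Copy a file, preserving filesystem timestamps, then verify the copy."""
    shutil.copy2(source, dest)  # copy2 also copies modification times
    original, duplicate = hash_file(source), hash_file(dest)
    if original != duplicate:
        raise ValueError(f"Hash mismatch copying {source}")
    return original  # record this hash in the collection log
```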

IV. Proportionality and Costs

A.  The amount and nature of the claims being made by either party. (The monetary value should not be exaggerated by plaintiffs, but usually they feel the need to do so for posturing purposes and other reasons. I suggest this impediment be avoided by disclaimers and reservations of rights. Beyond amount issues, the “nature” of the claims should be carefully understood and discussed with an aim to identifying the actual disputed facts. Discovery should always be focused and have evidentiary value. It is never an end in itself, or at least should not be. Also, do not forget that subject matter discovery is no longer permitted under revised Rule 26(b)(1). It is now limited to claims and defenses that have actually been raised in the case.)

B.  The nature and scope of burdens associated with the proposed preservation and discovery of ESI. (Try to include the actual monetary burden expected, usually with a range, but restrain the urge to exaggerate. Spend the time to do this right and get into some detailed metrics. Consult an expert where necessary, but never b.s. the judge. Judges do not like that and will remember you.)

C.  The likely benefit of the proposed discovery. (The requesting party should spell it out. Fishing expeditions are not permitted. The old “reasonably calculated” jargon is gone from new Rule 26(b)(1), at least as a definition of scope, and that change voids a lot of case-law on the subject.)

D.  Costs that the parties will share to reduce overall discovery expenses, such as the use of a common electronic-discovery vendor or a shared document repository, or other cost saving measures. (In my experience this is very rare. Typically it only makes sense in very big cases and/or between co-defendants or co-plaintiffs. There are usually too many confidentiality issues to share a vendor with opposing parties.)

E.  Limits on the scope of preservation or other cost-saving measures. (Cost savings should always be considered. This is required of all parties, attorneys and judges under the 2015 revision to Rule 1, FRCP. So too is “speedy” and “just.”)

F.  Whether there is relevant ESI that will not be preserved in accordance with Federal Rule of Civil Procedure 26(b)(1), requiring discovery to be proportionate to the needs of the case. (Typically the answer here is yes, or should be, and some discussion may be required. Preservation is required by law to be reasonable, not exhaustive or perfect. Reasonable means proportionate. Moreover, if ESI is not relevant under the proportionate definitions of revised Rule 26(b)(1) then it does not have to be preserved because only relevant ESI need be preserved.)

V. Search

A.  The search method(s), including specific words or phrases or other methodology, that will be used to identify discoverable ESI and filter out ESI that is not subject to discovery. (Please, people: exchanging keywords should be just the beginning, not the whole process. It is only one of many possible search methods. Use the Hybrid Multimodal method, which all readers of my blog and books should know pretty well by now.)

B.  The quality-control method(s) the producing party will use to evaluate whether a production is missing relevant ESI or contains substantial amounts of irrelevant ESI. (The problem of missing relevant ESI is the problem of Recall, whereas the problem of too much irrelevant ESI is the problem of Precision, and also, to some extent, of duplication. All good electronic document review experts have a number of different quality-control techniques to improve recall and precision. Not an expert? Then perhaps you should consult with one in your firm, or if you have none (pity), then ask your e-discovery vendor. A short worked example of both metrics follows below.)
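To make Recall and Precision concrete, here is a minimal Python sketch using hypothetical review numbers of my own invention. It computes both metrics from the basic counts of a document review, following the standard definitions; it is not any particular vendor’s quality-control tool.

```python
def recall_precision(true_pos: int, false_pos: int, false_neg: int):
    """Recall: fraction of all relevant documents the review found.
    Precision: fraction of produced documents that are actually relevant."""
    recall = true_pos / (true_pos + false_neg)
    precision = true_pos / (true_pos + false_pos)
    return recall, precision

# Hypothetical review: 8,000 relevant documents found and produced,
# 2,000 irrelevant documents mistakenly produced, and 2,000 relevant
# documents missed by the search.
r, p = recall_precision(true_pos=8000, false_pos=2000, false_neg=2000)
print(f"Recall: {r:.0%}  Precision: {p:.0%}")  # Recall: 80%  Precision: 80%
```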

VI. Phasing

A.  Whether it is appropriate to conduct discovery of ESI in phases. (Yes. It is a great way to resolve disagreements by postponing excessive demands for second or third phases. Chances are these other phases will not be necessary because all that is needed is produced in the first phase. Alternatively, the producing party might agree to them if the first production makes their necessity obvious.)

B.  Sources of ESI most likely to contain discoverable information and that will be included in the first phases of Federal Rule of Civil Procedure 34 document discovery. (Here is where the producing party lists what sources they will search, most often communication ESI such as Outlook Exchange email servers.)

C.  Sources of ESI less likely to contain discoverable information from which discovery will be postponed or not reviewed. (These are sources that are unlikely to have ESI with strong probative value, if any, but might. There may never be a need to review these sources. As a compromise where there is disagreement, put these sources in a later phase. After the first phase is completed, it may not be necessary to look for more evidence in these secondary sources.)

D.  Custodians (by name or role) most likely to have discoverable information and whose ESI will be included in the first phases of document discovery. (Here is where you list the key custodians. In most lawsuits all you will ever need to search is the contents of the mailboxes of these key witnesses, the emails, attachments, calendar items, etc in their email system.)

E.  Custodians (by name or role) less likely to have discoverable information from whom discovery of ESI will be postponed or avoided. (These are secondary custodians who might possibly have important information, but it is less likely. Typically, if you cannot resolve disagreements on importance, you agree to postpone the disputed custodians to second phases.)

F.  The time period during which discoverable information was most likely to have been created or received. (Again, limit the review by timing and if you cannot agree, then postpone disputed additional times for second phases.)

VII. Production

A.  The formats in which structured ESI (database, collaboration sites, etc.) will be produced. (Typically database production is done by spreadsheet reports, or sometimes native. The person in charge of the structured ESI should know.)

B.  The formats in which unstructured ESI (e-mail, presentations, word processing, etc.) will be produced. (Producing parties should follow the requesting party’s format request most of the time, except if they ask for paper production. Paper production is ridiculous and expensive for ESI. Otherwise format should not matter. It is, or should be, a non-issue.)

C.  The extent, if any, to which metadata will be produced and the fields of metadata to be produced. (A non-issue too. If metadata is part of the document, then produce it. Your vendor can give you a standard list.)

D.  The production format(s) that ensure(s) that any inherent searchability of ESI is not degraded when produced. (This is a must. In my court it can be sanctionable to change an electronic document so that it is no longer searchable.)

VIII. Privilege

A.  How any production of privileged or work-product protected information will be handled. (Of course you do not produce it, but you log it.)

B.  Whether the parties can agree on alternative ways to identify documents withheld on the grounds of privilege or work product to reduce the burdens of such identification. (Look for ways to streamline your privilege log. For instance, under another Southern District local rule you never have to log communications made after suit was filed.)

C.  Whether the parties will enter into a Federal Rule of Evidence 502(d) stipulation and order that addresses inadvertent or agreed production. (You should always have a 502(d) Order whenever you are making an electronic production. Mistakes happen and this is the closest thing we have in the law to a fail-safe. There is no valid reason to oppose this order. Clear enough for you?)


More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new blogs to the AI-Ethics website. The first is a report on the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal on how to do that, namely by hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of base consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens to try to understand the other side. If they hear at all, it is just to analyze and respond, to strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google’s head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google’s AI Boss Blasts Musk’s Scare Tactics on Machine Takeover (Bloomberg 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They will, and already have, just exacerbated the problem. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turnout we could easily have dozens of breakout sessions, with professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some beginning levels of consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move on to agreement and action.

Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute comes from none other than Elon Musk. The 2017 conference was held in Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months as they are the most detailed list of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to the press, as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As we are mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma all right and are ready to go, with or without Musk’s millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page in AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __, 189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering Professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield’s work here in the coming months. He is the rare engineer who also understands the law.


New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. This institute, called AI2, was founded by Paul G. Allen in 2014. The Mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the now-billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical minded (he is on the No-Fear side of the Superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears no similarity to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence has created three draft principles of AI Ethics shown below. He first announced them in a New York Times Editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See his TED Talk Artificial Intelligence will empower us, not exterminate us (TEDx Seattle; November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule on the duty to disclose. It involved a teacher at Georgia Tech named Jill Watson. She served as a teaching assistant in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester course into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students wouldn’t know that a teacher named Watson, whom no one had ever seen or heard of before, wasn’t a bot. After all, it was a course on AI.

This story was confirmed by a later reply to the editorial from Ashok Goel, the Georgia Tech Professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted as not adding much and the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but all humans, especially those using social media. Many humans are being fooled every day by bots posing as real people and creating fake news to manipulate real people. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You’re Not a Robot (NYT July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).

In the concluding section of their 2017 scholarly paper Computational Propaganda, The Rise of Bots: Implications for Politics, Policy, and Method, Woolley and Guilbeault state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be enforced as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term Oren’s first point about the application of laws is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. As mostly lawyers around here at this point, we strongly agree with this legal point. We also agree with his recommendation in the NYT Editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. No need to amend. “Clarify” would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.

