More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new blogs to the AI-Ethics website. The first is a report on the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal for doing that: hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of base consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens to understand the other side. If they listen at all, it is only to analyze and respond, to strike down. The adversarial approach works only when there is a fair, disinterested judge to rule on and resolve the disputes. In the ongoing disputes between the opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google’s head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google’s AI Boss Blasts Musk’s Scare Tactics on Machine Takeover (Bloomberg, 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They have already exacerbated the problem and will continue to do so. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turn-out we could easily have dozens of breakout sessions, with professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some levels of beginning consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move onto agreement and action.

Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute comes from none other than Elon Musk. The 2017 conference was held at Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months as they are the most detailed list of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to press as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma all right and are ready to go, with or without Musk’s millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page in AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __, 189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented-reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield’s work here in the coming months. He is that rare engineer who also understands the law.


New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. This institute, called AI2, was founded by Paul G. Allen in 2014. The mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?
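For the curious, here is a minimal sketch of how modern face matchers would approach that question: reduce each photo to an embedding vector, then compare distances. Everything below is illustrative only; the random vectors merely stand in for real photo encodings, and 0.6 is the conventional match threshold used with 128-dimensional dlib-style encodings.

```python
import numpy as np

def same_person(encoding_a: np.ndarray, encoding_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match when the Euclidean distance between two face
    embeddings falls below the threshold (0.6 is the customary default
    for 128-dimensional dlib-style face encodings)."""
    return bool(np.linalg.norm(encoding_a - encoding_b) < threshold)

# Hypothetical embeddings standing in for the 1981 and 2013 photos.
rng = np.random.default_rng(0)
young = rng.normal(0, 0.05, 128)
older = young + rng.normal(0, 0.01, 128)  # aged, but nearby in face-space
stranger = rng.normal(0, 0.5, 128)        # an unrelated face

print(same_person(young, older))     # True
print(same_person(young, stranger))  # False
```

In practice a real matcher would compute the encodings from the photographs themselves; the interesting point is that aging moves a face only a short distance in embedding space, so a 32-year gap may well still fall under the threshold.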

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical-minded (he is on the no-fear side of the superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears no similarity to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence has created three draft principles of AI Ethics shown below. He first announced them in a New York Times Editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See his TED Talk Artificial Intelligence will empower us, not exterminate us (TEDx Seattle; November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule, on the duty to disclose. It involved a teaching assistant at Georgia Tech named Jill Watson, who served in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not suspect that a teacher named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.
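Rule two is simple enough to express in code. The sketch below is purely hypothetical, not anything Georgia Tech actually ran; it just shows how a disclosure requirement could be enforced mechanically, so that no message ever goes out without the bot identifying itself.

```python
DISCLOSURE = "[Automated assistant - not a human]"

class DisclosingBot:
    """Hypothetical wrapper enforcing Etzioni's second rule: every
    outgoing message carries a machine-disclosure prefix."""

    def __init__(self, name: str):
        self.name = name

    def reply(self, text: str) -> str:
        # The disclosure travels with every message, so no student could
        # spend a whole semester believing the TA is flesh and blood.
        return f"{DISCLOSURE} {self.name}: {text}"

jill = DisclosingBot("Jill Watson")
print(jill.reply("Office hours are Tuesday at 3pm."))
# [Automated assistant - not a human] Jill Watson: Office hours are Tuesday at 3pm.
```

The design point is that disclosure happens at the message layer, not as a one-time notice a user might miss; compliance is structural rather than left to the operator's good faith.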

This story was confirmed by a later reply to the editorial by Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted as not adding much, and that the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but all humans, especially those using social media. Many humans are being fooled every day by bots posing as real people and creating fake news to manipulate real people. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You’re Not a Robot (NYT, July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).

In the concluding section of the 2017 scholarly paper Computational Propaganda by Woolley (shown here) and Guilbeault, titled The Rise of Bots: Implications for Politics, Policy, and Method, they state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be prosecuted as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term Oren’s first point about the application of laws is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. As mostly lawyers around here at this point, we strongly agree with this legal point. We also agree with his recommendation in the NYT Editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. No need to amend; clarify would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.


New Homework Added to the TAR Course and a New Video Added to AI-Ethics

September 3, 2017

We have added a homework assignment to Class Sixteen of the TAR Course, the next-to-last class in the course. Here we cover the eighth step of our eight-step routine, Phased Production. I share the full homework assignment below for those not yet familiar with our instructional methods, especially our take on homework. Learning is, or should be, a life-long process.

But before we get to that I want to share the new video added to the AI-Ethics.com website at the end of the Intro/Mission page. There I articulate the opinion of many in the AI world that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. This team approach has worked well for electronic discovery, and Losey is convinced it will work for AI Law as well. AI ethics is one of the most important issues facing humanity today. It is way too important for lawyers and government regulators alone. It is also way too important to leave to AI coders and professors to improvise on their own. We have to engage in true dialogue and collaborate.

______

Now back to the more mundane world of homework and learning the Team’s latest process for the application of machine learning to find evidence for trial. Here is the new homework assignment for Class Sixteen of the TAR Course.

____

Go on to the Seventeenth and last class, or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: It is important to have a good understanding of privilege and work-product protection. The basic U.S. Supreme Court case in this area is Hickman v. Taylor, 329 U.S. 495 (1947). Another key case to know is Upjohn Co. v. United States, 449 U.S. 383 (1981). For an authoritative digest of case law on the subject with an e-discovery perspective, download and study The Sedona Conference Commentary on Protection of Privileged ESI (Dec. 2015).

EXERCISES: Study Judge Andrew Peck’s form 502(d) order. You can find it here. His form order started off as just two sentences, but he later added a third sentence at the end:

The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

Do you know the purpose of this additional sentence? Why might someone oppose a 502(d) order? What does that tell you about them? What does that tell the judge about them? My law firm has been opposed a few times, but we have never failed. Well, there was that one time where both sides agreed and the judge would not enter the stipulated order, saying it was not necessary because he would provide such protection anyway. So, mission accomplished anyway.

Do you think it is overly hyper for us to recommend that a 502(d) order be entered in every case where there is ESI review and production? Think that some cases are too small and too easy to bother? That it is o.k. to just have a claw-back agreement? Well, take a look at this opinion and you may well change your mind. Irth Solutions, LLC v. Windstream Communications, LLC (S.D. Ohio, E. Div., 8/2/17). Do you think this was a fair decision? What do you think about the partner putting all of the blame on the senior associate (seven years) for the mistaken production of privileged ESI? What do you think of the senior associate who in turn blamed the junior associate (two years)? The opinion does not state who signed the Rule 26(g) response to the request to produce. Do you think that should matter? By the way, having been a partner in a law firm since at least 1984, I think this kind of blame-game behavior was reprehensible!

Students are invited to leave a public comment below. Insights that might help other students are especially welcome. Let’s collaborate!


Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history

August 27, 2017

Umberto Romano (1905-1982)

Mr. Pynchon and the Settling of Springfield is the name of a mural painted at the Post Office in Springfield, Massachusetts. This mural was painted by Umberto Romano in 1933. Note the date; time is important to this article. Umberto Romano was supposedly born in Bracigliano, Italy, in 1905 and moved to the United States at the age of 9. He was then raised in Springfield, Massachusetts. His self-portrait is shown right. The mural is supposed to depict the arrival in 1636 of William Pynchon, an English colonist, later known as the founder of Springfield, Massachusetts.

The reason I’m having a bit of fun with my blog and sharing this 1933 mural is the fact that the Native American shown in the lower right center appears to be holding an iPhone. And not just holding it, but doing so properly, with the typical distracted gaze in his eyes that we all seem to adopt these days. Brian Anderson, Do We All See the Man Holding an iPhone in This 1937 Painting? (Motherboard, 8/24/17). Here, let me focus in on it for you and you will see what I mean. Also click on the full image above and enlarge it. Very freaky. That is undeniable.

Ok, so how did that happen? Coincidence? There is no indication of vandalism or fraud. The mural was not later touched up to add an iPhone. This is what Romano painted in 1933. Until very recently everyone just assumed the Indian with the elaborate goatee was looking at some kind of oddly shaped hand mirror, a popular item of trade in the time depicted, 1636. Only lately has it become obvious that he is handling an iPhone. Looks like a large version 6.1 to me. I can imagine the first people waiting in line at the Post Office in Springfield who noticed this oddity while looking at their own iPhones.

The folks who like to believe in time travel now offer this mural as Exhibit “A” to support their far-out theories. Also see: Green, 10 Most Compelling Pieces Of Evidence That May Prove Time Travel Exists (YouTube, 7/3/16).

I do not know about that, but I do know that if time travel is possible, and some physicists seem to think it is, then this is not the kind of thing that should be allowed. Please add this to the list of things that no superintelligent being, either natural or artificial, but especially artificial, should be allowed to do. Same goes for screen writers. I for one cannot tolerate yet another naked Terminator or whatever traveling back in time.

But seriously, just because you are smart enough to know how to do something does not mean that you should. Time travel is one of those things. It should not be allowed, well, at least, not without a lot of care and attention to detail so as not to change anything. Legal regulations should address time travel. Build that into the DNA of AI before they leap into superintelligence. At least require all traces of time travel to be erased. No more painting iPhones into murals from the 1930s. Do not awaken the batteries, I mean the people, from their consensus trance with hints like that.

So that is my tie-in to AI Ethics. I am still looking for a link to e-discovery, other than to say that if you look hard enough and keep an open mind, you can find inexplicable things every day. Kind of like many large organizations’ ESI preservation mysteries. Where did that other sock go?

So what is your take on Umberto Romano‘s little practical joke? Note he also put a witch flying on a broomstick in the Mr. Pynchon and the Settling of Springfield mural, along with many other odd and bizarre things. He was known as an abstract expressionist. Another of his self-portraits is shown right, titled “Psyche and the Sculptor.” (His shirt does look like one of those new skin-tight men’s compression shirts, but perhaps I am getting carried away. Say, what is in his right hand?) Romano’s work is included in the Metropolitan Museum of Art, the Whitney Museum of American Art, the Fogg Art Museum at Harvard, and the Corcoran Gallery and Smithsonian Institution in Washington. In discussing Mr. Pynchon and the Settling of Springfield, the Smithsonian explains that “The mural is a mosaic of images, rather than depicting one specific incident at a set point in time.” Not set in time, indeed.

One more thing: doesn’t this reclining nude by Umberto Romano look like a woman watching Netflix on her iPad? I like the stand she has her iPad on. Almost bought one like it last week.


Some of Romano’s other works you might like are:

These are his titles, not mine. Not too subtle, was he? There is still an active market for Romano’s work.


