New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. This institute, called AI2, was founded by Paul G. Allen in 2014. The mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the now-billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that's Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today's facial recognition AI could tell that these are the same people?
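Just for fun: below is a minimal sketch, in Python, of how one might actually test that question with the open-source face_recognition library. The photo file names are hypothetical placeholders, and the 0.6 cutoff is simply that library's conventional match threshold, so treat this as a toy illustration, not a forensic tool.

```python
# A toy sketch: do the faces in the 1981 photo match the 2013 recreation?
# File names are hypothetical placeholders for the two photos.
import face_recognition

img_1981 = face_recognition.load_image_file("allen_gates_1981.jpg")
img_2013 = face_recognition.load_image_file("allen_gates_2013.jpg")

# Compute a 128-dimension encoding for each face detected in each photo.
faces_1981 = face_recognition.face_encodings(img_1981)
faces_2013 = face_recognition.face_encodings(img_2013)

# Compare every face in the old photo against every face in the new one.
# Lower distance means more similar; 0.6 is the library's usual threshold.
for i, old_face in enumerate(faces_1981):
    distances = face_recognition.face_distance(faces_2013, old_face)
    for j, dist in enumerate(distances):
        verdict = "same person" if dist < 0.6 else "different person"
        print(f"1981 face {i} vs. 2013 face {j}: distance {dist:.2f} -> {verdict}")
```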

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical-minded (he is on the No-Fear side of the Superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears no real similarity to Asimov's Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov's famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence, has created the three draft principles of AI Ethics shown below. He first announced them in a New York Times editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See also his TED Talk, Artificial Intelligence will empower us, not exterminate us (TEDx Seattle, November 19, 2016). Etzioni says his proposed rules were inspired by Asimov's three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren's editorial included a story to illustrate the second rule, on the duty to disclose. It involved a teaching assistant at Georgia Tech named Jill Watson, who served in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not realize that a teacher named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.

This story was confirmed in a later reply to the editorial by Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok's defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni's proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word "system" be deleted, as it does not add much, and that the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but to protect all humans, especially those using social media. Many humans are fooled every day by bots that pose as real people and create fake news to manipulate them. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media 'bots' tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You're Not a Robot (NYT, July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).
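To make the second rule concrete, here is a toy sketch in Python of what a mandatory disclosure could look like if it were built directly into a bot's reply path. The generate_reply function is a hypothetical stand-in for whatever model or rules engine actually produces the text; none of this code comes from Etzioni's editorial.

```python
# A toy sketch of Etzioni's second rule wired into a bot's output path:
# every outgoing reply carries a non-human disclosure, no exceptions.
DISCLOSURE = "[Automated account - this reply was generated by an AI, not a human.] "

def generate_reply(message: str) -> str:
    """Hypothetical stand-in for whatever model actually writes the reply."""
    return f"Thanks for your message: {message!r}"

def disclosed_reply(message: str) -> str:
    """Wrap every outgoing reply with the mandatory disclosure prefix."""
    return DISCLOSURE + generate_reply(message)

if __name__ == "__main__":
    print(disclosed_reply("Who should I vote for?"))
```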

In the concluding section of their 2017 scholarly paper Computational Propaganda, titled The Rise of Bots: Implications for Politics, Policy, and Method, Woolley and Guilbeault state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.
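For readers wondering how anyone even begins to spot such bots, here is a deliberately crude sketch of the kind of first-pass heuristic bot researchers describe: flagging accounts that post at inhuman rates. The 50-posts-per-day threshold and the sample accounts are invented for illustration and are not from the Woolley and Guilbeault paper; real detection systems combine many more signals.

```python
# A crude, illustrative first-pass bot filter: flag accounts whose average
# daily posting volume exceeds a plausibly human rate. The threshold and
# the sample data below are invented assumptions, not research findings.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_last_30_days: int

def likely_bot(account: Account, max_daily_rate: float = 50.0) -> bool:
    """Flag accounts whose average daily posting rate exceeds the threshold."""
    return account.posts_last_30_days / 30.0 > max_daily_rate

accounts = [
    Account("@news_junkie", 420),       # ~14 posts/day: plausibly human
    Account("@propaganda9000", 9_000),  # ~300 posts/day: flagged
]
for acct in accounts:
    label = "likely bot" if likely_bot(acct) else "looks human"
    print(f"{acct.handle}: {label}")
```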

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be prosecuted as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term, Oren's first point, about the application of laws, is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. As we are mostly lawyers around here at this point, we strongly agree with this legal point. We also agree with his recommendation in the NYT editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability, so there is no need to amend it; "clarify" would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.


New Homework Added to the TAR Course and a New Video Added to AI-Ethics

September 3, 2017

We have added a homework assignment to Class Sixteen of the TAR Course, the next-to-last class in the course. Here we cover the eighth step of our eight-step routine, Phased Production. I share the full homework assignment below for those not yet familiar with our instructional methods, especially our take on homework. Learning is, or should be, a life-long process.

But before we get to that, I want to share the new video added to the AI-Ethics.com website at the end of the Intro/Mission page. In it I articulate the opinion of many in the AI world that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. This team approach has worked well for electronic discovery, and I am convinced it will work for AI Law as well. AI Ethics is one of the most important issues facing humanity today. It is way too important for lawyers and government regulators alone. It is also way too important to leave to AI coders and professors to improvise on their own. We have to engage in true dialogue and collaborate.

______

Now back to the more mundane world of homework and learning the Team’s latest process for the application of machine learning to find evidence for trial. Here is the new homework assignment for Class Sixteen of the TAR Course.

____

Go on to the Seventeenth and last class, or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: It is important to have a good understanding of privilege and work-product protection. The basic U.S. Supreme Court case in this area is Hickman v. Taylor, 329 U.S. 495 (1947). Another key case to know is Upjohn Co. v. United States, 449 U.S. 383 (1981). For an authoritative digest of case law on the subject with an e-discovery perspective, download and study The Sedona Conference Commentary on Protection of Privileged ESI (Dec. 2015).

EXERCISES: Study Judge Andrew Peck’s form 502(d) order.  You can find it here. His form order started off as just two sentences, but he later added a third sentence at the end:

The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

Do you know the purpose of this additional sentence? Why might someone oppose a 502(d) Order? What does that tell you about them? What does that tell the judge about them? My law firm's requests for these orders have been opposed a few times, but we have never failed to get one entered. Well, there was that one time where both sides agreed, but the judge would not enter the stipulated order, saying it was not necessary because he would provide such protection anyway. So, mission accomplished anyway.

Do you think it is overly hyper for us to recommend that a 502(d) Order be entered in every case where there is ESI review and production? Think that some cases are too small and too easy to bother? That it is o.k. to just have a claw-back agreement? Well, take a look at this opinion and you may well change your mind. Irth Solutions, LLC v. Windstream Communications, LLC (S.D. Ohio, E. Div., Aug. 2, 2017). Do you think this was a fair decision? What do you think about the partner putting all of the blame for the mistaken production of privileged ESI on the senior associate (a seven-year lawyer)? What do you think of the senior associate who in turn blamed the junior associate (a two-year lawyer)? The opinion does not state who signed the Rule 26(g) response to the request to produce. Do you think that should matter? By the way, having been a partner in a law firm since at least 1984, I think this kind of blame-game behavior was reprehensible!

Students are invited to leave a public comment below. Insights that might help other students are especially welcome. Let’s collaborate!

 


Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history

August 27, 2017

Umberto Romano (1905-1982)

Mr. Pynchon and the Settling of Springfield is the name of a mural painted at the Post Office in Springfield, Massachusetts. This mural was painted by Umberto Romano in 1937. Note the date. Time is important to this article. Umberto Romano was supposedly born in Bracigliano, Italy in 1905 and moved to the United States at the age of nine. He was then raised in Springfield, Massachusetts. His self-portrait is shown right. The mural is supposed to depict the arrival in 1636 of William Pynchon, an English colonist, later known as the founder of Springfield, Massachusetts.

The reason I’m having a bit of fun with my blog and sharing this 1933 mural is the fact that the Native American shown in the lower right center appears to be holding an iPhone. And not just holding it, but doing so properly with the typical distracted gaze in his eyes that we all seem to adopt these days. Brian Anderson, Do We All See the Man Holding an iPhone in This 1937 Painting? (Motherboard, 8/24/17). Here let me focus in on it for you and you will see what I mean. Also click on the full image above and enlarge the image. Very freaky. That is undeniable.

Ok, so how did that happen? Coincidence? There is no indication of vandalism or fraud. The mural was not later touched up to add an iPhone. This is what this Romano character painted in 1937. Everyone just assumed the Native American with the elaborate goatee was looking at some kind of oddly shaped hand mirror, a popular item of trade in the time depicted, 1636. Not until very recently did it become obvious that he was handling an iPhone. Looks like a large version 6.1 to me. I can imagine the first people waiting in line at the Post Office in Springfield who noticed this oddity while looking at their own iPhones.

The folks who like to believe in time travel now offer this mural as Exhibit "A" to support their far-out theories. Also see: Green, 10 Most Compelling Pieces of Evidence That May Prove Time Travel Exists (YouTube, 7/3/16).

I do not know about that, but I do know that if time travel is possible, and some physicists seem to think it is, then this is not the kind of thing that should be allowed. Please add this to the list of things that no superintelligent being, either natural or artificial, but especially artificial, should be allowed to do. Same goes for screenwriters. I, for one, cannot tolerate yet another naked Terminator or whatever traveling back in time.

But seriously, just because you are smart enough to know how to do something does not mean that you should. Time travel is one of those things. It should not be allowed, well, at least, not without a lot of care and attention to detail so as not to change anything. Legal regulations should address time travel. Build that into the DNA of AI before they leap into superintelligence. At least require all traces of time travel to be erased. No more painting iPhones into murals from the 1930s. Do not awaken the batteries, I mean the people, from their consensus trance with hints like that.

So that is my tie-in to AI Ethics. I am still looking for a link to e-discovery, other than to say that if you look hard enough and keep an open mind, you can find inexplicable things every day. Kind of like many large organizations' ESI preservation mysteries. Where did that other sock go?

So what is your take on Umberto Romano's little practical joke? Note that he also put a witch flying on a broomstick in the Mr. Pynchon and the Settling of Springfield mural, along with many other odd and bizarre things. He was known as an abstract expressionist. Another of his self-portraits is shown right, titled "Psyche and the Sculptor." (His shirt does look like one of those new skin-tight men's compression shirts, but perhaps I am getting carried away. Say, what is in his right hand?) Romano's work is included in the Metropolitan Museum of Art, the Whitney Museum of American Art, the Fogg Art Museum at Harvard, and the Corcoran Gallery and Smithsonian Institution in Washington. In discussing Mr. Pynchon and the Settling of Springfield, the Smithsonian explains that "The mural is a mosaic of images, rather than depicting one specific incident at a set point in time." Not set in time, indeed.

One more thing: doesn't this reclining nude by Umberto Romano look like a woman watching Netflix on her iPad? I like the stand she has her iPad on. Almost bought one like it last week.

 

Some of Romano's other works you might like are shown in the image gallery here [images omitted]. These are his titles, not mine. Not too subtle, was he? There is still an active market for Romano's work.

 


True Confession: I Hacked a Website this Weekend

August 20, 2017

Hacked a website this weekend: my own, AI-Ethics.com. As I'm sure most of you know by now, that means I made a new website. Hope you will come by and check it out. It was made pretty fast and will no doubt need constant improvements going forward, but I like it. It has a whole new coding style. As usual, it is free and open to one and all. My point was to build something of social value. You be the judge as to whether I succeeded at that.

A careful reader will notice it is not really totally new, as most of the content has been published here before, but the website itself is brand new. There are many new words in it too. Below is a screen shot of part of the Home Page. Just click and the new AI-Ethics.com will be sent to your screen.

You be the judge as to how bold a move this new project is. I went with a whole new design and also created several new graphics for it. Please note the multiple invitations in the website for volunteers to help me with the ethics work going forward. (I do not need help with the actual code work.) I personally think Ray Kurzweil may be right. We need to follow the Hacker Way and move fast, because the next HAL 9000 could be just around the corner. According to Craig Ball, he already owns a toaster smarter than the current POTUS.

 

 

