New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. The institute, known as AI2, was founded by Paul G. Allen in 2014. Its mission is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the now-billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?
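Just for fun, here is a minimal sketch of how one might actually test that question, using the open-source face_recognition Python library (a wrapper around dlib). This is purely my own illustration, not anything AI2 has published; the image file names are hypothetical stand-ins for the 1981 and 2013 photos, and whether the encodings would really bridge a 32-year age gap is exactly the open question.

import face_recognition

# Load the two group photos; the file names are hypothetical placeholders.
img_1981 = face_recognition.load_image_file("allen_gates_1981.jpg")
img_2013 = face_recognition.load_image_file("allen_gates_2013.jpg")

# Compute a 128-dimensional encoding for every face found in each photo.
faces_1981 = face_recognition.face_encodings(img_1981)
faces_2013 = face_recognition.face_encodings(img_2013)

# Compare each old face to each new face. Lower distance means more alike;
# 0.6 is the library's conventional match threshold.
for i, old_face in enumerate(faces_1981):
    distances = face_recognition.face_distance(faces_2013, old_face)
    for j, d in enumerate(distances):
        verdict = "same person" if d < 0.6 else "different person"
        print(f"1981 face {i} vs. 2013 face {j}: distance {d:.2f} -> {verdict}")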

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical-minded (he is on the No-Fear side of the Superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears little real resemblance to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence, has created the three draft principles of AI Ethics shown below. He first announced them in a New York Times editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See his TED Talk, Artificial Intelligence will empower us, not exterminate us (TEDx Seattle, November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule, on the duty to disclose. It involved a teacher at Georgia Tech named Jill Watson, who served as a teaching assistant in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not suspect that a teaching assistant named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.

This story was confirmed in a later reply to the editorial by Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted, as it adds little, and that the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but all humans, especially those using social media. Many humans are fooled every day by bots posing as real people and creating fake news to manipulate them. The democratic process is already under siege by dictators exploiting this regulation gap. See: Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You’re Not a Robot (NYT, July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).

In the concluding section of their 2017 scholarly paper Computational Propaganda, titled The Rise of Bots: Implications for Politics, Policy, and Method, Woolley (shown here) and Guilbeault state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime, and the laws against it must be enforced as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term, Oren’s first point, about the application of laws, is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. Since we are mostly lawyers around here at this point, we strongly agree with this legal point. We also agree with his recommendation in the NYT editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. No need to amend; clarify would be a better word. We are not terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.


How the Hacker Way Guided Me to e-Discovery, then AI Ethics

August 13, 2017

This new ten-minute video on Hacker Way and Legal Practice Management was added to my Hacker Way and AI-Ethics pages this week. It explains how one led to the other. It also provides more insight into why I think the major problems of e-discovery have now been solved, with a shout-out to all e-discovery vendors and the team approach of lawyers working with them. This interdisciplinary team approach is how we overcame e-discovery challenges and, if my theory is correct, will also allow us to meet the regulatory challenges surrounding artificial intelligence. Hopefully my video disclosures here will provide useful insights into how the Hacker Way management credo used by most high-tech companies can also be followed by lawyers.



E-DISCOVERY IS OVER: The big problems of e-discovery have now all been solved. Crisis Averted. The Law now has bigger fish to fry.

July 30, 2017

Congratulations!

We did it. We survived the technology tsunami. The time of great danger to Law and Justice from e-Discovery challenges is now over. Whew! A toast of congratulations to one and all.

From here on it is just a matter of tweaking the principles and procedures that we have already created, plus never-ending education, a good thing, and politics, not good, but inevitable. The team approach of lawyers and engineers (vendors) working together has been proven effective, so have the new Rules and case law, and so too have the latest methods of legal search and document review.

I realize that many will be tempted to compare my view to that of a famous physicist in 1894 who declared:

There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.

Lord Kelvin (1824-1907)

Then along came Einstein. Many attribute this humorously mistaken assertion to Lord Kelvin, aka William Thomson, 1st Baron Kelvin. According to Quora, scholarship shows that it was probably said by the American physicist Albert Michelson, of the famous Michelson–Morley experiment on the speed of light.

Still, even mindful of the dangers of boasting, I think that most of the really tough problems in electronic discovery have now been solved.

The time of great unknowns in e-discovery is past. The rules, principles, case law, procedures, software, methods, quality controls, and vendor services are now well developed. All that remains is more and more precise measurement.

The Wild West days are long gone. Certainly new problems will arise and experiments will continue, but they will not be on the same level or of the same intensity as before. They will be minor problems, likely very similar to issues we have already addressed, just with exponential magnification or the new twists and turns typical of the common law.

This is a tremendous accomplishment. The crisis we all saw coming around the corner at the turn of the century has been averted. Remember how the entire legal profession was abuzz in emergency mode in 2005 because of the great dangers and burdens of e-discovery? Yes, thanks to the hard work and creativity of many people, the big problems have now been solved, especially the biggest problem of them all: finding the needles of relevance in cosmic-sized haystacks of irrelevant noise. TARcourse.com. We now know what is required to do e-discovery correctly. EDBP.com. We have the software and attorney methods needed to find the relevant evidence we need, no matter what volume of information we are dealing with.

We have invented, implemented and perfected procedures that can be enhanced and altered as needed to accommodate ever-growing complexity and exponential data growth. We expect that. There is no data too big to handle. In fact, the more data we have, the better our active machine learning systems, such as predictive coding, get. What an incredible difference from the world we faced in e-discovery just five years ago.
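To make that last point concrete, here is a minimal sketch of the kind of active machine learning loop that drives predictive coding: train on a small seed set, ask the model which unlabeled documents it is least certain about, have a human reviewer code those, and retrain. This is my own illustration, not any vendor’s implementation; scikit-learn, the toy document collection, and the simulated reviewer are all assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy collection: in real TAR this would be millions of documents.
docs = [f"memo about {'fraud payment' if i % 3 == 0 else 'lunch plans'} no. {i}"
        for i in range(300)]
truth = np.array([1 if i % 3 == 0 else 0 for i in range(300)])  # hidden relevance

X = TfidfVectorizer().fit_transform(docs)
labels = {0: 1, 1: 0}  # seed set a reviewer coded up front (one relevant, one not)

for rnd in range(5):
    train = sorted(labels)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train], [labels[i] for i in train])

    # Probability of relevance for every still-unlabeled document.
    unlabeled = [i for i in range(len(docs)) if i not in labels]
    probs = model.predict_proba(X[unlabeled])[:, 1]

    # Uncertainty sampling: probabilities nearest 0.5 are the model's blind spots.
    for j in np.argsort(np.abs(probs - 0.5))[:10]:
        doc = unlabeled[j]
        labels[doc] = int(truth[doc])  # simulated human reviewer coding the doc

    print(f"round {rnd}: {len(labels)} docs coded, accuracy {model.score(X, truth):.2f}")

The uncertainty-sampling step is the key design choice: each round of human review concentrates on the documents the model finds hardest, which is why, as noted above, more data tends to make these systems better rather than worse.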

This success was a team effort by thousands of people around the world, including a small core group who devoted their professional lives to solving these problems. My readers have been a part of this and you can pat yourself on the back too. The paradigm shift has been made. Maybe it was the Sedona vortexes?

Now that the tough parts of e-discovery are over, the rest of the ride is downhill. Some of my readers have already moved on. I will not retire, not just yet. I will keep up the work of e-discovery, even as I watch it transition to just teaching and politics. These activities have their own unique challenges too, even if they are not really all that impactful in the big scheme of things. Plus, I find politics disgusting. You will see tons of dirty pool in our field soon. I cannot talk about it now. We have some renegades with authority who never solved an e-discovery problem in their lives. Posers with power.

But what is that new turbulence I hear in the distance? It is a bizarre new sound, with vibrations never experienced before. It lies far outside the well-trodden paths and sounds both discordant and harmonious, siren-like, at the same time. It lies on the outer, cutting edges of law, science and technology. It sounds like a new, more profound Technology and Law challenge has emerged. It is the splashing of bigger fish to fry. I am hearing the eerie smart sounds of AI. A music of both exuberance and fear, of utopia or extinction.

The Biggest Challenge Today is the Ethics of Artificial Intelligence.

Following my own advice of the Hacker Way approach, I have given this considerable thought lately. I have found an area that has far more serious challenges and dangers than e-discovery: the challenges of AI Ethics.

I think that my past hacks, my past experiences with law and technology, have prepared me to step up to this last, really big hack: the creation of a code of ethics for AI. A code that will save humanity from a litany of possible ills arising out of AI’s inevitable leap to superintelligence. I have come to see that my work in the new area of AI Ethics could have a far greater impact than my current work with active machine learning and the discovery of evidence in legal proceedings. AI Ethics is the biggest problem I see right now where I have some hands-on skills to contribute. AI Ethics is concerned with artificial intelligence, both special and general, and the need for ethical guidelines, including best practices, principles, laws and regulations.

This new direction has led to my latest hack, AI-Ethics.com. There you will find 3,866 words, many of them quotes; 19 graphics, including a photo of Richard Braman; and 9 videos with several hours’ worth of content. You will find quotes and videos on AI Ethics from some of the top minds in the world, including:

  • Stephen Hawking
  • Elon Musk
  • Bill Gates
  • Ray Kurzweil
  • Mark Zuckerberg
  • Sam Harris
  • Nick Bostrom
  • Oren Etzioni
  • 2017 Asilomar conference
  • Sam Altman
  • Susumu Hirano
  • Wendell Wallach

Please come visit at AI-Ethics.com. The next big thing. Lawyers are needed, as the website explains. I look forward to any recommendations you may have.

I have done the basic research for AI Ethics, at least the beginning, big-picture research of the subject. The AI-Ethics.com website shares the information that had the biggest impact on me personally. The website I hacked together also provides numerous links to resources where you can continue and customize your study.

I have been continuously improving the content since this started just over a week ago. This will continue as my study progresses.

As you will see, a proposal has already emerged to have an International Conference on AI Ethics in Florida as early as 2018. We would assemble some of the top experts and concerned citizens from all walks of life. I hope especially to get Elon Musk to attend and will time the event to correspond with one of SpaceX’s many launches here. My vision for the conference is to facilitate dialogue, with high-tech variations appropriate for the AI environment.

The Singularity of superintelligent AIs may come soon. We may live long enough to see it. When it does, we want a positive future to emerge, not a dystopia. Taking action now on AI ethics can help a positive future come to pass.

Here is one of many great videos on the subject of AI in general. This technology is really interesting. Kevin Kelly, the co-founder of Wired, does a good job of laying out some of its characteristics. Kelly takes an old-school approach and does not speak about superintelligence in an exponential sense.

 


Introduction to Hacker Way Philosophy

July 16, 2017

Ralph Losey – 7/16/17

I have spoken several times before concerning the Hacker Way philosophy. I have always focused on my work as a lawyer specializing in e-discovery. I have also included this philosophy in my teachings in this area of the law, including the use of AI in document review. See: the TAR Course; HackerWay.org and HackerLaw.org.

The video talk in this blog takes the Hacker Way outside of the legal community so it can have maximum impact. I think it is important for everyone to understand the credo behind Facebook and most other 21st Century software tech companies. No one else seems to be talking about it, or sharing the secret sauce behind their success. That is contrary to the fundamental Hacker principle of Openness, so, as an old Hacker myself, I am stepping in to fill the gap. That’s just what I do. (Stepping-In is discussed in Davenport and Kirby, Only Humans Need Apply, and by Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and A Changing World: Ralph Losey on “Stepping In” for e-Discovery. Also see: Losey, Lawyers’ Job Security in a Near Future World of AI, Part Two.)

Facebook’s corporate headquarters, photo with symbols added.

In the eleven-minute video below, I take this sharing and openness to the next step. Here I address the five principles and related ideas of the Hacker Way as applied to life in general, not just my legal specialties. I hope you find this provides some value to our fast-evolving computer culture. Please leave some comments, either here or at my new Facebook site: HackerWay.org.


If you have not already read Mark Zuckerberg’s original treatise on the Hacker Way, contained in his initial public offering Letter to Investors, I suggest you do so now. Also see my related ideas on history and social progress at Info→Knowledge→Wisdom.

I look forward to your comments.

Below is a graphic showing all nine concepts of the Hacker Way following the form of an enneagon or nonagram, also known as a Star of Goliath.

 

