How the Hacker Way Guided Me to e-Discovery, then AI Ethics

August 13, 2017

This new ten-minute video on the Hacker Way and Legal Practice Management was added to my Hacker Way and AI-Ethics pages this week. It explains how one led to the other. It also provides more insight into why I think the major problems of e-discovery have now been solved, with a shout-out to all e-discovery vendors and to the team approach of lawyers working with them. This interdisciplinary team approach is how we overcame the challenges of e-discovery and, if my theory is correct, it will also allow us to meet the regulatory challenges surrounding artificial intelligence. Hopefully the video disclosures here will provide useful insights into how the Hacker Way management credo followed by most high-tech companies can also be followed by lawyers.

“The Hacker Way” – What the e-Discovery Industry Can Learn From Facebook’s Management Ethic

August 18, 2013

Facebook’s regulatory filing for its initial public stock offering included a letter to potential investors by 27-year-old billionaire Mark Zuckerberg. The letter describes the culture and approach to management that he follows as CEO of Facebook. Zuckerberg calls it the Hacker Way. Mark did not invent this culture; in a way, it invented him. It molded him and made him and Facebook what they are today. The letter reveals the secrets of Mark’s success and establishes him as the current child prodigy of the Hacker Way.

Too bad most of the CEOs in the e-discovery industry have not read the letter, much less understood how Facebook operates. They are clueless about the management ethic it takes to run a high-tech company.

An editorial in Law Technology News explains why I think most of the CEOs in the e-discovery software industry are just empty suits. They do not understand modern software culture. They think the Hacker Way is a security threat. They are incapable of creating insanely great software. They cannot lead with the kind of inspired genius that the legal profession now desperately needs from its software vendors to survive the data deluge. From what I have seen, most of the pointy-haired management types who now run e-discovery software companies should be thrown out. They should be replaced with Hacker-savvy management before their once-proud companies go the way of the Blackberry. The LTN article, Vendor CEOs: Stop Being Empty Suits & Embrace the Hacker Way, has more details on the slackers in silk suits. This essay, a partial rerun from a prior blog, gives you the background on Facebook’s Hacker Way.

Hacker History

The Hacker Way tradition and way of thinking has been around since at least the sixties. It has little or nothing to do with illegal computer intrusions. Moreover, to be clear, NSA leaker Edward Snowden is no hacker. All he did was steal classified information, put it on a thumb drive, meet the press, and then flee the country, to communist dictatorships no less. That has nothing to do with the Hacker Way and everything to do with politics.

The Hacker Way – often called the hacker ethic – has nothing to do with politics. It did not develop in government like the Internet did, but in the hobby of model railroad building and MIT computer labs. This philosophy is well-known and has influenced many in the tech world, including the great Steve Jobs (who never fully embraced its openness doctrines), and Steve’s hacker friend, Steve Wozniak, the laughing Yoda of the Hacker Way. The Hacker approach is primarily known to software coders, but can apply to all kinds of work. Even a few lawyers know about the hacker work ethic and have been influenced by it.

Who is Mark Zuckerberg?

We have all seen a movie version of Mark Zuckerberg in The Social Network. The real Zuckerberg, by the way, will still own 56.9% voting control of Facebook after the public offering later this year. But who is Mark Zuckerberg really? His Facebook page may reveal some of his personal life and ideas, but how did he create a hundred-billion-dollar company so fast?

How did he change the world at such a young age? There are now over 850 million people on Facebook with over 100 billion connections. On any one day there are over 500 million people using Facebook. These are astonishing numbers. How did this kind of creative innovation and success come about? What drove Mark and his hacker friends to labor so long, and so well? The letter to investors that Mark published gives us a glimpse into the answer, and a glimpse into the real Mark Zuckerberg. Do I have your full attention yet?

The Hacker Way philosophy described in the investor letter explains the methods used by Mark Zuckerberg and his team to change the world. Regardless of who Mark really is, greedy guy or saint (or, like Steve Jobs, perhaps a strange combination of both), Mark’s stated philosophy is very interesting. It has applications for anyone who wants to change the world, including those of us trying to change the law and e-discovery.

Hacker Culture and Management

Mark’s letter to investors explains the unique culture and approach to management inherent in the Hacker Way that he and Facebook have adopted.

As part of building a strong company, we work hard at making Facebook the best place for great people to have a big impact on the world and learn from other great people. We have cultivated a unique culture and management approach that we call the Hacker Way.

The word “hacker” has an unfairly negative connotation from being portrayed in the media as people who break into computers. In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world.

The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo.

Hackers try to build the best services over the long term by quickly releasing and learning from smaller iterations rather than trying to get everything right all at once. To support this, we have built a testing framework that at any given time can try out thousands of versions of Facebook. We have the words “Done is better than perfect” painted on our walls to remind ourselves to always keep shipping.

Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”

Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win — not the person who is best at lobbying for an idea or the person who manages the most people.

To encourage this approach, every few months we have a hackathon, where everyone builds prototypes for new ideas they have. At the end, the whole team gets together and looks at everything that has been built. Many of our most successful products came out of hackathons, including Timeline, chat, video, our mobile development framework and some of our most important infrastructure like the HipHop compiler.

To make sure all our engineers share this approach, we require all new engineers — even managers whose primary job will not be to write code — to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.

So sayeth Zuckerberg. Hands-on is the way.

Application of the Hacker Way to e-Discovery

E-discovery needs that same hands-on approach. E-discovery lawyers need to go through bootcamp too, even if they primarily just supervise others. Even senior partners should go, at least if they purport to manage and direct e-discovery work. Partners should, for example, know how to use the search and review software themselves and, from time to time, do it, not just direct junior partners, associates, and contract lawyers. You cannot manage others at a job unless you can actually do the job yourself. That is the hacker key to successful management.

Also, as I often say, to be a good e-discovery lawyer, you have to get your hands dirty in the digital mud. Look at the documents, don’t just theorize about them or what might be relevant. Bring it all down to earth. Test your keywords, don’t just negotiate them. Prove your search concept by the metrics of the search results. See what works. When it doesn’t, change the approach and try again. Plus, in the new paradigm of predictive coding, where keywords are just a start, the SMEs (subject-matter experts) must get their hands dirty. They must use the software to train the machine. That is how the artificial intelligence aspects of predictive coding work. The days of hands-off theorists are over. Predictive coding work is the ultimate example of code wins arguments.
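To make “train the machine” concrete, here is a minimal sketch of one predictive coding training round using scikit-learn with simple uncertainty sampling, which is only one of several document-selection strategies. The toy documents, labels, and batch size are all hypothetical illustrations, not my actual method.

```python
# A minimal sketch of one predictive coding round: an SME codes a few
# documents, a model trains, and the least-certain documents are queued
# for the SME's next hands-on review pass. All data here is toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "merger negotiation draft term sheet",   # toy stand-ins for a
    "office holiday party invitation",       # real ESI collection
    "board minutes discussing the merger",
    "cafeteria menu for next week",
    "email chain on merger due diligence",
    "fantasy football league standings",
]
labeled_idx = [0, 1, 2]   # documents the SME has already coded
labels = [1, 0, 1]        # 1 = relevant, 0 = irrelevant

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

model = LogisticRegression(max_iter=1000)
model.fit(X[labeled_idx], labels)

# Rank the uncoded documents by uncertainty (probability nearest 0.5),
# so the SME's next review pass teaches the machine the most.
unlabeled_idx = [i for i in range(len(documents)) if i not in labeled_idx]
probs = model.predict_proba(X[unlabeled_idx])[:, 1]
order = np.argsort(np.abs(probs - 0.5))
next_batch = [unlabeled_idx[i] for i in order[:2]]
print("SME should review next:", next_batch)
```

The point of the sketch is the loop itself: the lawyer’s hands-on coding decisions are the training data, so the quality of the machine depends directly on the SME doing the work.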

Iteration is king of ESI search and production. Phased production is the only way to do e-discovery productions. There is no one final, perfect production of ESI. As Voltaire said, the perfect is the enemy of the good. For e-discovery to work properly it must be hacked. It needs lawyer hackers. It needs SMEs who can train the machine on what is relevant, on what evidence must be found to do justice. Are you up to the challenge?

Mark’s Explanation to Investors of the Hacker Way of Management

Mark goes on to explain in his letter to investors how the Hacker Way translates into the core values for Facebook management.

The examples above all relate to engineering, but we have distilled these principles into five core values for how we run Facebook:

Focus on Impact

If we want to have the biggest impact, the best way to do this is to make sure we always focus on solving the most important problems. It sounds simple, but we think most companies do this poorly and waste a lot of time. We expect everyone at Facebook to be good at finding the biggest problems to work on.

Move Fast

Moving fast enables us to build more things and learn faster. However, as most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly. We have a saying: “Move fast and break things.” The idea is that if you never break anything, you’re probably not moving fast enough.

Be Bold

Building great things means taking risks. This can be scary and prevents most companies from doing the bold things they should. However, in a world that’s changing so quickly, you’re guaranteed to fail if you don’t take any risks. We have another saying: “The riskiest thing is to take no risks.” We encourage everyone to make bold decisions, even if that means being wrong some of the time.

Be Open

We believe that a more open world is a better world because people with more information can make better decisions and have a greater impact. That goes for running our company as well. We work hard to make sure everyone at Facebook has access to as much information as possible about every part of the company so they can make the best decisions and have the greatest impact.

Build Social Value

Once again, Facebook exists to make the world more open and connected, and not just to build a company. We expect everyone at Facebook to focus every day on how to build real value for the world in everything they do.

________

Applying the Hacker Way of Management to e-Discovery


Focus on Impact

Law firms, corporate law departments, and vendors need to focus on solving the most important problems: the high costs of e-discovery and the lack of skills. The cost problem primarily arises from review expenses, so focus on that. The way to have the biggest impact here is to solve the needle-in-the-haystack problem. Costs can be dramatically reduced by improving search, so that we can focus and limit our review to the most important documents. This incorporates the search principles of Relevant Is Irrelevant and 7±2 that I addressed in Secrets of Search, Part III. My own work has been driven by this hacker focus on impact and led to my development of Bottom Line Driven Proportional Review and multimodal predictive coding search methods, the core idea of which is sketched below. Other hacker-oriented lawyers and technologists have developed their own methods to give clients the most bang for their buck.
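For readers who want the bottom-line-driven idea made concrete, the core arithmetic works backward from a proportional budget to the size of review the client can actually afford. The numbers below are purely hypothetical; every case requires its own proportionality analysis.

```python
# Hypothetical proportionality arithmetic: start from what the case can
# justify spending, then derive the review cap, instead of letting the
# document count dictate the cost.
case_value = 2_000_000        # reasonable value of the case (hypothetical)
proportional_share = 0.05     # e-discovery budget as a share of case value
cost_per_doc = 2.50           # blended cost to review one document

budget = case_value * proportional_share      # $100,000
review_cap = int(budget / cost_per_doc)       # 40,000 documents

print(f"Budget: ${budget:,.0f}; review cap: {review_cap:,} documents")
```

Good search then has the job of culling the collection so that the documents most likely to matter fit under that cap.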

The other big problem in e-discovery is that most lawyers do not know how to do it, and so they avoid it altogether. This in turn drives up the costs for everyone because it means the vendors cannot yet realize large economies of scale. Again, many lawyers and vendors understand that lack of education and skill sets is a key problem and are focusing on it.

Move Fast

This is an especially challenging dictate for lawyers and law firms because they are overly fearful of making mistakes, of breaking things as Facebook puts it. They are afraid of looking bad and of malpractice suits. But the truth is, professional malpractice suits are very rare in litigation. Such suits happen much more often in other areas of the law, like estates and trusts, property, and tax. As far as looking bad goes, lawyers should be more afraid of the bad publicity from not moving fast enough, which is a much more common problem, one that we see daily in sanctions cases. Society is changing fast; if you aren’t too, you’re falling behind.

The problem of slow adoption also afflicts the bigger e-discovery vendors, who often drown in bureaucracy and are afraid to make big decisions. That is why you see individuals like me starting an online education program, while the big boys keep on debating. I have already changed my e-Discovery Team Training program six times since it went public almost two years ago. “Code wins arguments.” Lawyers must be especially careful of the thinking man’s disease, paralysis by analysis, if they want to remain competitive.

A few lawyers and e-discovery vendors understand this hacker maxim and do move fast. A few vendors appreciate the value of getting there first, but fewer law firms do. It seems hard for most law firm management to understand that the risks of lost opportunities are far more dangerous and certain than the risks of making a few mistakes along the way. The slower, too-conservative law firms are already starting to see their clients move business to the innovators, the few law firms who are moving fast. These firms have more than just puffed-up websites claiming e-discovery expertise; they have dedicated specialists and, in e-discovery at least, they are now far ahead of the rest of the crowd. Will the slow and timid ever catch up, or will they simply dissolve like Heller Ehrman, LLP?

Be Bold

This is all about taking risks and believing in your visions. It is directly related to moving fast and embracing change; not for its own sake, but to benefit your clients. Good lawyers are experts in risk analysis. There is no such thing as zero risk, but there is certainly a point of diminishing returns for every litigation activity that is designed to control risk. Good lawyers know when enough is enough and constantly consult with their clients on cost-benefit analysis. Should we take more depositions? Should we do another round of document checks for privilege? Too often lawyers err on the side of caution without consulting with their clients on the costs involved. They follow an overly cautious approach wherein the lawyers profit from more fees. Who are they really serving when they do that?

The adoption of predictive coding provides a perfect example of how some firms and vendors understand technology and are bold, while others do not and are timid. The legal profession is like any other industry: it rewards the bold, the innovators who create new legal methods and law for the benefit of their clients. What client wants a wimpy lawyer who is over-cautious and just runs up bills? They want a bold lawyer who at the same time remains reasonable and involves them in the key risk-reward decisions inherent in any e-discovery project.

Be Open

In the world of e-discovery this is all about transparency and the strategic lowering of the wall of work product. Transparency is a proven method for building trust in discovery. Selective disclosure is what cooperation looks like. It is what is supposed to happen at Rule 26(f) conferences, but seldom does. The attorneys who use openness as a tool are saving their clients needless expense and disputes. They are protecting them from dreaded redos, where a judge finds that you did a review wrong and requires you to do it again, usually under very short timelines. There are limits to openness, of course, and lawyers have an inviolate duty to preserve their clients’ secrets. But that still leaves room for disclosure of information about your own methods of search and review when doing so will serve your client’s interests.

Build Social Value 

The law is not a business. It is a profession. Lawyers and law firms exist to do justice. That is their social value. We should never lose sight of that in our day-to-day work. Vendors who serve the legal profession must also support these lofty goals in order to provide value. In e-discovery we should serve the prime directive, the dictates of Rule 1, for just, speedy, and inexpensive litigation. We should focus on legal services that provide that kind of social value. Profits to the firm should be secondary. As Zuckerberg said in the letter to potential investors:

Simply put: we don’t build services to make money; we make money to build better services.

This social value model is not naive; it works. It eventually creates huge financial rewards, as a number of e-discovery vendors and law firms are starting to realize. But that should never be the main point.

Conclusion

Facebook and Mark Zuckerberg should serve as an example to everyone, including e-discovery lawyers and vendors. I admit it is odd that we should have to turn to our youth for management guidance, but you cannot argue with success. We should study Zuckerberg’s 21st Century management style and Hacker Way philosophy. We can learn from its tremendous success. Zuckerberg and Facebook have proven that these management principles work in the digital age. It is true if it works; that is the pragmatic tradition of American philosophy. We live in fast-changing times. Embrace change that works. As the face of Facebook says: “The riskiest thing is to take no risks.”


Ethical Guidelines for Artificial Intelligence Research

November 7, 2017

The most complete set of AI ethics principles developed to date, the twenty-three Asilomar Principles, was created by the Future of Life Institute in early 2017 at their Asilomar Conference. Ninety percent or more of the attendees at the conference had to agree upon a principle for it to be accepted. The first five of the agreed-upon principles pertain to AI research issues.

Although all twenty-three principles are important, the research issues are especially time sensitive. That is because AI research is already well underway by hundreds, if not thousands, of different groups. There is a compelling current need to have some general guidelines in place for this research. AI Ethics Work Should Begin Now. We still have a little time to develop guidelines for the advanced AI products and services expected in the near future, but as to research, the train has already left the station.

Asilomar Research Principles

Other groups are also concerned with AI ethics and regulation, including research guidelines. See the Draft Principles page of AI-Ethics.com, which lists principles from six different groups. The five research principles developed at Asilomar are, however, a good place to start examining the regulation needed for research.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Principle One: Research Goal

The proposed first principle is good, but the wording? Not so much. “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.” This is a double-negative English-language mishmash that only an engineer could love. Here is one way this principle could be better articulated:

Research Goal: The goal of AI research should be the creation of beneficial intelligence, not undirected intelligence.

Researchers should develop intelligence that is beneficial for all of mankind. The first general principle of the Institute of Electrical and Electronics Engineers (IEEE) is entitled “Human Benefit.” The Asilomar first principle is slightly different. It does not really say human benefit. Instead, it refers to beneficial intelligence. I think the intent is to be more inclusive, to include all life on earth, indeed all of earth. Although IEEE has that covered too in their background statement of purpose to “Prioritize the maximum benefit to humanity and the natural environment.”

Pure research, where raw intelligence is created just for the hell of it, with no intended helpful “direction” of any kind, should be avoided. “Because we can” is not a valid goal. Pure, raw intelligence, with neither good intent nor bad, is not the goal here. The research goal is beneficial intelligence. Asilomar is saying that undirected intelligence is unethical and should be avoided. Social values must be built into the intelligence. This is subtle, but important.

The restriction to beneficial intelligence is somewhat controversial, but the other side of this first principle is not: research should not be conducted to create intelligence that is hostile to humans. No one favors detrimental, evil intelligence. So, for example, the enslavement of humanity by Terminator AIs is not an acceptable research goal. I don’t care how bad you think our current political climate is.

To be slightly more realistic, if you have a secret research goal of taking over the world, such as Max Tegmark imagines in The Tale of the Omega Team in his book Life 3.0, and we find out, we will shut you down (or try to). Even if it is all peaceful and well-meaning, and no one gets hurt, as Max visualizes, plotting world domination by machines is not a positive value. If you get caught researching how to do that, some of the more creative prosecuting lawyers around will find a way to send you to jail. We have all seen the cheesy movies, and so have the juries, so do not tempt us.

Keep a positive, pro-human, pro-Earth, pro-freedom goal for your research. I do not doubt that we will someday have AI smarter than our existing world leaders, perhaps sooner than many expect, but that does not justify a machine takeover. Wisdom comes slowly and is different from intelligence.

Still, what about autonomous weapons? Is research into advanced AI in this area beneficial? Are military defense capabilities beneficial? Pro-security? Is the slaughter of robots not better than the slaughter of humans? Could robots be more ethical at “soldiering” than humans? As attorney Matt Scherer, editor of the fine blog LawAndAI.com and a Future of Life Institute member, has noted:

Autonomous weapons are going to inherently be capable of reacting on time scales that are shorter than humans’ time scales in which they can react. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision making process. Right now there’s very clear, well-established laws in place about who is responsible for specific military decisions, under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, on what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field on the best way to react to different military situations.

Podcast: Law and Ethics of Artificial Intelligence (Future of Life, 3/31/17).

The question of beneficial or not can become very complicated, fast. Like it or not, military research into killer robots is already well underway, in both the public and private sector. Kalashnikov Will Make an A.I.-Powered Killer Robot: What could possibly go wrong? (Popular Mechanics, 7/19/17); Congress told to brace for ‘robotic soldiers’ (The Hill, 3/1/17); US military reveals it hopes to use artificial intelligence to create cybersoldiers and even help fly its F-35 fighter jet – but admits it is ALREADY playing catch up (Daily Mail, 12/15/15) (a little dated, and sensationalistic article perhaps, but easy read with several videos).

AI weapons are a fact, but they should still be regulated, in the same way that we have regulated nuclear weapons since WWII. Tom Simonite, AI Could Revolutionize War as Much as Nukes (Wired, 7/19/17); Autonomous Weapons: an Open Letter from AI & Robotics Researchers.

Principle Two: Research Funding

The second principle, on Funding, is more than an enforcement mechanism for the first, that you should only fund beneficial AI. It is also a recognition that ethical work requires funding too. This should be every lawyer’s favorite AI ethics principle. Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies. The principle then adds a list of four bullet-point examples.

“How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?” The goal of avoiding the creation of AI systems that can be hacked, easily or not, is a good one. If a hostile power can take over and misuse an AI for evil ends, then the built-in beneficence may be irrelevant. The example of a driverless car comes to mind, one that could be hacked and crashed as a perverse joy-ride, kidnapping or terrorist act.

The economic issues raised by the second example are very important: How can we grow our prosperity through automation while maintaining people’s resources and purpose? We do not want a system that only benefits the top one percent, or top ten percent, or whatever. It needs to benefit everyone, or at least try to. Also see Asilomar Principle Fifteen: Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Yoshua Bengio, Professor of Computer Science at the University of Montreal, had this important comment to make on the Asilomar principles during an interview at the end of the conference:

I’m a very progressive person so I feel very strongly that dignity and justice mean wealth is redistributed. And I’m really concerned about AI worsening the effects and concentration of power and wealth that we’ve seen in the last 30 years. So this is pretty important for me.

I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously – I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.

Most everyone at the Asilomar Conference agreed with that sentiment, but I do not yet see a strong consensus in AI businesses. Time will tell if profit motives and greed will at least be constrained by enlightened self-interest. Hopefully capitalist leaders will have the wisdom to share with all of society the great wealth that AI is likely to create.

“How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?” The legal example is also a good one, with the primary tension we see so far between fair and efficient. Policing only high-crime areas might well be efficient, at least for reducing some types of crime, but would it be fair? Do we want to embed racial profiling into our AI? Neighborhood slumlord profiling? Religious or ethnic profiling? No. Existing law prohibits that, and for good reason. Still, predictive policing is already a fact of life in many cities, and we need to be sure it has proper legal and ethical regulation.

We have seen the tension between “speedy” and “inexpensive” on the one hand, and “just” on the other, in Rule One of the Federal Rules of Civil Procedure and e-discovery. When active machine learning was applied, a technical solution to these competing goals was attained. The predictive coding methods we developed allowed for both precision (“speedy” and “inexpensive”) and recall (“just”), as illustrated below. Hopefully this success can be replicated in other areas of the law where machine learning is under proportional control by experienced human experts.
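For readers who want those two metrics made concrete, here is a short sketch of how precision and recall are computed from a reviewed sample. The counts below are made up for illustration only.

```python
# Precision ("speedy"/"inexpensive"): of the documents the machine marked
# relevant, how many truly were? Recall ("just"): of all truly relevant
# documents, how many did the machine find? Counts below are hypothetical.
true_positives = 800    # predicted relevant, actually relevant
false_positives = 200   # predicted relevant, actually irrelevant
false_negatives = 150   # relevant documents the machine missed

precision = true_positives / (true_positives + false_positives)   # 0.80
recall = true_positives / (true_positives + false_negatives)      # ~0.84
f1 = 2 * precision * recall / (precision + recall)                # harmonic mean

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```

High precision keeps the review speedy and inexpensive because reviewers waste little time on false alarms; high recall keeps it just because few relevant documents are missed.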

The final example given is much more troubling: What set of values should AI be aligned with, and what legal and ethical status should it have? Whose values? Who is to say what is right and wrong? This is easy in a dictatorship, or a uniform, monochrome culture (sea of white dudes), but it is very challenging in a diverse democracy. This may be the greatest research funding challenge of all.

Principle Three: Science-Policy Link

This principle is fairly straightforward, but will in practice require a great deal of time and effort to be done right. A constructive and healthy exchange between AI researchers and policy-makers is necessarily a two-way street. It first of all assumes that policy-makers, which in most countries includes government regulators, not just industry, have a valid place at the table. It assumes some form of government regulation. That is anathema to some in the business community who assume (falsely in our opinion) that all government is inherently bad and essentially has nothing to contribute. The countervailing view of overzealous government controllers who just want to jump in, uninformed, and legislate, is also discouraged by this principle. We are talking about a healthy exchange.

It does not take an AI to know that this kind of give-and-take and information sharing will involve countless meetings. It will also require a positive, healthy attitude between the two groups. If it gets bogged down into an adversarial relationship, you can multiply the cost of compliance (and the number of meetings) by two or three. If it goes to litigation, we lawyers will smile in our tears, but no one else will. So researchers, you are better off not going there. A constructive and healthy exchange is the way to go.

Principle Four: Research Culture

The need for a good culture applies in spades to the research community itself. The Fourth Principle states: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. This favors the open source code movement for AI, but runs counter to the trade-secret business models of many corporations. See, e.g., OpenAI.com; Deep Mind Open Source; Liam Tung, ‘One machine learning model to rule them all’: Google open-sources tools for simpler AI (ZDNet, 6/20/17).

This tension is likely to increase as multiple parties get close to a big breakthrough. The successful efforts for open source now, before superintelligence seems imminent, may help keep the research culture positive. Time will tell; if not, there could be trouble all around, and the promise of full employment for litigation attorneys.

Principle Five: Race Avoidance

The Fifth Principle is a tough one, but very important: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Moving fast and breaking things may be the mantra of Silicon Valley, but the impact of bad AI could be catastrophic. Bold is one thing, but reckless is quite another. In this area of research there may not be leisure for constant improvements to make things right. HackerWay.org.

Not only will there be legal consequences, mass liability, for any group that screws up, but the PR blow alone from a bad AI mistake could destroy most companies. Loss of trust may never be regained by a wary public, even if Congress and trial lawyers do not overreact. Sure, move fast, but not so fast that you become unsafe. Striking the right balance is going to require an acute technical and ethical sensitivity. Keep it safe.

Last Word

AI ethics is hard work, but well worth the effort. The risks and rewards are very high. The place to start this work is to talk about the fundamental principles and try to reach consensus. Everyone involved in this work is driven by a common understanding of the power of the technology, especially artificial intelligence. We all see the great changes on the horizon and share a common vision of a better tomorrow.

During an interview at the end of the Asilomar conference, Dan Weld, Professor of Computer Science, University of Washington, provided a good summary of this common vision:

In the near term I see greater prosperity and reduced mortality due to things like highway accidents and medical errors, where there’s a huge loss of life today.

In the longer term, I’m excited to create machines that can do the work that is dangerous or that people don’t find fulfilling. This should lower the costs of all services and let people be happier… by doing the things that humans do best – most of which involve social and interpersonal interaction. By automating rote work, people can focus on creative and community-oriented activities. Artificial Intelligence and robotics should provide enough prosperity for everyone to live comfortably – as long as we find a way to distribute the resulting wealth equitably.

New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. This institute, called AI2, was founded by Paul G. Allen in 2014. The mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill below left in 1981. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?
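Curious readers could actually try that experiment with off-the-shelf tools. Below is a minimal sketch using the open-source face_recognition Python library; the image file names are hypothetical stand-ins for the 1981 and 2013 photos.

```python
# A sketch of how one might test whether modern face recognition matches
# the 1981 and 2013 photos. Uses the open-source face_recognition library;
# the file names are hypothetical.
import face_recognition

img_1981 = face_recognition.load_image_file("allen_gates_1981.jpg")
img_2013 = face_recognition.load_image_file("allen_gates_2013.jpg")

# Each photo contains two faces; compute a 128-dimension encoding for each.
enc_1981 = face_recognition.face_encodings(img_1981)
enc_2013 = face_recognition.face_encodings(img_2013)

# Compare every 1981 face against every 2013 face; a smaller distance
# means more likely the same person (0.6 is the library's usual cutoff).
for i, old_face in enumerate(enc_1981):
    distances = face_recognition.face_distance(enc_2013, old_face)
    for j, d in enumerate(distances):
        verdict = "match" if d < 0.6 else "no match"
        print(f"1981 face {i} vs 2013 face {j}: distance {d:.2f} ({verdict})")
```

A hack like this would answer the question in a few minutes, which is rather the point of the Hacker Way.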

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical minded (he is on the no-fear side of the superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni also suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears no similarity to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of computer science and CEO of the Allen Institute for Artificial Intelligence, has created the three draft principles of AI ethics shown below. He first announced them in a New York Times editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). Also see his TED talk, Artificial Intelligence will empower us, not exterminate us (TEDx Seattle, November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule, on the duty to disclose. It involved a teaching assistant at Georgia Tech named Jill Watson, who served in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not suspect that a teaching assistant named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.

The story was confirmed in a later reply to the editorial by Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted, as it does not add much, and that the rule be adopted immediately. It is urgently needed not just to protect student guinea pigs, but all humans, especially those using social media. Many people are being fooled every day by bots posing as real people and creating fake news to manipulate them. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, 9/13/17); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, 6/20/17); Wu, Please Prove You’re Not a Robot (NYT, 7/15/17); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).

In the concluding section of their 2017 scholarly paper Computational Propaganda, entitled The Rise of Bots: Implications for Politics, Policy, and Method, Woolley and Guilbeault state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, 7/11/17). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be enforced as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term, Oren’s first point about the application of laws is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. Since we are mostly lawyers around here at this point, we strongly agree with this legal point. We also agree with his recommendation in the NYT editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. There is no need to amend; “clarify” would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.

