WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0

December 3, 2017

Many lawyers and technologists like predictive coding and recommend it to their colleagues. They have good reasons. It has worked for them. It has allowed them to do e-discovery reviews in an effective, cost-efficient manner, especially on big projects. That is true for me too, but that is not why I love predictive coding. My feelings come from the excitement, fun, and amazement that often arise from seeing it in action, first hand. I love watching the predictive coding features in my software find documents that I could never have found on my own. I love the way the AI in the software helps me to do the impossible. I really love how it makes me far smarter and more skilled than I really am.

I have been getting those kinds of positive feelings consistently by using the latest Predictive Coding 4.0 methodology and KrolLDiscovery’s latest eDiscovery.com Review software (“EDR”). So too have my e-Discovery Team members who helped me to participate in TREC 2015 and 2016 (the great science experiment for the latest text search techniques sponsored by the National Institute of Standards and Technology). During our grueling forty-five days of experiments in 2015, and again for sixty days in 2016, we came to admire the intelligence of the new EDR software so much that we decided to personalize the AI as a robot. We named him Mr. EDR out of respect. He even has his own website now, MrEDR.com, where he explains how he helped my e-Discovery Team in the 2015 and 2016 TREC Total Recall Track experiments.

The bottom line for us from this research was to prove and improve our methods. Our latest version 4.0 of Predictive Coding, the Hybrid Multimodal IST Method, is the result. We have even open-sourced this method, or at least most of it, and teach it in a free seventeen-class online program: TARcourse.com. Aside from testing and improving our methods, another, perhaps even more important result of TREC for us was our rediscovery that with good teamwork, and good software like Mr. EDR at your side, document review need never be boring again. The documents themselves may well be boring as hell, that’s another matter, but the search for them need not be.

How and Why Predictive Coding is Fun

Steps Four, Five and Six of the standard eight-step workflow for Predictive Coding 4.0 are where we work with the active machine-learning features of Mr. EDR. These are its predictive coding features, a type of artificial intelligence. We train the computer on our conception of relevance by showing it relevant and irrelevant documents that we have found. The software is designed to then go out and find all other relevant documents in the total dataset. One of the skills we learn is knowing when we have taught enough, so that we can stop the training and complete the document review. At TREC we call that the Stop decision. It is important for keeping down the costs of document review.
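For readers who like to see the moving parts, here is a minimal sketch of what an active learning cycle of this general kind looks like in code. It is only a generic illustration built on open-source scikit-learn; it is not Mr. EDR's actual algorithm, and the batch size and stopping threshold are made-up parameters for illustration only.

```python
# Minimal active-learning round (generic illustration only; not Mr. EDR's algorithm).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

def active_learning_round(texts, labels, batch_size=100):
    """Fit on the human-labeled documents, then rank the unreviewed ones by predicted relevance.

    texts:  list of document strings
    labels: dict {doc_index: 1 for relevant, 0 for irrelevant} from reviewer decisions
    """
    X = TfidfVectorizer(max_features=50_000).fit_transform(texts)
    labeled = sorted(labels)
    y = np.array([labels[i] for i in labeled])
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y)
    unreviewed = [i for i in range(len(texts)) if i not in labels]
    probs = model.predict_proba(X[unreviewed])[:, 1]      # predicted probability of relevance
    ranked = sorted(zip(unreviewed, probs), key=lambda t: -t[1])
    return ranked[:batch_size]                            # top-ranked documents for the next review batch

def should_stop(new_relevant_in_batch, batch_size, threshold=0.05):
    """Simplified Stop decision: stop training when a review batch yields few new relevant documents."""
    return new_relevant_in_batch / batch_size < threshold
```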

We use a multimodal approach to find training documents, meaning we use all of the other search features of Mr. EDR to find relevant ESI, such as keyword, similarity and concept searches. We iterate the training with sample documents, both relevant and irrelevant, until the computer starts to understand the scope of relevance we have in mind. It is a training exercise to make our AI smart, to get it to understand the basic ideas of relevance for that case. It usually takes multiple rounds of training for Mr. EDR to understand what we have in mind. But he is a fast learner, and by using the latest hybrid multimodal IST (“intelligently spaced training“) techniques, we can usually complete his training in a few days. At TREC, where we were moving fast after hours with the e-Discovery Team, we completed some of the training experiments in just a few hours.
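As a rough picture of what "multimodal" means in practice, the sketch below unions simple keyword hits with documents that are textually similar to relevant documents already found. The keyword list and similarity cutoff are hypothetical, and a real review platform exposes far richer search features than this.

```python
# Multimodal seed gathering (rough sketch; hypothetical keywords and cutoff, no vendor specifics).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def find_training_candidates(texts, keywords, known_relevant_ids, sim_cutoff=0.4):
    """Return indices of documents worth showing to a reviewer as possible training examples."""
    keyword_hits = {i for i, t in enumerate(texts)
                    if any(k.lower() in t.lower() for k in keywords)}
    X = TfidfVectorizer(max_features=50_000).fit_transform(texts)
    sims = cosine_similarity(X[list(known_relevant_ids)], X)   # similarity to docs already coded relevant
    similar_hits = {j for j in range(len(texts)) if sims[:, j].max() >= sim_cutoff}
    return (keyword_hits | similar_hits) - set(known_relevant_ids)
```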

After a while Mr. EDR starts to “get it,” he starts to really understand what we are after, what we think is relevant in the case. That is when a happy shock and awe type moment can happen. That is when Mr. EDR’s intelligence and search abilities start to exceed our own. Yes. It happens. The pupil then starts to evolve beyond his teachers. The smart algorithms start to see patterns and find evidence invisible to us. At that point we sometimes even let him train himself by automatically accepting his top-ranked predicted relevant documents without even looking at them. Our main role then is to determine a good range for the automatic acceptance and do some spot-checking. We are, in effect, allowing Mr. EDR to take over the review. Oh what a feeling to then watch what happens, to see him keep finding new relevant documents and keep getting smarter and smarter by his own self-programming. That is the special AI-high that makes it so much fun to work with Predictive Coding 4.0 and Mr. EDR.
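The "automatic acceptance" step can be pictured as nothing more than a rank cutoff plus a random spot-check sample. The snippet below is only a schematic of that idea; the cutoff probability and sample size are invented for illustration and are not settings from EDR.

```python
# Auto-accepting top-ranked predictions, with a random spot-check sample (schematic only).
import random

def auto_accept(ranked_docs, cutoff=0.95, spot_check_n=50):
    """ranked_docs: list of (doc_id, predicted_probability) pairs, sorted highest first."""
    accepted = [doc_id for doc_id, p in ranked_docs if p >= cutoff]         # coded relevant without eyes-on review
    spot_check = random.sample(accepted, min(spot_check_n, len(accepted)))  # sent to a human for quality control
    return accepted, spot_check
```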

It does not happen in every project, but with the new Predictive Coding 4.0 methods and the latest Mr. EDR, we are seeing this kind of transformation happen more and more often. It is a tipping point in the review when we see Mr. EDR go beyond us. He starts to unearth relevant documents that my team would never even have thought to look for. The relevant documents he finds are sometimes completely dissimilar to any others we found before. They do not have the same keywords, or even the same known concepts. Still, Mr. EDR sees patterns in these documents that we do not. He can find the hidden gems of relevance, even outliers and black swans, if they exist. When he starts to train himself, that is the point in the review when we think of Mr. EDR as going into superhero mode. At least, that is the way my young e-Discovery Team members like to talk about him.

By the end of many projects the algorithmic functions of Mr. EDR have attained a higher intelligence and skill level than our own (at least on the task of finding the relevant evidence in the document collection). He is always lightning fast and inexhaustible, even untrained, but by the end of his training, he becomes a search genius. Watching Mr. EDR in that kind of superhero mode is what makes Predictive Coding 4.0 a pleasure.

The Empowerment of AI Augmented Search

It is hard to describe the combination of pride and excitement you feel when Mr. EDR, your student, takes your training and then goes beyond you. More than that, the super-AI you created then empowers you to do things that would have been impossible before, absurd even. That feels pretty good too. You may not be Iron Man, or look like Robert Downey, but you will be capable of remarkable feats of legal search strength.

For instance, using Mr. EDR as our Iron Man-like suits, my e-Discovery Team of three attorneys was able to do thirty different review projects and classify 17,014,085 documents in 45 days. See the 2015 TREC experiment summary at MrEDR.com. We did these projects mostly at night and on weekends, while holding down our regular jobs. What makes this seem crazy, even impossible, is that we accomplished it by personally reviewing only 32,916 documents. That is less than 0.2% of the total collection. That means we relied on predictive coding to do 99.8% of our review work. Incredible, but true.
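The percentage is easy to verify from the figures given above:

\[
\frac{32{,}916}{17{,}014{,}085} \approx 0.0019, \quad \text{or about } 0.19\%.
\]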

Using traditional linear review methods it would have taken us 45 years to review that many documents! Instead, we did it in 45 days. Plus our recall and precision rates were insanely good. We even scored 100% precision and 100% recall in one TREC project in 2015 and two more in 2016. You read that right. Perfection. Many of our other projects attained scores in the high and mid nineties. We are not saying you will get results like that. Every project is different, and some are much more difficult than others. But we are saying that this kind of AI-enhanced review is not only fast and efficient, it is effective.
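For readers unfamiliar with the scoring, recall and precision are the standard information retrieval measures:

\[
\text{Recall} = \frac{|\text{relevant documents retrieved}|}{|\text{all relevant documents}|},
\qquad
\text{Precision} = \frac{|\text{relevant documents retrieved}|}{|\text{all documents retrieved}|}.
\]

A project scored at 100% recall and 100% precision therefore found every relevant document and retrieved nothing irrelevant.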

Yes, it’s pretty cool when your little AI creation does all the work for you and makes you look good. Still, no robot could do this without your training and supervision. We are a team, which is why we call it hybrid multimodal, man and machine.

Having Fun with Scientific Research at TREC 2015 and 2016

During the 2015 TREC Total Recall Track experiments my team would sometimes get totally lost on a few of the really hard Topics. We were not given legal issues to search, as we usually are. They were arcane technical hacker issues, political issues, or local news stories. Not only were we in new fields, the scope of relevance of the thirty Topics was never really explained. (We were given one to three word explanations in 2015; in 2016 we got a whole sentence!) We had to figure out the intended relevance during the project based on feedback from the automated TREC document adjudication system. We would have some limited understanding of relevance based on our suppositions about the initial keyword hints, and so we could begin to train Mr. EDR with that. But, in several Topics, we never had any real understanding of exactly what TREC thought was relevant.

This was a very frustrating situation at first, but, and here is the cool thing, even though we did not know, Mr. EDR knew. That’s right. He saw the TREC patterns of relevance hidden to us mere mortals. In many of the thirty Topics we would just sit back and let him do all of the driving, like a Google car. We would often just cheer him on (and each other) as the TREC systems kept saying Mr. EDR was right, the documents he selected were relevant. The truth is, during much of the 45 days of TREC we were like kids in a candy store having a great time. That is when we decided to give Mr. EDR a cape and superhero status. He never let us down. It is a great feeling to create an AI with greater intelligence than your own and then see it augment and improve your legal work. It is truly a hybrid human-machine partnership at its best.

I hope you get the opportunity to experience this for yourself someday. The TREC experiments in 2015 and 2016 on recall in predictive coding are over, but the search for truth and justice goes on in lawsuits across the country. Try it on your next document review project.

Do What You Love and Love What You Do

Mr. EDR, and other good predictive coding software like it, can augment our own abilities and make us incredibly productive. This is why I love predictive coding and would not trade it for any other legal activity I have ever done (although I have had similar highs from oral arguments that went great, or the rush that comes from winning a big case).

The excitement of predictive coding comes through clearly when Mr. EDR is fully trained and able to carry on without you. It is a kind of Kurzweilian mini-singularity event. It usually happens near the end of the project, but can happen earlier when your computer catches on to what you want and starts to find the hidden gems you missed. I suggest you give Predictive Coding 4.0 and Mr. EDR a try. To make it easier I open-sourced our latest method and created an online course, TARcourse.com. It will teach anyone our method, provided they have the right software. Learn the method, get the software, and then you too can have fun with evidence search. You too can love what you do. Document review need never be boring again.

Caution

One note of caution: most e-discovery vendors, including the largest, do not have active machine learning features built into their document review software. Even the few that have active machine learning do not necessarily follow the Hybrid Multimodal IST Predictive Coding 4.0 approach that we used to attain these results. They instead rely entirely on machine-selected documents for training, or even worse, rely entirely on randomly selected documents to train the software, or use elaborate, unnecessary secret control sets.

The algorithms used by some vendors who say they have “predictive coding” or “artificial intelligence” are not very good. Scientists tell me that some are only dressed-up concept search or unsupervised document clustering. Only bona fide active machine learning algorithms create the kind of AI experience that I am talking about. Software for document review that does not have any active machine learning features may be cheap, and may be popular, but it lacks the power that I love. Without active machine learning, which is fundamentally different from just “analytics,” it is not possible to boost your intelligence with AI. So beware of software that just says it has advanced analytics. Ask whether it has “active machine learning.”
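The distinction matters because the two technologies answer different questions. The snippet below is a generic, toy contrast of supervised active learning against unsupervised clustering; the synthetic data and library choices are mine and do not describe any vendor's product.

```python
# Supervised active learning vs. unsupervised clustering (toy contrast with synthetic data).
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((1000, 50))                 # stand-in feature vectors for 1,000 documents
labels = rng.integers(0, 2, size=200)      # stand-in reviewer decisions on the first 200 documents

# Active machine learning: the model keeps updating from human relevance decisions
# and can then rank every unreviewed document by predicted relevance.
clf = SGDClassifier().partial_fit(X[:200], labels, classes=[0, 1])
relevance_scores = clf.decision_function(X[200:])

# Unsupervised clustering ("analytics"): no relevance decisions ever enter the model,
# so it can group similar documents but cannot learn what the lawyer means by relevant.
clusters = KMeans(n_clusters=20).fit_predict(X)
```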

It is impossible to do the things described in this essay unless the software you are using has active machine learning features. This is clearly the way of the future. It is what makes document review enjoyable and why I love to do big projects. It turns scary into fun.

So, if you tried “predictive coding” or “advanced analytics” before, and it did not work for you, it could well be the software’s fault, not yours. Or it could be the poor method you were following. The method that we developed in Da Silva Moore, where my firm represented the defense, was a version 1.0 method. Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012). We have come a long way since then. We have eliminated unnecessary random control sets and gone to continuous training, instead of train then review. This is spelled out in the TARcourse.com that teaches our latest version 4.0 techniques.

The new 4.0 methods are not hard to follow. The TARcourse.com puts our methods online and even teaches the theory and practice. And the 4.0 methods certainly will work. We have proven that at TREC, but only if you have good software. With just a little training, and some help at first from consultants (most vendors with bona fide active machine learning features will have good ones to help), you can have the kind of success and excitement that I am talking about.

Do not give up if it does not work for you the first time, especially in a complex project. Try another vendor instead, one that may have better software and better consultants. Also, be sure that your consultants are Predictive Coding 4.0 experts, and that you follow their advice. Finally, remember that the cheapest software is almost never the best, and, in the long run will cost you a small fortune in wasted time and frustration.

Conclusion

Love what you do. It is a great feeling and a surefire way to job satisfaction and success. With these new predictive coding technologies it is easier than ever to love e-discovery. Try them out. Treat yourself to the AI high that comes from using smart machine learning software and fast computers. There is nothing else like it. If you switch to the 4.0 methods and software, you too can know that thrill. You can watch an advanced intelligence, which you helped create, exceed your own abilities, exceed anyone’s abilities. You can sit back and watch Mr. EDR complete your search for you. You can watch him do so in record time and with record results. It is amazing to see good software find documents that you know you would never have found on your own.

Predictive coding AI in superhero mode can be exciting to watch. Why deprive yourself of that? Who says document review has to be slow and boring? Start making the practice of law fun again.

Here is the PDF version of this article, which you may download and distribute, so long as you do not revise it or charge for it.

Ethical Guidelines for Artificial Intelligence Research

November 7, 2017

The most complete set of AI ethics principles developed to date, the twenty-three Asilomar Principles, was created by the Future of Life Institute in early 2017 at their Asilomar Conference. Ninety percent or more of the attendees at the conference had to agree upon a principle for it to be accepted. The first five of the agreed-upon principles pertain to AI research issues.

Although all twenty-three principles are important, the research issues are especially time sensitive. That is because AI research is already well underway by hundreds, if not thousands of different groups. There is a current compelling need to have some general guidelines in place for this research. AI Ethics Work Should Begin Now. We still have a little time to develop guidelines for the advanced AI products and services expected in the near future, but as to research, the train has already left the station.

Asilomar Research Principles

Other groups are concerned with AI ethics and regulation, including research guidelines. See the Draft Principles page of AI-Ethics.com which lists principles from six different groups. The five draft principles developed by Asilomar are, however, a good place to start examining the regulation needed for research.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Principle One: Research Goal

The proposed first principle is good, but the wording? Not so much. The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. This is a double-negative English language mishmash that only an engineer could love. Here is one way this principle could be better articulated:

Research Goal: The goal of AI research should be the creation of beneficial intelligence, not undirected intelligence.

Researchers should develop intelligence that is beneficial for all of mankind. The first general principle of the Institute of Electrical and Electronics Engineers (IEEE) is entitled “Human Benefit.” The Asilomar first principle is slightly different. It does not really say human benefit. Instead it refers to beneficial intelligence. I think the intent is to be more inclusive, to include all life on earth, all of earth. IEEE has that covered too, in its background statement of purpose to “Prioritize the maximum benefit to humanity and the natural environment.”

Pure research, where raw intelligence is created just for the hell of it, with no intended helpful “direction” of any kind, should be avoided. “Because we can” is not a valid goal. Pure, raw intelligence, with neither good intent nor bad, is not the goal here. The research goal is beneficial intelligence. Asilomar is saying that undirected intelligence is unethical and should be avoided. Social values must be built into the intelligence. This is subtle, but important.

The restriction to beneficial intelligence is somewhat controversial, but the other side of this first principle is not. Namely, that research should not be conducted to create intelligence that is hostile to humans.  No one favors detrimental, evil intelligence. So, for example, the enslavement of humanity by Terminator AIs is not an acceptable research goal. I don’t care how bad you think our current political climate is.

To be slightly more realistic, if you have a secret research goal of taking over the world, such as  Max Tegmark imagines in The Tale of the Omega Team in his book, Life 3.0, and we find out, we will shut you down (or try to). Even if it is all peaceful and well-meaning, and no one gets hurt, as Max visualizes, plotting world domination by machines is not a positive value. If you get caught researching how to do that, some of the more creative prosecuting lawyers around will find a way to send you to jail. We have all seen the cheesy movies, and so have the juries, so do not tempt us.

Keep a positive, pro-humans, pro-Earth, pro-freedom goal for your research. I do not doubt that we will someday have AI smarter than our existing world leaders, perhaps sooner than many expect, but that does not justify a machine take-over. Wisdom comes slowly and is different than intelligence.

Still, what about autonomous weapons? Is research into advanced AI in this area beneficial? Are military defense capabilities beneficial? Pro-security? Is the slaughter of robots not better than the slaughter of humans? Could robots be more ethical at “soldiering” than humans? As attorney Matt Scherer, editor of a good blog, LawAndAI.com, and a Future of Life Institute member, has noted:

Autonomous weapons are going to inherently be capable of reacting on time scales that are shorter than humans’ time scales in which they can react. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision making process. Right now there’s very clear, well-established laws in place about who is responsible for specific military decisions, under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, on what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field on the best way to react to different military situations.

Podcast: Law and Ethics of Artificial Intelligence (Future of Life, 3/31/17).

The question of beneficial or not can become very complicated, fast. Like it or not, military research into killer robots is already well underway, in both the public and private sector. Kalashnikov Will Make an A.I.-Powered Killer Robot: What could possibly go wrong? (Popular Mechanics, 7/19/17); Congress told to brace for ‘robotic soldiers’ (The Hill, 3/1/17); US military reveals it hopes to use artificial intelligence to create cybersoldiers and even help fly its F-35 fighter jet – but admits it is ALREADY playing catch up (Daily Mail, 12/15/15) (a somewhat dated and sensationalistic article, perhaps, but an easy read with several videos).

AI weapons are a fact, but they should still be regulated, in the same way that we have regulated nuclear weapons since WWII. Tom Simonite, AI Could Revolutionize War as Much as Nukes (Wired, 7/19/17); Autonomous Weapons: an Open Letter from AI & Robotics Researchers.

Principle Two: Research Funding

The second principle of Funding is more than an enforcement mechanism for the first, that you should only fund beneficial AI. It is also a recognition that ethical work requires funding too. This should be every lawyer’s favorite AI ethics principle. Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies. The principle then adds a list of four bullet-point examples.

How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? The goal of avoiding the creation of AI systems that can be hacked, easily or not, is a good one. If a hostile power can take over and misuse an AI for evil ends, then the built-in beneficence may be irrelevant. The example of a driverless car comes to mind, one that could be hacked and crashed as a perverse joy-ride, kidnapping or terrorist act.

The economic issues raised by the second example are very important: How can we grow our prosperity through automation while maintaining people’s resources and purpose? We do not want a system that only benefits the top one percent, or top ten percent, or whatever. It needs to benefit everyone, or at least try to. Also see Asilomar Principle Fifteen: Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Yoshua Bengio, Professor of Computer Science at the University of Montreal, had this important comment to make on the Asilomar principles during an interview at the end of the conference:

I’m a very progressive person so I feel very strongly that dignity and justice mean wealth is redistributed. And I’m really concerned about AI worsening the effects and concentration of power and wealth that we’ve seen in the last 30 years. So this is pretty important for me.

I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously – I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.

Most everyone at the Asilomar Conference agreed with that sentiment, but I do not yet see a strong consensus in AI businesses. Time will tell if profit motives and greed will at least be constrained by enlightened self-interest. Hopefully capitalist leaders will have the wisdom to share with all of society the great wealth that AI is likely to create.

How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? The legal example is also a good one, with the primary tension we see so far between fair versus efficient. Just policing high-crime areas might well be efficient, at least for reducing some types of crime, but would it be fair? Do we want to embed racial profiling into our AI? Neighborhood slumlord profiling? Religious or ethnic profiling? No. Existing law prohibits that, and for good reason. Still, predictive policing is already a fact of life in many cities and we need to be sure it has proper legal, ethical regulation.

We have seen the tension between “speedy” and “inexpensive” on the one hand, and “just” on the other, in Rule One of the Federal Rules of Civil Procedure and in e-discovery. When active machine learning was applied, a technical solution to these competing goals was attained. The predictive coding methods we developed allowed for both precision (“speedy” and “inexpensive”) and recall (“just”). Hopefully this success can be replicated in other areas of the law where machine learning is under proportional control by experienced human experts.

The final example given is much more troubling: What set of values should AI be aligned with, and what legal and ethical status should it have? Whose values? Who is to say what is right and wrong? This is easy in a dictatorship, or a uniform, monochrome culture (sea of white dudes), but it is very challenging in a diverse democracy. This may be the greatest research funding challenge of all.

Principle Three: Science-Policy Link

This principle is fairly straightforward, but will in practice require a great deal of time and effort to be done right. A constructive and healthy exchange between AI researchers and policy-makers is necessarily a two-way street. It first of all assumes that policy-makers, which in most countries includes government regulators, not just industry, have a valid place at the table. It assumes some form of government regulation. That is anathema to some in the business community who assume (falsely in our opinion) that all government is inherently bad and essentially has nothing to contribute. The countervailing view of overzealous government controllers who just want to jump in, uninformed, and legislate, is also discouraged by this principle. We are talking about a healthy exchange.

It does not take an AI to know this kind of give and take and information sharing will involve countless meetings. It will also require a positive, healthy attitude between the two groups. If it gets bogged down into an adversarial relationship, you can multiply the cost of compliance (and the number of meetings) by two or three. If it goes to litigation, we lawyers will smile in our tears, but no one else will. So researchers, you are better off not going there. A constructive and healthy exchange is the way to go.

Principle Four: Research Culture

The need for a good culture applies in spades to the research community itself. The Fourth Principle states: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. This favors the open source code movement for AI, but runs counter to the trade-secret business models of many corporations. See, e.g., OpenAI.com; DeepMind Open Source; Liam Tung, ‘One machine learning model to rule them all’: Google open-sources tools for simpler AI (ZDNet, 6/20/17).

This tension is likely to increase as multiple parties get close to a big breakthrough. The successful efforts for open source now, before superintelligence seems imminent, may help keep the research culture positive. Time will tell, but if not, there could be trouble all around, and the promise of full employment for litigation attorneys.

Principle Five: Race Avoidance

The Fifth Principle is a tough one, but very important: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Moving fast and breaking things may be the mantra of Silicon Valley, but the impact of bad AI could be catastrophic. Bold is one thing, but reckless is quite another. In this area of research there may not be leisure for constant improvements to make things right. See HackerWay.org.

Not only will there be legal consequences, mass liability, for any group that screws up, but the PR blow alone from a bad AI mistake could destroy most companies. Loss of trust may never be regained by a wary public, even if Congress and Trial Lawyers do not overreact. Sure, move fast, but not so fast that you become unsafe. Striking the right balance is going to require an acute technical and ethical sensitivity. Keep it safe.

Last Word

AI ethics is hard work, but well worth the effort. The risks and rewards are very high. The place to start this work is to talk about the fundamental principles and try to reach consensus. Everyone involved in this work is driven by a common understanding of the power of the technology, especially artificial intelligence. We all see the great changes on the horizon and share a common vision of a better tomorrow.

During an interview at the end of the Asilomar conference, Dan Weld, Professor of Computer Science, University of Washington, provided a good summary of this common vision:

In the near term I see greater prosperity and reduced mortality due to things like highway accidents and medical errors, where there’s a huge loss of life today.

In the longer term, I’m excited to create machines that can do the work that is dangerous or that people don’t find fulfilling. This should lower the costs of all services and let people be happier… by doing the things that humans do best – most of which involve social and interpersonal interaction. By automating rote work, people can focus on creative and community-oriented activities. Artificial Intelligence and robotics should provide enough prosperity for everyone to live comfortably – as long as we find a way to distribute the resulting wealth equitably.

Moravec’s Paradox of Artificial Intelligence and a Possible Solution by Hiroshi Yamakawa with Interesting Ethical Implications

October 29, 2017

Have you heard of Moravec’s Paradox? This is a principle discovered by AI robotics expert Hans Moravec in the 1980s. He discovered that, contrary to traditional assumptions, high-level reasoning requires relatively little computation power, whereas low-level sensorimotor skills require enormous computational resources. The paradox is sometimes simplified by the phrase: Robots find the difficult things easy and the easy things difficult. Moravec’s Paradox explains why we can now create specialized AI, such as predictive coding software to help lawyers find evidence, or AI software that can beat the top human experts at complex games such as Chess, Jeopardy and Go, but we cannot create robots as smart as dogs, much less as smart as gifted two-year-olds like my granddaughter. Also see the possible economic and cultural implications of this paradox as described, for instance, in Robots will not lead to fewer jobs – but the hollowing out of the middle class (The Guardian, 8/20/17).

Hans Moravec is a legend in the world of AI. An immigrant from Austria, he is now serving as a research professor in the Robotics Institute of Carnegie Mellon University. His work includes attempts to develop a fully autonomous robot that is capable of navigating its environment without human intervention. Aside from his paradox discovery, he is well-known for a book he wrote in 1990, Mind Children: The Future of Robot and Human Intelligence. This book has become a classic, well-known and admired by most AI scientists. It is also fairly easy for non-experts to read and understand, which is a rarity in most fields.

Moravec is also a futurist, with many of his publications and predictions focusing on transhumanism, including Robot: Mere Machine to Transcendent Mind (Oxford U. Press, 1998). In Robot he predicted that machines will attain human levels of intelligence by the year 2040, and by 2050 will have far surpassed us. His prediction may still come true, especially if the exponential acceleration of computational power following Moore’s Law continues. But for now, we still have a long way to go. The video below gives funny examples of this in a compilation of robots falling down during a DARPA competition.

But then, just a few weeks after this blog was originally published, we were shown how far along robots have come. This November 16, 2017, video of the latest Boston Dynamics robot is a dramatic example of accelerating, exponential change.

Yamakawa on Moravec’s Paradox

A recent interview of Hiroshi Yamakawa, a leading researcher in Japan working on Artificial General Intelligence (AGI), sheds light on the Moravec Paradox. See the April 5, 2017 interview of Dr. Yamakawa by a group of AI experts: Eric Gastfriend, Jason Orlosky, Mamiko Matsumoto, Benjamin Peterson, and Kazue Evans. The interview is published by the Future of Life Institute, where you will find the full transcript and more details about Yamakawa.

In his interview Hiroshi explains the Moravec Paradox and the emerging best hope for its solution: deep learning.

The field of AI has traditionally progressed with symbolic logic as its center. It has been built with knowledge defined by developers and manifested as AI that has a particular ability. This looks like “adult” intelligence ability. From this, programming logic becomes possible, and the development of technologies like calculators has steadily increased. On the other hand, the way a child learns to recognize objects or move things during early development, which corresponds to “child” AI, is conversely very difficult to explain. Because of this, programming some child-like behaviors is very difficult, which has stalled progress. This is also called Moravec’s Paradox.

However, with the advent of deep learning, development of this kind of “child” AI has become possible by learning from large amounts of training data. Understanding the content of learning by deep learning networks has become an important technological hurdle today. Understanding our inability to explain exactly how “child” AI works is key to understanding why we have had to wait for the appearance of deep learning.

Hiroshi Yamakawa calls his approach to deep learning the Whole Brain Architecture approach.

The whole brain architecture is an engineering-based research approach “To create a human-like artificial general intelligence (AGI) by learning from the architecture of the entire brain.”  … In short, the goal is brain-inspired AI, which is essentially AGI. Basically, this approach to building AGI is the integration of artificial neural networks and machine-learning modules while using the brain’s hard wiring as a reference. However, even though we are using the entire brain as a building reference, our goal is not to completely understand the intricacies of the brain. In this sense, we are not looking to perfectly emulate the structure of the brain but to continue development with it as a coarse reference.

Yamakawa sees at least two advantages to this approach.

The first is that since we are creating AI that resembles the human brain, we can develop AGI with an affinity for humans. Simply put, I think it will be easier to create an AI with the same behavior and sense of values as humans this way. Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other. …

The second merit of this unique approach is that if we successfully control this whole brain architecture, our completed AGI will arise as an entity to be shared with all of humanity. In short, in conjunction with the development of neuroscience, we will increasingly be able to see the entire structure of the brain and build a corresponding software platform. Developers will then be able to collaboratively contribute to this platform. … Moreover, with collaborative development, it will likely be difficult for this to become “someone’s” thing or project. …

Act Now for AI Safety?

As part of the interview Yamakawa was asked whether he thinks it would be productive to start working on AI Safety now. As readers here know, one of the major points of the AI-Ethics.com organization I started is that we need to begin work now on such regulations. Fortunately, Yamakawa agrees. His promising Whole Brain Architecture approach to deep learning as a way to overcome Moravec’s Paradox will thus likely have a strong ethics component. Here is Hiroshi Yamakawa’s full, very interesting answer to this question.

I do not think it is at all too early to act for safety, and I think we should progress forward quickly. Technological development is accelerating at a fast pace as predicted by Kurzweil. Though we may be in the midst of this exponential development, since the insight of humans is relatively linear, we may still not be close to the correct answer. In situations where humans are exposed to a number of fears or risks, something referred to as “normalcy bias” in psychology typically kicks in. People essentially think, “Since things have been OK up to now, they will probably continue to be OK.” Though this is often correct, in this case, we should subtract this bias.

If possible, we should have several methods to be able to calculate the existential risk brought about by AGI. First, we should take a look at the Fermi Paradox. This is a type of estimation process that proposes that we can estimate the time at which intelligent life will become extinct based on the fact that we have not yet met with alien life and on the probability that alien life exists. However, using this type of estimation would result in a rather gloomy conclusion, so it doesn’t really serve as a good guide as to what we should do. As I mentioned before, it probably makes sense for us to think of things from the perspective of increasing decision making bodies that have increasing power to bring about the destruction of humanity.

The Great Debate in AI Ethics Surfaces on Social Media: Elon Musk v. Mark Zuckerberg

August 6, 2017

I am a great admirer of both Mark Zuckerberg and Elon Musk. That is one reason why the social media debate last week between them concerning artificial intelligence, a subject also near and dear, caused such dissonance. How could they disagree on such an important subject? This blog will lay out the “great debate.”

It is far from a private argument between Elon and Mark. It is a debate that percolates throughout the scientific and technological communities concerned with AI. My sister website, AI-Ethics.com, begins with this same debate review. If you have not already visited it, I hope you will do so after reading this blog. You will also see at AI-Ethics.com that I am seeking volunteers to help: (1) prepare a scholarly article on the AI Ethics Principles already created by other groups; and (2) research the viability of sponsoring an interdisciplinary conference on AI Principles. For more background on these topics see the library of suggested videos found at AI-Ethics Videos. They provide interesting, easy to follow (for the most part), reliable information on artificial intelligence. This is something that everybody should know at least something about if they want to keep up with ever advancing technology. It is a key topic.

The Debate Centers on AI’s Potential for Superintelligence

The debate arises out of an underlying agreement that artificial intelligence has the potential to become smarter than we are, superintelligent. Most experts agree that super-evolved AI could become a great liberator of mankind that solves all problems, cures all diseases, extends life indefinitely and frees us from drudgery. Then out of that common ebullient hope arises a small group that also sees a potential dystopia. These utopia party-poopers fear that a super-evolved AI could doom us all to extinction, that is, unless we are careful. So both sides of the future prediction scenarios agree that many good things are possible, but one side insists that some very bad things are also possible, and that the dark-side risks even include extinction of the human species.

The doomsday scenarios are a concern to some of the smartest people alive today, including Stephen Hawking, Elon Musk and Bill Gates. They fear that superintelligent AIs could run amuck without appropriate safeguards. As stated, other very smart people strongly disagree with all doomsday fears, including Mark Zuckerberg.

Mark Zuckerberg’s company, Facebook, is a leading researcher in the field of general AI. In a backyard video that Zuckerberg made live on Facebook on July 24, 2017, with six million of his friends watching, Mark responded to a question from one of them: “I watched a recent interview with Elon Musk and his largest fear for future was AI. What are your thoughts on AI and how it could affect the world?”

Zuckerberg responded by saying:

I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.

In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.

Zuckerberg said AI is already helping diagnose diseases and that the AI in self-driving cars will be a dramatic improvement that saves many lives. Zuckerberg elaborated on his statement as to naysayers like Musk being irresponsible.

Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used.

But people who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that.

Mark’s position is understandable when you consider his Hacker Way philosophy, where Fast and Constant Improvements are fundamental ideas. He did, however, call Elon Musk “pretty irresponsible” for pushing AI regulations. That prompted a fast response from Elon the next day on Twitter. He responded to a question he received from one of his followers about Mark’s comment and said: “I’ve talked to Mark about this. His understanding of the subject is limited.” Elon Musk has been thinking and speaking up about this topic for many years. Elon also praises AI, but thinks that we need to be careful and consider regulations.

The Great AI Debate

In 2014 Elon Musk referred to developing general AI as summoning the demon. He is not alone in worrying about advanced AI. See, e.g., Open-AI.com and CSER.org. Stephen Hawking, usually considered the greatest genius of our time, has also commented on the potential danger of AI on several occasions. In a speech he gave in 2016 at Cambridge marking the opening of the Center for the Future of Intelligence, Hawking said: “In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Here is Hawking’s full five-minute talk on video:

Elon Musk warned state governors on July 15, 2017 at the National Governors Association Conference about the dangers of unregulated Artificial Intelligence. Musk is very concerned about any advanced AI that does not have some kind of ethics programmed into its DNA. Musk said that “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” He went on to urge the governors to begin investigating AI regulation now: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

Bill Gates agrees. He said back in January 2015 that

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Elon Musk and Bill Gates spoke together on the Dangers of Artificial Intelligence at an event in China in 2015. Elon compared work on AI to work on nuclear energy and said it was just as dangerous as nuclear weapons. He said the right emphasis should be on AI safety, that we should not be rushing into something that we don’t understand. Statements like that make us wonder: what does Elon Musk know that Mark Zuckerberg does not?

Bill Gates at the China event responded by agreeing with Musk. Bill also has some amusing, interesting statements about human wet-ware, our slow brain algorithms. He spoke of our unique human ability to take experience and turn it into knowledge. See: Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom.” Bill Gates thinks that as soon as machines gain this ability, they will almost immediately move beyond the human level of intelligence. They will read all the books and articles online, maybe also all social media and private mail. Bill has no patience for skeptics of the inherent danger of AI: How can they not see what a huge challenge this is?

Gates, Musk and Hawking are all concerned that a Super-AI using computer connections, including the Internet, could take actions of all kinds, both global and micro. Without proper standards and safeguards they could modify conditions and connections before we even knew what they were doing. We would not have time to react, nor the ability to react, unless certain basic protections are hardwired into the AI, both in silicon form and electronic algorithms. They all urge us to take action now, rather than wait and react.

To close out the argument for those who fear advanced AI and urge regulators to start thinking about how to restrain it now, consider the Ted Talk by Sam Harris on October 19, 2016, Can we build AI without losing control over it? Sam, a neuroscientist and writer, has some interesting ideas on this.

On the other side of the debate you will find most, but not all, mainstream AI researchers. You will also find many technology luminaries, such as Mark Zuckerberg and Ray Kurzweil. They think that the doomsday concerns are pretty irresponsible. Oren Etzioni, No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity (MIT Technology Review, 9/20/16); Ben Sullivan, Elite Scientists Have Told the Pentagon That AI Won’t Threaten Humanity (Motherboard 1/19/17).

You also have famous AI scholars and researchers like Pedro Domingos who are skeptical of all superintelligence fears, even of AI ethics in general. Domingos stepped into the Zuckerberg v. Musk social media dispute by siding with Zuckerberg. He told Wired on July 17, 2017 that:

Many of us have tried to educate him (meaning Musk) and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.

Tom Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems, (Wired, 7/17/17).

Domingos also famously said in his book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, a book which we recommend:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

We can relate with that. On the question of AI ethics Professor Domingos said in a 2017 University of Washington faculty interview:

But Domingos says that when it comes to the ethics of artificial intelligence, it’s very simple. “Machines are not independent agents—a machine is an extension of its owner—therefore, whatever ethical rules of behavior I should follow as a human, the machine should do the same. If we keep this firmly in mind,” he says, “a lot of things become simplified and a lot of confusion goes away.” …

It’s only simple so far as the ethical spectrum remains incredibly complex, and, as Domingos will be first to admit, everybody doesn’t have the same ethics.

“One of the things that is starting to worry me today is that technologists like me are starting to think it’s their job to be programming ethics into computers, but I don’t think that’s our job, because there isn’t one ethics,” Domingos says. “My job isn’t to program my ethics into your computer; it’s to make it easy for you to program your ethics into your computer without being a programmer.”

We agree with that too. No one wants technologists alone to be deciding ethics for the world. This needs to be a group effort, involving all disciplines, all people. It requires full dialogue on social policy, ultimately leading to legal codifications.

The Wired article of July 17, 2017, also states that Domingos thought it would be better not to focus on far-out superintelligence concerns, but instead:

America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies.

The same Wired article states that Iyad Rahwan, who works on AI and society at MIT, doesn’t deny that Musk’s nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.” We agree, but are also inclined to think we should at least try to do both at the same time. What if Musk, Gates and Hawking are right?

The Wired article also quotes Ryan Calo, a Law Professor at the University of Washington, as saying in response to the Zuckerberg v. Musk debate:

Artificial intelligence is something policy makers should pay attention to, but focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term.

Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems, (Wired, 7/17/17).

But how far out from the present is superintelligence? For a very pro-AI view, one that is not concerned with doomsday scenarios, consider the ideas of Ray Kurzweil, Google’s Director of Engineering. Kurzweil thinks that AI will attain human level intelligence by 2029, but will then mosey along and not attain super-intelligence, which he calls the Singularity, until 2045.

2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.

Kurzweil is not worried about the impact of super-intelligent AI. To the contrary, he looks forward to the Singularity and urges us to get ready to merge with the super-AIs when this happens. He looks at AI super-intelligence as an opportunity for human augmentation and immortality. Here is a video interview in February 2017 where Kurzweil responds to fears by Hawking, Gates, and Musk about the rise of strong A.I.

Note that Ray concedes the concerns are valid, but thinks they miss the point that AI will be us, not them, that humans will enhance themselves to a super-intelligence level by integrating with AI – the Borg approach (our words, not his).

Getting back to the more mainstream defenses of super-intelligent AI, consider Oren Etzioni’s Ted Talk on this topic.

Oren Etzioni thinks AI has gotten a bad rap and is not an existential threat to the human race. As the video shows, however, even Etzioni is concerned about autonomous weapons and immediate economic impacts. He invited everyone to join him and advocate for the responsible use of AI.

Conclusion

The responsible use of AI is a common ground that we can all agree upon. We can build upon and explore that ground with others at many venues, including the new one I am trying to put together at AI-Ethics.com. Write me if you would like to be a part of that effort. Our first two projects are: (1) to research and prepare a scholarly paper on the many principles proposed for AI Ethics by other groups; and (2) to put on a conference dedicated to dialogue on AI Ethics principles, not debate. See AI-Ethics.com for more information on these two projects. Ultimately we hope to mediate model recommendations for consideration by other groups and regulatory bodies.

AI-Ethics.com is looking forward to working with non-lawyer technologists, scientists and others interested in AI ethics. We believe that success in this field depends on diversity. It has to be very interdisciplinary to succeed. Lawyers should be included in this work, but we should remain a minority. Diversity is key here. We will even allow AIs, but first they must pass a little test you may have heard of. When it comes to something as important as all this, all faces should be in the book, including all colors, races, sexes, nationalities and educational backgrounds, from all interested companies, institutions, foundations, governments, agencies, firms and teaching institutions around the globe. This is a human effort for a good AI future.
