The Great Debate in AI Ethics Surfaces on Social Media: Elon Musk v. Mark Zuckerberg

August 6, 2017

I am a great admirer of both Mark Zuckerberg and Elon Musk. That is one reason why the social media debate last week between them concerning artificial intelligence, a subject also near and dear to me, caused such dissonance. How could they disagree on such an important subject? This blog will lay out the “great debate.”

It is far from a private argument between Elon and Mark. It is a debate that percolates throughout the scientific and technological communities concerned with AI. My sister website, AI-Ethics.com, opens with a review of this same debate. If you have not already visited that site, I hope you will do so after reading this blog. You will also see at AI-Ethics.com that I am seeking volunteers to help: (1) prepare a scholarly article on the AI Ethics Principles already created by other groups; and (2) research the viability of sponsoring an interdisciplinary conference on AI Principles. For more background on these topics, see the library of suggested videos found at AI-Ethics Videos. They provide interesting, easy-to-follow (for the most part), reliable information on artificial intelligence. It is a key topic that everybody should know at least something about if they want to keep up with ever-advancing technology.

The Debate Centers on AI’s Potential for Superintelligence

The debate arises out of an underlying agreement that artificial intelligence has the potential to become smarter than we are, superintelligent. Most experts agree that super-evolved AI could become a great liberator of mankind, one that solves all problems, cures all diseases, extends life indefinitely and frees us from drudgery. Then, out of that common ebullient hope, arises a smaller group that also sees a potential dystopia. These utopia party-poopers fear that a super-evolved AI could doom us all to extinction unless we are very careful. So both sides of the future-prediction scenarios agree that many good things are possible, but one side insists that some very bad things are also possible, and that the dark-side risks even include extinction of the human species.

The doomsday scenarios are a concern to some of the smartest people alive today, including Stephen Hawking, Elon Musk and Bill Gates. They fear that superintelligent AIs could run amok without appropriate safeguards. As stated, other very smart people, including Mark Zuckerberg, strongly disagree with all doomsday fears.

Mark Zuckerberg’s company, Facebook, is a leading researcher in the field of general AI. In a backyard video that Zuckerberg made live on Facebook on July 24, 2017, with six million of his friends looking on, Mark responded to a question from one of them: “I watched a recent interview with Elon Musk and his largest fear for future was AI. What are your thoughts on AI and how it could affect the world?”

Zuckerberg responded by saying:

I have pretty strong opinions on this. I am optimistic. I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.

In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.

Zuckerberg said AI is already helping diagnose diseases and that the AI in self-driving cars will be a dramatic improvement that saves many lives. Zuckerberg then elaborated on his statement that naysayers like Musk are being irresponsible:

Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used.

But people who are arguing for slowing down the process of building AI, I just find that really questionable. I have a hard time wrapping my head around that.

Mark’s position is understandable when you consider his Hacker Way philosophy, where Fast and Constant Improvements are fundamental ideas. He did, however, call Elon Musk “pretty irresponsible” for pushing AI regulations. That prompted a fast response from Elon on Twitter the next day. Responding to a question from one of his followers about Mark’s comment, he said: “I’ve talked to Mark about this. His understanding of the subject is limited.” Elon Musk has been thinking and speaking up about this topic for many years. Elon also praises AI, but thinks that we need to be careful and consider regulations.

The Great AI Debate

In 2014 Elon Musk referred to developing general AI as “summoning the demon.” He is not alone in worrying about advanced AI. See, e.g., Open-AI.com and CSER.org. Stephen Hawking, usually considered the greatest genius of our time, has also commented on the potential danger of AI on several occasions. In a speech he gave in 2016 at Cambridge marking the opening of the Centre for the Future of Intelligence, Hawking said: “In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Here is Hawking’s full five-minute talk on video:

Elon Musk warned state governors on July 15, 2017 at the National Governors Association Conference about the dangers of unregulated Artificial Intelligence. Musk is very concerned about any advanced AI that does not have some kind of ethics programmed into its DNA. Musk said that “AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that.” He went on to urge the governors to begin investigating AI regulation now: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

Bill Gates agrees. He said back in January 2015 that

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Elon Musk and Bill Gates spoke together on the Dangers of Artificial Intelligence at an event in China in 2015. Elon compared work on AI to work on nuclear energy and said it was just as dangerous as nuclear weapons. He said the right emphasis should be on AI safety, and that we should not be rushing into something that we don’t understand. Statements like that make us wonder what Elon Musk knows that Mark Zuckerberg does not.

Bill Gates at the China event responded by agreeing with Musk. Bill also made some amusing, interesting statements about human wet-ware, our slow brain algorithms. He spoke of our unique human ability to take experience and turn it into knowledge. See: Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom.” Bill Gates thinks that as soon as machines gain this ability, they will almost immediately move beyond the human level of intelligence. They will read all the books and articles online, maybe also all social media and private mail. Bill has no patience for skeptics of the inherent danger of AI: how can they not see what a huge challenge this is?

Gates, Musk and Hawking are all concerned that a Super-AI using computer connections, including the Internet, could take actions of all kinds, both global and micro. Without proper standards and safeguards it could modify conditions and connections before we even knew what it was doing. We would not have the time to react, nor the ability, unless certain basic protections are hardwired into the AI, both in silicon and in its algorithms. They all urge us to take action now, rather than wait and react.

To close out the argument for those who fear advanced AI and urge regulators to start thinking now about how to restrain it, consider the TED Talk given by Sam Harris on October 19, 2016, Can we build AI without losing control over it? Sam, a neuroscientist and writer, has some interesting ideas on this.

On the other side of the debate you will find most, but not all, mainstream AI researchers. You will also find many technology luminaries, such as Mark Zuckerberg and Ray Kurzweil. They think that the doomsday concerns are pretty irresponsible. Oren Etzioni, No, the Experts Don’t Think Superintelligent AI is a Threat to Humanity (MIT Technology Review, 9/20/16); Ben Sullivan, Elite Scientists Have Told the Pentagon That AI Won’t Threaten Humanity (Motherboard 1/19/17).

You also have famous AI scholars and researchers like Pedro Domingos who are skeptical of all superintelligence fears, even of AI ethics in general. Domingos stepped into the Zuckerberg v. Musk social media dispute by siding with Zuckerberg. He told Wired on July 17, 2017 that:

Many of us have tried to educate him (meaning Musk) and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent.

Tom Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems, (Wired, 7/17/17).

Domingos also famously said in his book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, a book which we recommend:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

We can relate to that. On the question of AI ethics, Professor Domingos said in a 2017 University of Washington faculty interview:

But Domingos says that when it comes to the ethics of artificial intelligence, it’s very simple. “Machines are not independent agents—a machine is an extension of its owner—therefore, whatever ethical rules of behavior I should follow as a human, the machine should do the same. If we keep this firmly in mind,” he says, “a lot of things become simplified and a lot of confusion goes away.” …

It’s only simple so far as the ethical spectrum remains incredibly complex, and, as Domingos will be first to admit, everybody doesn’t have the same ethics.

“One of the things that is starting to worry me today is that technologists like me are starting to think it’s their job to be programming ethics into computers, but I don’t think that’s our job, because there isn’t one ethics,” Domingos says. “My job isn’t to program my ethics into your computer; it’s to make it easy for you to program your ethics into your computer without being a programmer.”

We agree with that too. No one wants technologists alone to be deciding ethics for the world. This needs to be a group effort, involving all disciplines, all people. It requires full dialogue on social policy, ultimately leading to legal codifications.

The Wired article of July 17, 2017, also states that Domingos thought it would be better not to focus on far-out superintelligence concerns, but that instead:

America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies.

The same Wired article states that Iyad Rahwan, who works on AI and society at MIT, doesn’t deny that Musk’s nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.” We agree, but are also inclined to think we should at least try to do both at the same time. What if Musk, Gates and Hawking are right?

The Wired article also quotes Ryan Calo, a law professor at the University of Washington, as saying in response to the Zuckerberg v. Musk debate:

Artificial intelligence is something policy makers should pay attention to, but focusing on the existential threat is doubly distracting from its potential for good and the real-world problems it’s creating today and in the near term.

Simonite, Elon Musk’s Freak-Out Over Killer Robots Distracts from Our Real AI Problems, (Wired, 7/17/17).

But how far out from the present is superintelligence? For a very pro-AI view, one that is not concerned with doomsday scenarios, consider the ideas of Ray Kurzweil, Google’s Director of Engineering. Kurzweil thinks that AI will attain human-level intelligence by 2029, but will then mosey along and not attain super-intelligence, which he calls the Singularity, until 2045.

2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created.

Kurzweil is not worried about the impact of super-intelligent AI. To the contrary, he looks forward to the Singularity and urges us to get ready to merge with the super-AIs when this happens. He looks at AI super-intelligence as an opportunity for human augmentation and immortality. Here is a video interview in February 2017 where Kurzweil responds to fears by Hawking, Gates, and Musk about the rise of strong A.I.

Note that Ray concedes the concerns are valid, but thinks they miss the point that AI will be us, not them; humans will enhance themselves to the super-intelligence level by integrating with AI – the Borg approach (our words, not his).

Getting back to the more mainstream defenses of super-intelligent AI, consider Oren Etzioni’s TED Talk on this topic.

Oren Etzioni thinks AI has gotten a bad rap and is not an existential threat to the human race. As the video shows, however, even Etzioni is concerned about autonomous weapons and immediate economic impacts. He invited everyone to join him and advocate for the responsible use of AI.

Conclusion

The responsible use of AI is a common ground that we can all agree upon. We can build upon and explore that ground with others at many venues, including the new one I am trying to put together at AI-Ethics.com. Write me if you would like to be a part of that effort. Our first two projects are: (1) to research and prepare a scholarly paper on the many principles proposed for AI Ethics by other groups; and (2) to put on a conference dedicated to dialogue on AI Ethics principles, not a debate. See AI-Ethics.com for more information on these two projects. Ultimately we hope to mediate model recommendations for consideration by other groups and regulatory bodies.

AI-Ethics.com is looking forward to working with non-lawyer technologists, scientists and others interested in AI ethics. We believe that success in this field depends on diversity. It has to be very interdisciplinary to succeed. Lawyers should be included in this work, but we should remain a minority. Diversity is key here. We will even allow AIs, but first they must pass a little test you may have heard of. When it comes to something as important as this, all faces should be in the book, including all colors, races, sexes, nationalities and educations, from all interested companies, institutions, foundations, governments, agencies, firms and teaching institutions around the globe. This is a human effort for a good AI future.

 

 



What Information Theory Tells Us About e-Discovery and the Projected ‘Information → Knowledge → Wisdom’ Transition

May 28, 2016

This is an article on Information Theory, the Law, e-Discovery, Search and the evolution of our computer technology culture from Information → Knowledge → Wisdom. The article as usual assumes familiarity with writings on AI and the Law, especially active machine learning types of Legal Search. The article also assumes some familiarity with the scientific theory of Information as set forth in James Gleick’s book, The Information: a history, a theory, a flood (2011). I will begin the essay with several good instructional videos on Gleick’s book and Information Theory, including a bit about the life and work of the founder of Information Theory, Claude Shannon. Then I will provide my personal recapitulation of this theory and explore its application to two areas of my current work:

  1. The search for needles of relevant evidence in large, chaotic, electronic storage systems, such as email servers and email archives, in order to find the truth, the whole truth, and nothing but the truth needed to resolve competing claims of what happened – the facts – in the context of civil and criminal law suits and investigations.
  2. The articulation of a coherent social theory that makes sense of modern technological life, a theory that I summarize with the phrase: Information → Knowledge → Wisdom. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers and the more recent, How The 12 Predictions Are Doing That We Made In “Information → Knowledge → Wisdom.”

I essentially did the same thing in my blog last week applying Chaos Theories. What Chaos Theory Tell Us About e-Discovery and the Projected ‘Information → Knowledge → Wisdom’ Transition. This essay will, to some extent, build upon the last and so I suggest you read it first.

Information Theory

Gleick’s The Information: a history, a theory, a flood covers the history of cybernetics, computer science, and the men and women involved with Information Theory over the last several decades. Gleick explains how these information scientists today think that everything is ultimately information. The entire Universe, matter and energy, life itself, is made up of information. Information in turn is ultimately binary, zeros and ones, on and off, yes and no. It is all bits and bytes.
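
To make the bit counting concrete, here is a minimal sketch in Python, my own illustration rather than anything from Gleick’s book, that computes the Shannon information content of a short message from the relative frequencies of its characters. The sample message and names are made up for the example.

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Average information per character, in bits (Shannon's H)."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

msg = "the falling tree makes a sound only when someone hears it"
h = shannon_entropy_bits(msg)
print(f"about {h:.2f} bits per character, {h * len(msg):.0f} bits in all")
```

Note that the bit count is the same whether the message is profound or nonsense; as the Gleick interview quoted later in this essay points out, meaning is not part of the technical definition.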

Here are three videos, including two interviews of James Gleick, to provide a refresher on Information Theory for those who have not read his book recently. Information Wants to Have Meaning. Or Does It? (3:40, Big Think, 2014).

The Story of Information (3:47, 4th Estate Books, 2012).

The generally accepted Father of Information Theory is Claude Shannon (1916-2001). He was a great visionary engineer whose ideas and inventions led to our current computer age. Among other things, he coined the word Bit in 1948 as the basic unit of information. He was also one of the first MIT hackers, in the original sense of the word as a tinkerer who was always building new things. The following is a half-hour video by University of California Television (2008) that explains his life’s work and theories. It is worth taking the time to watch it.

Shannon was an unassuming genius, and like Mandelbrot, very quirky and interested in many different things in a wide variety of disciplines. Aside from being a great mathematician, Bell Labs engineer, and MIT professor, Shannon also studied game theory. He went beyond theory and devised several math based probability methods to win at certain games of chance, including card counting at blackjack. He collaborated with a friend at MIT, another mathematician, Edward Thorp, who became a professional gambler.

Shannon, his wife, and Thorp travelled regularly to Las Vegas for a couple of years in the early sixties, where they constantly won at the tables using their math tricks, including card counting. Shannon wanted to beat the roulette wheel too, but the system he and Thorp developed to do that required probability calculations beyond what he could do in his head. To solve this problem, in 1961 he invented a small, concealable computer, the world’s first wearable computer, to help him calculate the odds. It was the size of a cigarette pack. His Las Vegas exploits became the very loose factual basis for the 2008 movie “21,” in which Kevin Spacey played a professor loosely based on Shannon. (Poor movie, not worth watching.)

Shannon made even more money by applying his math abilities in the stock market. The list of his eclectic genius goes on and on, including his invention in 1950 of an electromechanical mouse named Theseus that could teach itself how to escape from a maze. Shannon’s mouse appears to have been the first artificial learning device. All that, and he was also an ardent juggler and builder/rider of little bitty unicycles (you cannot make this stuff up). Here is another good video of his life, and yet another to celebrate 2016 as the 100th year after his birth, The Shannon Centennial: 1100100 years of bits by the IEEE Information Theory Society.


_______

For a different view loosely connected with Information Theory, I recommend that you listen to an interesting Google Talk by Gleick: “The Information: A History, a Theory, a Flood” – Talks at Google (53:45, Google, 2011). It pertains to news and culture and the tension between a humanistic and a mechanical approach, a difference that mirrors the tension between Information and Knowledge. This is a must-watch for all news readers, especially NY Times readers, and for everyone who consumes, filters, creates and curates Information (a Google term). This video has a good dialogue concerning modern culture and search.

As you can see from the above Google Talk, a kind of Hybrid Multimodal approach seems to be in use in all advanced search. At Google they called it a “mixed-model.” The search tools are designed to filter identity-consonance in favor of diverse harmonies. Crowdsourcing and algorithms function as a curation authority to facilitate Google search. This is a kind of editing by omission that human news editors have been doing for centuries.

The mixed-model approach implied here has both human and AI editors working together to create new kinds of interactive search. Again, good search depends upon a combination of AI and human intelligence. Neither side should work alone, and commercial interests should not be allowed to take control. Both humans and machines should create bits and transmit them. People should use AI software to refine their own searches as an ongoing process. This should be a conversation, an interactive Q&A. This is the way out of mere Information and into Knowledge.


Personal Interpretation of Information Theory

My takeaway from the far-out reaches of Information Theory is that everything is information, even life. All living entities are essentially algorithms of information, including humans. We are intelligent programs, binary code capable of deciding yes or no, capable of conscious, intelligent action. Our ultimate function is to transform information, to process and connect otherwise cold, random data. That is the way most Information Theorists and their philosophers see it, although I am not sure I agree.

Life forms like us are said to stand as the counter-pole to the Second Law of Thermodynamics. The First Law you will remember is that energy cannot be created or destroyed. The Second Law is that the natural tendency of any isolated system is to degenerate into a more disordered state. The Second Law is concerned with the observed one-directional nature of all energy processes. For example, heat always flows spontaneously from hotter to colder bodies, and never the reverse, unless external work is performed on the system. The result is that entropy always increases with the flow of time.

The Second Law is causality by multiplication, not a zig-zag Mandelbrot fractal division. See my last blog on Chaos Theory. Also see the work of the Austrian physicist Ludwig Boltzmann (1844–1906) on gas-dynamical equations, and his famous H-theorem: the entropy of a gas prepared in a state of less than complete disorder must inevitably increase as the gas molecules are allowed to collide. Boltzmann’s theorem-proof assumed “molecular chaos,” or, as he put it, the Stosszahlansatz, where all particle velocities were completely uncorrelated, random, and did not follow from Newtonian dynamics. His proof of the Second Law was attacked based on the random-state assumption and the so-called Loschmidt’s paradox. The attacks from pre-Chaos, Newtonian-dominated scientists, many of whom still did not even believe in atoms and molecules, contributed to Boltzmann’s depression and, tragically, he hanged himself at age 62.
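
For readers who want the notation behind that summary, the textbook statement of Boltzmann’s entropy and his H-theorem runs roughly as follows. This is the standard formulation, given only for reference, not anything specific to Gleick’s account:

```latex
S = k_B \ln W, \qquad
H(t) = \int f(\mathbf{v}, t)\, \ln f(\mathbf{v}, t)\, d^3v, \qquad
\frac{dH}{dt} \le 0
```

Here S is the entropy, k_B is Boltzmann’s constant, W is the number of microstates consistent with the observed macrostate, and f(v, t) is the velocity distribution of the gas molecules. Under the molecular-chaos assumption described above, H can only decrease over time, which is equivalent to saying that entropy can only increase.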

My personal interpretation of Information Theory is that humans, like all of life, counter-act and balance the Second Law. We do so by an organizing force called negentropy that balances out entropy. Complex algorithms like ourselves can recognize order in information, can make sense of it. Information can have meaning, but only by our apprehension of it. We hear the falling tree and thereby make it real.

This is what I mean by the transition from Information to Knowledge. Systems that have ability to process information, to bring order out of chaos, and attach meaning to information, embody that transition. Information is essentially dead, whereas Knowledge is living. Life itself is a kind of Information spun together and integrated into meaningful Knowledge.

We humans have the ability to process information, to find connections and meaning. We have created machines to help us do that. We now have information systems, algorithms, that can learn, both on their own and with our help. We humans also have the ability to find things. We can search and filter to perceive the world in such a way as to comprehend its essential truth, to see through appearances. It is an essential survival skill. The unseen tiger is death. Now, in the Information Age, we have created machines to help us find things, to help us see the hidden patterns.

We can create meaning; we can know the truth. Our machines, our robot friends, can help us in these pursuits. They can help us attain insights into the hidden order behind chaotic systems of otherwise meaningless information. Humans are negentropic to a high degree, probably more so than any other living system on this planet. With the help of our robot friends, humans can quickly populate the world with meaning and move beyond a mere Information Age. We can find order, process the binary yes-or-no choices and generate Knowledge. This is similar to the skilled editor’s function discussed in Gleick’s Talks at Google (53:45, Google, 2011), but one whose abilities are greatly enhanced by AI analytics and crowdsourcing. The arbitration of truth, as they put it in the video, is thereby facilitated.

With the help of computers our abilities to create Knowledge are exploding. We may survive the Information flood. Some day our Knowledge may evolve even further, into higher-level integrations – into Wisdom.

When James Gleick was interviewed by Publishers Weekly in 2011 about his book, The Information: a history, a theory, a flood, he touched upon the problem with Information:

By the technical definition, all information has a certain value, regardless of whether the message it conveys is true or false. A message could be complete nonsense, for example, and still take 1,000 bits. So while the technical definition has helped us become powerful users of information, it also instantly put us on thin ice, because everything we care about involves meaning, truth, and, ultimately, something like wisdom. And as we now flood the world with information, it becomes harder and harder to find meaning. That paradox is the final tension in my book.

Application of Information Theory to e-Discovery and Social Progress

In responding to lawsuits we must search through information stored in computer systems. We are searching for information relevant to a dispute. This dispute always arises after the information was created and stored. We do not order and store information according to the issues in a dispute or litigation that has not yet happened. This means that for purposes of litigation all information storage systems are inherently entropic, chaotic. They are always inadequately ordered, as far as the lawsuit is concerned. Even if the ESI storage is otherwise well-ordered, which in practice is very rare (think randomly stored PST files and personal email accounts), it is never well-ordered for a particular lawsuit.

As forensic evidence finders we must always sort through meaningless, irrelevant noise to find the meaningful, relevant information we need. The information we search is usually not completely random. There is some order to it, some meaning. There are, for instance, custodian and time parameters that assist our search for relevance. But the ESI we search is never presented to us arranged in an order that tracks the issues raised by the new lawsuit. The ESI we search is arranged according to other logic, if any at all.

It is our job to bring order to the chaos, meaning to the information, by separating the relevant information from the irrelevant information. We search and find the documents that have meaning for our case. We use sampling, metrics, and iteration to achieve our goals of precision and recall. Once we separate the relevant documents from the irrelevant, we attain some knowledge of the total dataset. We have completed First Pass Review, but our work is not finished. All of the relevant information found in the First Pass is not produced.
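
As a rough illustration of the sampling and metrics mentioned above, here is a minimal sketch in Python of how precision and recall might be estimated for a first-pass review. The collection size, richness and error rates are invented for the example; they are not drawn from any actual matter.

```python
import random

# Hypothetical collection: ~2% of documents are truly relevant,
# and the first-pass review (human or machine) is imperfect.
random.seed(42)
collection = [{"relevant": random.random() < 0.02} for _ in range(100_000)]
for doc in collection:
    if doc["relevant"]:
        doc["coded_relevant"] = random.random() < 0.85   # some relevant docs are missed
    else:
        doc["coded_relevant"] = random.random() < 0.01   # some junk is coded relevant

tp = sum(d["relevant"] and d["coded_relevant"] for d in collection)
fp = sum((not d["relevant"]) and d["coded_relevant"] for d in collection)
fn = sum(d["relevant"] and (not d["coded_relevant"]) for d in collection)

precision = tp / (tp + fp)   # of what we coded relevant, how much really is
recall = tp / (tp + fn)      # of what really is relevant, how much we found
print(f"precision ~ {precision:.2f}, recall ~ {recall:.2f}")
```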

Additional information refinement is required. More yes-no decisions must be made in what is called Second Pass Review. Now we consider whether a relevant document is privileged and thus excluded from production, or whether portions of it must be redacted to protect confidentiality.

Even after our knowledge is so further enhanced by confidentiality sorting, and a production set is made, the documents produced, our work is still incomplete. There is almost always far too much information in the documents produced for them to be useful. The information must be further processed. Relevancy itself must be ranked. The relevant documents must be refined down to the 7 +/- 2 documents that will persuade the judge and jury to rule our way, to reach the yes or no decision we seek. The vast body of knowledge, relevant evidence, must become wisdom, must become persuasive evidence.


In a typical significant lawsuit the metrics of this process are as follows: from trillions, to thousands, to a handful. (You can change the numbers if you want to fit the dispute, but what counts here are the relative proportions.)

In a typical lawsuit today we begin with an information storage system that contains trillions of computer files. A competent e-discovery team is able to reduce this down to tens of thousands of files, maybe less, that are relevant. The actual count depends on many things, including issue complexity, cooperation and Rule 26(b)(1) factors. The step from trillions of files, to tens of thousands of relevant files, is the step from information to knowledge. Many think this is what e-discovery is all about: find the relevant evidence, convert Information to Knowledge. But it is not. It is just the first step: from 1 to 2. The next step, 2 to 3, the Wisdom step, is more difficult and far more important.

The tens of thousands of relevant documents, the knowledge of the case, are still too vast to be useful. After all, the human brain can, at best, only keep seven items in mind at a time. Miller, The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, Psychological Review 63 (2): 81–97. Tens of thousands of documents, or even thousands of documents, are not helpful to jurors. It may all be relevant, but it is not all important. All trial lawyers will tell you that trials are won or lost by only five to nine documents. The rest is just noise, or soon-forgotten foundation. Losey, Secrets of Search – Part III (5th secret).

The final step of information processing in e-discovery is only complete when the tens of thousands of files are winnowed down to five to nine documents, or fewer. That is the final step of Information’s journey, the elevation from Knowledge to Wisdom.
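
The relative proportions in that journey can be put into rough numbers. The figures below are purely illustrative, chosen only to match the trillions-to-thousands-to-a-handful pattern described above, not metrics from any real case:

```python
# Illustrative funnel: Information -> Knowledge -> Wisdom
information = 2_000_000_000_000  # files in the storage systems searched
knowledge = 20_000               # relevant documents identified in review
wisdom = 7                       # the 7 +/- 2 documents used to persuade

print(f"Information -> Knowledge: keep about 1 file in {information // knowledge:,}")
print(f"Knowledge -> Wisdom: keep about 1 document in {knowledge // wisdom:,}")
```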

Our challenge as e-discovery team members is to take raw information and turn it into wisdom – the five to nine documents with powerful meaning that will produce the favorable legal rulings that we seek. Testimony helps too of course, but without documents, it is difficult to test memory accuracy, much less veracity. This evidence journey mirrors the challenge of our whole culture, to avoid drowning in too-much-information, to rise above, to find Knowledge and, with luck, a few pearls of Wisdom.

Conclusion

From trillions to a handful, from mere information to practical wisdom: that is the challenge of our culture today. On a recursive, self-similar level, that is also the challenge of justice in the Information Age, the challenge of e-discovery. How to meet these challenges? How to self-organize out of the chaos of too much information? The answer is iterative, cooperative, interactive, interdisciplinary team processes that employ advanced hybrid, multimodal technologies and sound human judgment. See What Chaos Theory Tell Us About e-Discovery and the Projected ‘Information → Knowledge → Wisdom’ Transition.

The micro-answer for cyber-investigators searching for evidence is fast becoming clear. It depends on a balanced hybrid application of human and artificial intelligence. What was once a novel invention, TAR, or technology assisted review, is rapidly becoming an obvious solution accepted in courts around the world. Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125 (S.D.N.Y. 2015); Pyrrho Investments v MWB Property, EWHC 256 (Ch) (2/26/16). That is how information works. What was novel one day, even absurd, can very quickly become commonplace. We are creating, transmitting and processing information faster than ever before. The bits are flying at a rate that even Claude Shannon would never have dreamed possible.

The pace of change quickens as information and communication grows. New information flows and inventions propagate. The encouragement of negentropic innovation – ordered bits – is the basis of our property laws and commerce. The right information at the right time has great value.

Just ask a trial lawyer armed with five powerful documents — five smoking guns. These essential core documents are what make or break a case. The rest is just so much background noise, relevant but unimportant. The smoking hot Wisdom is what counts, not Information, not even Knowledge, although they are, of course, necessary prerequisites. There is a significant difference between inspiration and wisdom. Real wisdom does not just appear out of thin air. It arises out of True Information and Knowledge.

The challenge of Culture, including Law and Justice in our Information Age, is to never lose sight of this fundamental truth, this fundamental pattern: Information → Knowledge → Wisdom. If we do, we will get lost in the details. We will drown in a flood of meaningless information. Either that, or we will progress, but not far enough. We will become lost in knowledge and suffer paralysis by analysis. We will know too much, know everything, except what to do. Yes or No. Binary action. The tree may fall, but we never hear it, so neither does the judge or jury. The power of the truth is denied.

There is deep knowledge to be gained from both Chaos and Information Theories that can be applied to these challenges. Some of the insights can be applied in legal search and other cyber investigations. Others can be applied in other areas. As shown in this essay, details are important, but never lose sight of the fundamental pattern. You are looking for the few key facts. Like the Mandelbrot Set, they remain the same, or at least similar, over different scales of magnitude, from the small county court case to the largest complex multinational actions. Each case is different, yet the same. The procedures tie them all together.

Meaning is the whole point of Information. Justice is the whole point of the Law.

You find the truth of a legal controversy by finding the hidden order that ties all of the bits of evidence together. You find the hidden meaning behind all of the apparently contradictory clues, a fractal link among the near-infinite strings of bits and bytes.

What really happened? What is the just response, the equitable remedy? That is the ultimate meaning of e-discovery, to find the few significant, relevant facts in large chaotic systems, the facts that make or break your case, so that judges and juries can make the right call. Perhaps this is the ultimate meaning of many of life’s challenges? I do not have the wisdom yet to know, but, as Cat Stevens says, I’m on the road to find out.


e-Discovery Team’s Best Practices Education Program

May 8, 2016


EDBP
Mr. EDR
Predictive Coding 3.0
59 TAR Articles
Doc Review Videos

_______


e-Discovery Team Training

Information → Knowledge → Wisdom

Education is the clearest path from Information to Knowledge in all fields of contemporary culture, including electronic discovery. The above links take you to the key components of the best-practices teaching program I have been working on since 2006. It is my hope that these education programs will help move the Law out of the dangerous information flood, where it is now drowning, to a safer refuge of knowledge. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers, and How The 12 Predictions Are Doing That We Made In “Information → Knowledge → Wisdom.” For more of my thoughts on e-discovery education, see the e-Discovery Team School Page.

The best practices and general educational curriculum that I have developed over the years focus on the legal services provided by attorneys. The non-legal, engineering and project management practices of e-discovery vendors are only collaterally mentioned. They are important too, but students have the EDRM and other commercial organizations and certifications for that. Vendors are part of any e-Discovery Team, but the programs I have developed are intended for law firms and corporate law departments.

The e-Discovery Team program, both general educational and legal best-practices, is online and available 24/7. It uses lots of imagination, creative mixes, symbols, photos, hyperlinks, interactive comments, polls, tweets, posts, news, charts, drawings, videos, video lectures, slide lectures, video skits, video slide shows, music, animations, cartoons, humor, stories, cultural themes and analogies, inside baseball references, rants, opinions, bad jokes, questions, homework assignments, word-clouds, links for further research, a touch of math, and every lawyer’s favorite tools: words (lots of them), logic, arguments, case law and precedent.

All of this is designed to take the e-Discovery Team approach from mere information up to knowledge. In spite of these efforts, most of the legal community still does not know e-discovery very well. What they do know is often misinformation. Scenes like the following in a law firm lit-support department are all too common.

The e-Discovery Team’s education program has an emphasis on document review. That is because the fees for lawyers reviewing documents are by far the most expensive part of e-discovery, even when contract lawyers are used. The lawyer review fees, and review supervision fees, including SME fees, have always been much more costly than all vendor costs and expenses put together. Still, the latest AI technologies, especially active machine learning using our Predictive Coding 3.0 methods, are now making it possible to significantly reduce review fees. We believe this is a critical application of best practices. The three steps we identify for this area in the EDBP chart are shown in green, to signify money. The reference to C.A. Review is to Computer Assisted Review, or CAR, using our Hybrid Multimodal methods.


____

Predictive Coding 3.0 Hybrid Multimodal Document Search and Review

Our new version 3.0 techniques for predictive coding make it far easier than ever before to include AI in a document review project. The secret control set has been eliminated, and so have the seed set and the practice of SMEs wasting their time reviewing random samples of mostly irrelevant junk. It is a much simpler technique now, although we still call it Hybrid Multimodal.

Hybrid is a reference to the Man/Machine interactive nature of our methods. A skilled attorney uses a type of continuous active learning to train an AI to help them find the documents they are looking for. This Hybrid method greatly augments the speed and accuracy of the human attorneys in charge. This leads to cost savings and improved recall. A lawyer with an AI helper at their side is far more effective than lawyers working on their own. This means that every e-discovery team today could use a robot like Kroll Ontrack’s Mr. EDR to help them do document review.
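
The following is a minimal sketch, in Python with scikit-learn, of the general continuous-active-learning pattern just described. It is only an illustration of the training loop, not a reproduction of Kroll Ontrack’s Mr. EDR or of our Predictive Coding 3.0 workflow; the documents, starting labels and the keyword stand-in for attorney judgment are all invented for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-collection; in practice these would be the documents in the matter.
docs = [
    "merger pricing discussion with outside counsel",
    "weekly cafeteria menu and parking reminder",
    "draft term sheet for the proposed acquisition",
    "fantasy football league standings",
    "board minutes approving the acquisition price",
    "holiday party photos attached",
]
vectors = TfidfVectorizer().fit_transform(docs)

# The attorney's relevance calls so far (1 = relevant, 0 = not), starting from a few
# documents found by keyword or other multimodal searches.
labels = {0: 1, 1: 0}

for round_number in range(3):
    # Retrain on every document coded so far (continuous active learning).
    model = LogisticRegression().fit(
        vectors[list(labels)], [labels[i] for i in labels]
    )
    unreviewed = [i for i in range(len(docs)) if i not in labels]
    if not unreviewed:
        break
    # Rank the still-unreviewed documents by predicted probability of relevance.
    ranked = sorted(unreviewed,
                    key=lambda i: model.predict_proba(vectors[i])[0, 1],
                    reverse=True)
    top = ranked[0]
    # In a real project the attorney reviews the top-ranked documents and codes them;
    # here a simple keyword test stands in for that human judgment.
    labels[top] = int("acquisition" in docs[top] or "merger" in docs[top])
    print(f"round {round_number + 1}: reviewed doc {top} -> coded {labels[top]}")
```

Each pass retrains the model on the attorney’s latest calls and pushes the most likely relevant documents to the front of the queue, which is the core of the hybrid approach described above.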

Multimodal is a reference to the use of a variety of search methods to find target documents, including, but not limited to, predictive coding type ranked searches. We encourage humans in the loop to run a variety of searches of their own invention, especially at the beginning of a project. This always makes for a quick start in finding relevant and hot documents. Why the ‘Google Car’ Has No Place in Legal Search. The multimodal approach also makes for precise, efficient reviews with broad scope. The latest active machine learning software, when fully integrated with a full suite of other search tools, is attaining higher levels of recall than ever before. That is one reason Why I Love Predictive Coding.

I have found that Kroll Ontrack’s EDR software is ideally suited for these Hybrid, Multimodal techniques. Try using it on your next large project and see for yourself. The Kroll Ontrack consultant specialists in predictive coding, Jim and Tony, have been trained in this method (and many others). They are well qualified to assist you every step of the way, and their rates are reasonable. With you calling the shots on relevancy, they can do most of the search work for you and still save your client money. If the matter is big and important enough, then, if I have a time opening and it clears my firm’s conflicts, I can also be brought in for a full turn-key operation. Whether you want to include extra time for training your best experts is your option, but it is our preference.


__________

Embrace e-Discovery Team Education to Escape Information Overload

____

