I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter, 2022). Footnotes omitted.
Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.
Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective. In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .
In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost.
Id. at 55-56.
Bias Against AI in Electronic Discovery
Electronic discovery is a good example of the regular preference for worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues.
Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (Peck, M.J.) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”).
Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015) (Peck, M.J.).
See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
Robophobia Article Is A First
Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front, and I believe it. The Article catalogs the different ways that we now misjudge poor algorithms. The evidence of our robophobia is overwhelming, but, before Professor Woods's work, it had all been in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.
His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”
Part II of the Article – Types of Robophobia
Professor Woods identifies five different types of robophobia.
Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy-looking human?
Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.
Part III – Explaining Robophobia
Professor Woods considers seven different explanations for robophobia.
Fear of the Unknown
Transparency Concerns
Loss of Control
Job Anxiety
Disgust
Gambling for Perfect Decisions
Overconfidence in Human Decisions
I’m keeping my discussion here short, since the explanations for most of these should be obvious by now and I want to limit the length of my blog. But the disgust explanation was not one I expected, and a short quote from Andrew Woods might be helpful, along with the robot photo I added.
Uncannily Creepy Robot
[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.
For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself. Uncanny Valley by Popular Science 2008 (old, but pretty disgusting). Here’s a more recent and detailed one, pretty good, by a popular twenty-something with pink hair. Why is this image creepy? by TUV 2022.
Parts IV and V – The Cases For and Against Robophobia
Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before, for why robophobia is bad for us. This is probably the heart of the article, and I suggest you read this part for sure.
Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:
Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.
In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.
We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.
Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]
The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods's policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.
This blog is the first part of my review of one of the most interesting law review articles I’ve read in a long time, Robophobia. Woods, Andrew K., Robophobia, 93 U. Colo. L. Rev. 51 (Winter, 2022). Robophobia provides the first in-depth analysis of human prejudice against smart computer technologies and its policy implications. Robophobia is the next generation of technophobia, now focusing on the human fear of replacing human decision makers with robotic ones. For instance, I love technology, but am still very reluctant to let an AI drive my car. My son, on the other hand, loves to let his Tesla take over and do the driving, and watches while my knuckles go white. Then he plays the car’s damn fart noises and other joke features and I relax. Still, I much prefer a human at the wheel. This kind of anxiety about advanced technology decision making is at the heart of the law review article.
Technophobia and its son, robophobia, are psychological anxieties that electronic discovery lawyers know all too well, often from first-hand experience working with other lawyers. This is especially true for those who work with active machine learning. Ediscovery lawyers tire of hearing that keyword search and predictive coding are not to be trusted, and that humans reviewing every document is the gold standard. Professor Woods goes into AI and ediscovery a little bit in Robophobia. He cites our friends Judge Andrew Peck, Maura Grossman, Doug Austin and others. But that is only a small part of this interesting technology policy paper. It argues that a central question now facing humanity is when and where to delegate decision-making authority to machines. That decision should be based on facts and reason, not on emotions and unconscious prejudices.
Ralph and Robot
To answer this central question we need to recognize and overcome our negative stereotypes and phobias about AI. Robots are not all bad. Neither are people. Both have special skills and abilities and both make mistakes. As should be mentioned right away, Professor Woods in Robophobia uses the term “robot” very broadly to include all kinds of smart algorithms, not just actual robots. We need to overcome our robot phobias. Algorithms are already better than people at a huge array of tasks, yet we reject them for not being perfect. This must change.
Robophobia is a decision-making bias that interferes with our ability to make sensible policy choices. The law should help society decide when and what kind of decisions should be delegated to the robots, balancing the risk of using a robot against the risk of not using one. In my view, we need to overcome this bias now, to delegate responsibly, so that society can survive the current danger of misinformation overload. See, e.g., my blog, Can Justice Survive the Internet? Can the World? It’s Not a Sure Thing. Look Up!
This meta-review article (a review of a law review) is written in three parts, each fairly short (for me), largely because the Robophobia article itself is over 16,000 words and has 308 footnotes. My meta-review will focus on the parts I know best, the use of artificial intelligence in electronic discovery. The summary will include my typical snarky remarks to keep you somewhat amused, and several cool quotes from Woods, all in an attempt to entice some of you to take the deep dive and read Professor Woods’ entire article. Robophobia is all online and free to access at the University of Colorado Law Review website.
Professor Andrew Woods
Andrew Keane Woods is a Professor of Law at the University of Arizona College of Law. He is a young man with an impressive background. First the academics, since, after all, he is a Professor:
Brown University, A.B. in Political Science, magna cum laude (2002);
Harvard Law School, J.D., cum laude (2007);
University of Cambridge, Ph.D. in Politics and International Studies (2012);
Stanford University, Postdoctoral Fellow in Cybersecurity (2012–2014).
As to writing, he has at least twenty law review articles and book chapters to his credit. Aside from Robophobia, some of the most interesting ones I see on his resume are:
Artificial Intelligence and Sovereignty, DATA SOVEREIGNTY ALONG THE SILK ROAD (Anupam Chander & Haochen Sun eds., Oxford University Press, forthcoming);
Keeping the Patient at the Center of Machine Learning in Healthcare, 20 AMERICAN JOURNAL OF BIOETHICS 54 (2020) (w/ Chris Robertson, Jess Findley, Marv Slepian);
Mutual Legal Assistance in the Digital Age, THE CAMBRIDGE HANDBOOK OF SURVEILLANCE LAW (Stephen Henderson & David Gray eds., Cambridge University Press, 2020);
Litigating Data Sovereignty, 128 YALE LAW JOURNAL 328 (2018).
Bottom line: Woods is a good researcher (of course he had help from a zillion law students, whom he names and thanks) and a deep thinker on AI, technology, privacy, politics and social policies. His opinions deserve our careful consideration. In my language, his insights can help us to move beyond mere information to genuine knowledge, perhaps even some wisdom. See, e.g., my prior blogs, Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); AI-Ethics: Law, Technology and Social Values (website).
Quick Summary of Robophobia
Bad Robot?
Robots – machines, algorithms, artificial intelligence – already play an important role in society. Their influence is growing very fast. Robots are already supplementing or even replacing some human judgments. Many are concerned with the fairness, accuracy, and humanity of these systems. This is rightly so. But, at this point, the anxiety about machine bias is crazy high. The concerns are important, but they almost always run in one direction. We worry about robot bias against humans. We do not worry about human bias against robots. Professor Woods shows that this is a critical mistake.
It is not an error because robots somehow inherently deserve to be treated fairly, although that may someday be true. It is an error because our bias against nonhuman deciders is bad for us humans. A great example Professor Woods provides is self-driving cars. It would be an obvious mistake to reject all self-driving cars merely because one causes a single fatal accident. Yet this is what happened, for a while at least, when an Uber self-driving car crashed into a pedestrian in Arizona. See, e.g., fn. 71 of Robophobia: Ryan Randazzo, Arizona Gov. Doug Ducey Suspends Testing of Uber Self-Driving Cars, Ariz. Republic (Mar. 26, 2018). This kind of one-sided perfection bias ignores the fact that humans cause forty thousand traffic fatalities a year, with an average of three deaths every day in Arizona alone. We tolerate enormous risk from our fellow humans, but almost none from machines. That is flawed, biased thinking. Yet, even rah-rah techno promoters like me suffer from it.
Ralph hoping a human driver shows up soon.
Professor Woods shows that there is a substantial literature documenting our bias against algorithms, but until now it has been ignored by legal scholars. That literature suggests that we routinely prefer worse-performing humans over better-performing robots. Woods points out that we do this on our roads, in our courthouses, in our military, and in our hospitals. As he puts it in his Highlights section, which precedes the Robophobia article itself, and which I am liberally paraphrasing in this Quick Summary: “Our bias against robots is costly, and it will only get more so as robots become more capable.”
Robophobia not only catalogs the many different forms of anti-robot bias that already exist, which he calls a taxonomy of robophobia, it also suggests reforms to curtail the harmful effects of that bias. Robophobia provides many good reasons to be less biased against robots. We should not be totally trusting, mind you, but less biased. It is in our own best interests. As Professor Woods puts it, “We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”
Note About “Robot” Terminology
Before we get too deep into Robophobia, we need to be clear about what Professor Woods means here. We need to define our terms. Woods does this in the first footnote where he explains as follows (HAL image added):
The article is concerned with human judgment of automated decision-makers, which include “robots,” “machines,” “algorithms,” or “AI.” There are meaningful differences between these concepts and important line-drawing debates to be had about each one. However, this Article considers them together because they share a key feature: they are nonhuman deciders that play an increasingly prominent role in society. If a human judge were replaced by a machine, that machine could be a robot that walks into the courtroom on three legs or an algorithm run on a computer server in a faraway building remotely transmitting its decisions to the courthouse. For present purposes, what matters is that these scenarios represent a human decider being replaced by a nonhuman one. This is consistent with the approach taken by several others. See, e.g., Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135 (2019) (bundling artificial intelligence and physical robots under the same moniker, “robots”); Jack Balkin, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217, 1219 (2017) (“When I talk of robots … I will include not only robots – embodied material objects that interact with their environment – but also artificial intelligence agents and machine learning algorithms.”); Berkeley Dietvorst & Soaham Bharti, People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error, 31 PSYCH. SCI. 1302, 1314 n.1 (2020) (“We use the term algorithm to describe any tool that uses a fixed step-by-step decision-making process, including statistical models, actuarial tables, and calculators.”). This grouping contrasts scholars who have focused explicitly on certain kinds of nonhuman deciders. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 529 (2015) (focusing on robots as physical, corporeal objects that satisfy the “sense-think-act” test as compared to, say, a “laptop with a camera”).
I told you Professor Woods was a careful scholar, but I wanted you to see for yourself with a full quote of footnote one. I promise to exclude footnotes and his many string cites going forward in this blog article, but I do intend to frequently quote his insightful, policy-packed language. Did you note his citation to Chief Justice Robots in his explanation of “robots”? I will end this first part of my review of Robophobia with a side excursion about the real Chief Justice Roberts. It provides a good example of irrational robot fears and insight into the Chief Justice himself, which is something I’ve been considering a lot lately. See, e.g., my recent article The Words of Chief Justice Roberts on JUDICIAL INTEGRITY Suggest the Supreme Court Should Step Away from the Precipice and Not Overrule ‘Roe v Wade’.
Chief Justice Roberts Told High School Graduates in 2018 to “Beware the Robots”
The Chief Justice gave a very short speech at his daughter’s private high school graduation. There he demonstrated a bit of robot anxiety, but did so in an interesting manner. It bears some examination before we get into the substance of Woods’ Robophobia article. For more background on the speech see, e.g., Debra Cassens Weiss, ‘Beware the robots,’ chief justice tells high school graduates (June 6, 2018). Here are the excerpted words of Chief Justice John Roberts:
Beware the robots! My worry is not that machines will start thinking like us. I worry that we will start thinking like machines. Private companies use artificial intelligence to tell you what to read, to watch and listen to, based on what you’ve read, watched and listened to. Those suggestions can narrow and oversimplify information, stifling individuality and creativity.
Any politician would find it very difficult not to shape his or her message to what constituents want to hear. Artificial intelligence can change leaders into followers. You should set aside some time each day to reflect. Do not read more, do not research more, do not take notes. Put aside books, papers, computers, telephones. Sit, perhaps just for a half hour, and think about what you’re learning. Acquiring more information is less important than thinking about the information you have.
Aside from the robot fear part, which was really just an attention-grabbing speech device, I could not agree more with his main point. We should move beyond mere information; we should take time to process the information and subject it to critical scrutiny. We should transform from mere information gatherers into knowledge makers. My point exactly in Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015). You could also compare this progression with an ediscovery example, moving from just keyword search to predictive coding.
Part Two of my review of Robophobia is coming soon. In the meantime, take a break and think about any fears you may have about AI. Everyone has some. Would you let the AI drive your car? Select your documents for production? Are our concerns about killer robots really justified, or maybe just the result of media hype? For more thoughts on this, see AI-Ethics.com. And yes, I’ll be Baaack.
It’s Mueller Time! I predict we will be hearing this call around the world for decades, including in boardrooms. Organizations will decide to investigate themselves on sensitive issues before the government does, or before someone sues them and triggers formal discovery. Not always, but sometimes, they will do so by appointing their own independent counsel to check on concerns. The Boards of tomorrow will not look the other way. If Robert Mueller himself later showed up at their door, they would be ready. They would thank their G.C. that they had already cleaned house.
Most companies that decide it is Mueller Time will probably not investigate themselves in the traditional “full calorie” Robert Mueller way, as good as that is. Instead, they will order a less expensive, AI-based investigation, a Mueller Lite. The “full calorie” traditional legal investigation is very expensive, slow and leaky. It involves many people and linear document review. The AI-assisted alternative, the Mueller Lite, will be more attractive because of its lower cost. It will still be an independent investigation, but will rely primarily on internal data and artificial intelligence, not expensive attorneys.
I call this E-Vestigations, for electronic investigations. It is a new type of legal service made possible by a specialized type of AI called “Predictive Coding” and newly perfected Hybrid Multimodal methods of machine training.
Mueller Lite E-Vestigations Save Money
Robert Mueller-style investigations typically cost millions and involve large teams of expensive professionals. AI-assisted investigations are cheap by comparison. That is because they emphasize company data and AI search of that data, mostly the communications, and need so few people to carry out. This new kind of investigation allows a company to quietly look into and take care of its own problems. The cost savings from avoiding litigation, and bad publicity, can be compelling. Plus, it is the right thing to do.
E-Vestigations will typically be a quarter the cost of a traditional Mueller-style paper investigation. It may even be far less than that. Project fees depend on the data itself (volume and “messiness”) and the “information need” of the client (simple or complex). The competitive pricing of the new service is one reason I predict it will explode in popularity. This kind of dramatic savings is possible because most of the time-consuming relevance sorting and document ranking work is delegated to the AI.
The computer “reads” or reviews at nearly the speed of light and is 100% consistent. But it has no knowledge of its own. An idiot savant. The AI cannot do anything without its human handlers and trainers. It is basically a learning machine designed to sort large collections of texts into binary sets, typically relevant or irrelevant.
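For the technically curious, the binary sorting described above can be sketched in a few lines of code. What follows is a toy illustration only, not any vendor's actual predictive-coding engine: a naive Bayes-style word-frequency scorer, trained on a handful of hypothetical labeled documents, that then sorts new text into relevant or irrelevant.

```python
from collections import Counter
import math

def train(labeled):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"relevant": Counter(), "irrelevant": Counter()}
    totals = Counter()
    for text, label in labeled:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score both classes with add-one smoothing; return the likelier label."""
    n = sum(totals.values())
    best, best_score = None, float("-inf")
    for label in counts:
        vocab = len(counts[label]) + 1
        size = sum(counts[label].values())
        score = math.log(totals[label] / n)  # class prior
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / (size + vocab))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical seed set: a human reviewer labels a few documents...
seed = [
    ("payment approved by the board", "relevant"),
    ("wire the payment offshore tonight", "relevant"),
    ("lunch menu for friday", "irrelevant"),
    ("parking garage closed for repairs", "irrelevant"),
]
counts, totals = train(seed)
# ...and the machine sorts the rest of the collection, consistently.
print(classify("approve the offshore payment", counts, totals))  # prints "relevant"
```

The point of the sketch is the division of labor: the human supplies every labeled example, and the machine only generalizes from those labels, at great speed and with perfect consistency, which is exactly why it is useless without its trainers.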
The human investigators read much slower and sometimes make mistakes (plus they like to get compensated), but they are absolutely indispensable. Someday the team of humans may get even smaller, but we are already down to around seven or fewer people per investigation. Compare that to the hundreds involved in a traditional Mueller-style document review.
Proactive “Peace of Mind” Investigations
This new legal service allows concerned management to proactively investigate upon the first indications of possible wrongdoing. It gives management greater assurance that it really knows what is going on in the organization. Management or the Board retains an independent team of legal experts to conduct the quick E-Vestigation. The team provides subject matter expertise on the suspected problem and uses active machine learning to quickly search and analyze the data. They search for preliminary indications of what happened, if anything. This kind of search is ideal for sensitive legal inquiries. It gives management the information it needs without breaking the bank or publicizing the results.
This New Legal Service Is Built Around AI
E-Vestigations are a pre-litigation legal service that relies heavily on artificial intelligence, but not entirely. Investigations like this are very complex. They are nowhere near a fully automated process, and, as mentioned, the AI is really just a learning machine that knows nothing except how to learn document relevance. The service still needs legal experts, just a much smaller team.
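The "learning machine" framing above is, at its core, an active learning loop: the machine ranks the unreviewed documents, asks its human trainer to label the ones it is least sure about, and retrains on the answers. Here is a minimal sketch of one such round; the scorer and the human stand-in are hypothetical toys, not any product's API.

```python
def active_learning_round(scorer, unlabeled, ask_human, batch=2):
    """One train-rank-review cycle: surface the documents the current
    model is least certain about and send them to a human reviewer."""
    # Scores near 0.5 mean "unsure"; rank most-uncertain first.
    ranked = sorted(unlabeled, key=lambda d: abs(scorer(d) - 0.5))
    to_review = ranked[:batch]
    labels = [(d, ask_human(d)) for d in to_review]   # human supplies truth
    remaining = [d for d in unlabeled if d not in to_review]
    return labels, remaining

# Hypothetical stand-ins: a crude scorer and a reviewer who knows the answer.
def toy_scorer(doc):
    return 0.9 if "payment" in doc else (0.5 if "wire" in doc else 0.1)

def toy_human(doc):
    return "relevant" if ("payment" in doc or "wire" in doc) else "irrelevant"

docs = ["wire instructions", "payment memo", "lunch menu", "wire transfer log"]
labels, remaining = active_learning_round(toy_scorer, docs, toy_human)
print(labels)      # the two "wire" documents, the model's weakest calls
print(remaining)   # confident documents wait for a later round
```

Each round spends scarce attorney time only where the machine is weakest, which is why a seven-person team can do what once took hundreds of linear reviewers.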
AI assisted investigations such as E-Vestigations have five compelling positive traits:
Cost
Speed
Stealthiness
Confidentiality
Accuracy.
This article introduces the new service, discusses these five positive traits and provides background for my prediction that many organizations will order AI-assisted investigations in the coming years. In fact, due to the disappearing trial, I predict that E-Vestigations will someday take the lead from Litigation in many law firms. This prediction of the future, like most, requires a preliminary journey into the past, to see the longer causal line of events. That comes next, but feel free to skip the next three sections until you come to the heading, What is an E-Vestigation?
King Litigation Is Dead
The glory days of litigation are over. All trial lawyers who, like me, have lived through the last forty years of legal practice have seen it change dramatically. Litigation has moved from a trial and discovery practice, where we saw each other daily in court, to a discovery, motion and mediation practice where we communicate by email and occasional calls.
Although some “trial dogs” will not admit it, we all know that the role of trials has greatly diminished in modern practice. Everything settles. Ninety-nine percent (99%) of federal court civil cases settle without trial. Although my current firm is a large specialty practice, and so is an exception, in most law firms trials are very rare. A so-called “Trial Practice” of a major firm could go years without having an actual trial. I have seen it happen in many law firms. Good lawyers for sure, but they do not really “do trials,” they do trial preparation.
For example, when I started practicing law in 1980, “dispute resolution” was king in most law firms. It was called the “Litigation Department” and usually attracted the top legal talent. It brought in strong revenue and big clients. Every case in the top firms was either a “Bet the Farm” type or a little case for kiddie-lawyer training; we had no form practice. Friedmann & Brown, “Bet the Farm” Versus “Law Factory”: Which One Works? (Geeks and Law, 2011).
The opposite, “Commodity Litigation,” was rare; typically just something for some divorce lawyers, PI lawyers, criminal lawyers and bankruptcy lawyers. These were not the desired specialties in the eighties, to put it mildly. Factory-like practices did not pay that well (honest ones anyway) and were boring to most graduates of decent law schools. This had not changed much until recently, when AI made certain Commodity practices far more interesting and desirable. See Joshua Kubicki, The Emerging Competitive Frontier in Biglaw is Practice Venturing (Medium, 1/24/19).
Aside from the less desirable Commodity practice law firms, most litigators in the eighties would routinely take a case to trial. Fish or Cut Bait was a popular saying. Back then Mediation was virtually unknown. Although a majority of cases did eventually settle, a large minority did not. That meant physically going to court, wearing suits and ties every day, and verbal sparring. Lots of arguments and talk about evidence. Sometimes it meant some bullying and physical pushing too, if truth be told. It was a rough and tumble legal world in the eighties, made up in many parts of the U.S. almost entirely of white men. Many were smokers, including the all-white bench.
Ah, the memories. Some of the Litigation attorneys were real jerks, to put it mildly. But only a few were suspected of being crooked and could not be trusted. Most were honest and could be. We policed our own and word got around about a bad apple pretty fast. Their careers in town were then soon over, one way or the other. Many would just move away or, if they had roots, become businessmen. There were trials aplenty on both the criminal and civil sides. The trials could be dramatic spectacles. The big cases were intense.
Emergence of Mediation
But the times were a-changing. In the nineties and the first decade of the 21st Century, trials quickly disappeared. Instead, Mediation started to take over. I know, I was in the first group of lawyers ever to be certified as a Mediator of High Technology disputes in 1989. All types of cases began to settle earlier and with less preparation. I have seen cases settle at Mediation where none of the attorneys knew the facts. They just knew what their clients told them. Even more often, only one side was prepared and knew the facts. The other was just “shooting from the hip.”
At trial the unprepared were quickly demolished by the facts, the evidence. At Mediation you can get away with it. The evidence is often just one side’s contention. Why bother to learn the record when you can just BS your way through a mediation? The truth is what I say it is, nothing more. There is no cross-exam. Mediation is a “liar’s heaven,” although a good mediator can plow through that.
What happened to all the Trial Lawyers you might ask? Many became Mediators, including several of my good friends. A few started specializing in Mediation advocacy, where psychodrama and math are king (typically division). Mediation has become the everyday “Commodity” practice and trials are now the “Bet the Farm” rarity.
With less than one percent of federal cases going to trial, it is a complete misnomer to keep calling ourselves Trial Lawyers. I know I have stopped calling myself that. Like it or not, that is reality. Our current system is designed to settle. It has become a relativistic opinion fest. It is not designed to determine final, binding objective truth. It is not designed to make findings of fact. It is instead designed to mediate ever more ingenious ways to split the baby. We no longer focus on the evidence, on the objective truth of what happened. We have lost our way.
Justice without Truth is Destabilizing
Justice without Truth is a mockery of Justice, a Post-Modern mockery at that, one where everything is relative. This is called Subjectivism, where one person’s truth is as good as another’s. All is opinion.
This relativistic kind of thinking was, and still is in most Universities, the dominant belief among academics. Truth is supposed to be relative and subjective, not objective, unless it happens to be science. Hard science is supposed to have a monopoly on objectivity. Unfortunately, this relativistic way of thinking has had some unintended consequences. It has led to the kind of political instability that we see in the U.S. today. That is the basic insight of a new book by Pulitzer Prize winner, Michiko Kakutani. The Death of Truth: Notes on Falsehood in the Age of Trump (Penguin, 2018). Also see Hanlon, Postmodernism didn’t cause Trump. It explains him. (Washington Post, 9/2/18).
Truth is truth. It is not just what the company with the biggest wallet says it is. Objective truth, the facts based on hard evidence, is real; it is not mere opinion. The video ad below by CNN was cited by Kakutani in her Death of Truth. It makes the case for objectivity in a simple, common sense manner. The political overtones are obvious.
There is a place for the insights of Post-Modern Subjectivism, especially as it concerns religion. But for now the objective-subjective pendulum has swung too far into the subjective. The pause between directions is over and it is starting to swing back. Facts and truth are becoming important again. This point in legal history will, I predict, be marked by the Mueller investigation. Evidence is once again starting to sing in our justice system. It is singing the body electric. The era of E-Vestigations has begun!
What are E-Vestigations?
E-Vestigations are confidential, internal investigations that focus on search of client data and metadata. They use Artificial Intelligence to search and retrieve information relevant to the client’s requested investigation, their information need. We use an AI machine training method that we call Hybrid Multimodal Predictive Coding 4.0. The basic search method is explained in the open-sourced TAR Course, but the Course does not detail how the method can be used in this kind of investigation.
E-Vestigation is done outside of Litigation and court involvement, usually to try to anticipate and avoid Litigation. Are the rumors true, or are the allegations just a bogus attempt to extort a settlement? E-Vestigations are by nature private, confidential investigations, not responses to formal discovery. AI Assisted investigations rely primarily on what the data says, not the rumors and suspicions, or even what some people say. The analysis of vast volumes of ESI is possible, even with millions of files, because e-Vestigations use Artificial Intelligence, both passive and active machine learning. Otherwise, the search of large volumes of ESI takes too long and is too prone to inaccuracies. That is the main reason this approach is far less expensive than traditional “full calorie” Mueller-type investigations.
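The active machine learning that makes this possible can be illustrated with a toy continuous-training loop: seed the system with known relevant documents, rank the unreviewed documents, have a reviewer judge the top-ranked one, and fold each decision back into the model. This is a minimal pure-Python sketch for illustration only, not our actual Predictive Coding 4.0 software; the centroid scorer, the tiny corpus, and the simulated reviewer are all hypothetical stand-ins.

```python
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,") for w in text.split()]

def score(doc_tokens, rel_centroid, irr_centroid):
    # similarity to the relevant centroid minus the irrelevant centroid
    return sum(rel_centroid[t] - irr_centroid[t] for t in doc_tokens)

def cal_review(docs, is_relevant, seed_ids, rounds):
    """Toy continuous active learning loop: each round, rank the
    unreviewed documents, 'review' the top-ranked one, and retrain."""
    rel, irr = Counter(), Counter()
    labeled = set()
    for i in seed_ids:            # seed the model with known relevant docs
        rel.update(tokenize(docs[i]))
        labeled.add(i)
    found = list(seed_ids)
    for _ in range(rounds):
        unlabeled = [i for i in range(len(docs)) if i not in labeled]
        if not unlabeled:
            break
        best = max(unlabeled, key=lambda i: score(tokenize(docs[i]), rel, irr))
        labeled.add(best)
        if is_relevant(best):     # simulated human review decision
            rel.update(tokenize(docs[best]))
            found.append(best)
        else:
            irr.update(tokenize(docs[best]))
    return found
```

In a real review the ranking model would be a trained classifier and the reviewer a human subject matter expert; here both are simulated so the loop runs end-to-end, which is why the human reviewers need only see a small, high-ranked fraction of the collection.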
The goal of E-Vestigation is to find quick answers based on the record. Interviews may not be required in many investigations and when they are, they are quick and, to the interviewee, mysterious. The answers to the information needs of a client are sometimes easily found. Sometimes you may just find the record is silent as to the issue at hand, but that silence itself often speaks volumes.
The findings and report made at the end of the E-Vestigation may clear up suspicion, or it may trigger a deeper, more detailed investigation. Sometimes the communications and other information found may require an immediate, more drastic response. One way or another, knowing provides the client with legitimate peace of mind.
The electronic evidence in most cases will be so overwhelming (we know what you said, to whom and when) that testimony will be superfluous, a formality. (We have your communications, we know what you did, we just need you to clear up a few details and help us understand how it ties into guys further up the power chain. That help will earn you a lenient plea deal.) This is what is happening right now, January 2019, with the investigation of Robert Mueller.
Defendants in criminal cases will still plea out, but based on the facts, on truth, not threats. Defendants in civil cases will do the same. So will the plaintiff in civil cases who makes unsubstantiated allegations. Facts and truth protect the innocent. Most of that information will be uncovered in computer systems. In the right hands, E-Vestigations can reveal all. It is a proactive alternative to Litigation with expensive settlements. The AI data review features of E-Vestigations make it far less expensive than a Mueller investigation. Is it Mueller Time for your organization?
Robert Mueller never need ask a question of a witness to which he does not already know the answer based on what the record says. The only real question is whether the witness will further compound their problems by lying. They often do. I have seen that several times in depositions of parties in civil cases. It is sheer joy and satisfaction for the questioner to watch the ethically challenged party sink into the questioner’s hidden traps. The “exaggerating witness” will often smile, just slightly, thinking they have you fooled, just like their own attorney. You smile back knowing their lies are now of record and they have just pounded another nail into their coffin.
E-Vestigations may lead to confrontation, even arrest, if the investigation confirms suspicions. In civil matters it may lead to employee discharge or accusations against a competitor. It may lead to an immediate out-of-court settlement. In criminal matters it may lead to indictment and an informed plea and sentencing. It may also lead to Litigation in civil matters with formal, more comprehensive discovery, but at least the E-Vestigating party will have a big head start. They will know the facts. They will know what specific information to ask for from the opposing side.
Eventually, civil suits will not be filed that often, except to memorialize a party’s agreement, such as a consent to a judgment. It will, instead, be a world where information needs are met in a timely manner and Litigation is thereby avoided. A world where, if management needs to know something, such as whether so and so is a sexual predator, they can find out, fast. A world where AI in the hands of a skilled legal team can mine internal data-banks, such as very large collections of employee emails and texts, and find hidden patterns. It may find what was suspected or may lead to surprise discoveries.
The secret mining of data, otherwise known as “reading other people’s emails without their knowledge,” may seem like an egregious breach of privacy, but it is not, at least not in the U.S. under the computer use policies of most groups. Employees typically consent to this search as a condition of employment or computer use. Usually the employer owns all of the equipment searched. The employee has no ownership or privacy rights in the business-related communications of the employer.
The use of AI assistants in investigations limits the exposure of irrelevant information to humans. First, only a few people are involved in the investigation at all because the AI does the heavy lifting. Second, the human reviewers are outside of the organization. Third, the AI does almost all of the document review. Only the AI reads all of the communications, not the lawyers. The humans look at far less than one percent of the data searched in most projects. They spend most of their time in study of the documents the AI has already identified as likely relevant.
The approach of limited investigations, of going in and out of user data only to search in separate, discrete investigations, provides maximum confidentiality to the users. The alternative, which some organizations have already adopted, is constant surveillance by AI of all communications. You can predict future behavior that way, to a point and within statistical limitations of accuracy. The police in some cities are already using constant AI surveillance to predict crimes and allocate resources accordingly.
I find this kind of constant monitoring to be distasteful. For me, it is too Big Brother and oppressive to have AI looking at my every email. It stifles creativity and, I imagine, if this was in place, would make me overly cautious in my communications. Plus, I would be very concerned about software error. If some baby AI is always on, always looking for suspicious patterns, it could make mistakes. The programming of the software almost certainly contains a number of hidden biases of the programmers, typically young white dudes.
The one-by-one investigation approach advocated here provides for more privacy protection. With E-Vestigations the surveillance is focused and time limited. It is not general and ongoing.
Five Virtues of E-Vestigations
Although I am not going to go into the proprietary details here of our E-Vestigations service (contact me through my law firm if you want to know more), I do want to share what I think are the five most important traits of our AI (robotic) assisted reviews: economics, confidentiality, stealth, speed and accuracy.
Confidentiality:
Complete Secrecy.
Artificial Intelligence means fewer people are required.
Employee Privacy Rights Respected.
Data need never leave corporate premises using specialized tools from our vendor.
Attorney-Client Privilege & Work Product protected.
Stealthiness:
Under the Radar Investigation.
Only some in client IT need know.
Sensitive projects. Discreet.
Stealth forensic copy and review of employee data.
Attorneys review off-site, unseen, via encrypted online connection.
Private interviews; only where appropriate.
Speed:
Techniques designed for quick results, early assessments.
Informal, high-level investigations. Not Litigation Discovery.
High Speed Document Review with AI help.
Example: Study of Clinton’s email server (62,320 files, 30,490 disclosed – 55,000 pgs.) is, at most, a one-week project with a first report after one day.
Accuracy:
Objective Findings and Analysis.
Independent Position.
Specialized Expertise.
Answers provided with probability range limitations.
Known Unknowns (Rumsfeld).
Clients are impressed with the cost of E-Vestigations, as compared to traditional investigations. That is important, of course, but the speed of the work is what impresses many. We produce results, use a flat fee to get there, and do so very FAST.
Certainly we can move much faster than the FBI reviewing email using its traditional methods of expert linear review. The Clinton email investigations took forever by our standards. Yet, Clinton’s email server had only 62,320 files, of which 30,490 were disclosed (around 55,000 pages.) This is, at most, a one-week E-Vestigations project with a first report after one day. Our projects are much larger. They involve review of hundreds of thousands of emails, or hundreds of millions. It does not make a big difference in cost because the AI, who works for free, is doing the heavy lifting of actual studying of all this text.
Most federal agencies, including the FBI, do not have the software, the search knowledge, nor the attorney skills for this new type of AI assisted investigation. They also do not have the budget to acquire good AI to assist. Take a look at this selection from the official FBI collection of Clinton email and note that the FBI and the US Attorney’s office in Alexandria, Virginia were communicating by fax in September 2015!
State and federal government agencies are not properly funded and cannot compete with private industry compensation. The NSA may well have an A-Team for advanced search, but not the other agencies. As we know, the NSA has their hands full just trying to keep track of the Russians and other enemies interfering with our elections, not to mention the criminals and terrorists.
Unintended Consequence of Mediation Was to Insert Subjectivism into the Law
As discussed, the rise and commoditization of Mediation over the last twenty years has had unintended consequences. The move from the courtroom to the mediator’s office in turn caused the Law to move from objective to subjective opinion. Discussion of the consequences of mediation, and the subjectivist attitude it brings, complicates my analysis of the death of Litigation, but is necessary. Litigation did not turn into private investigation work. One did not flow into another. Litigation is not changing directly into private Investigations, AI assisted or not. Mediation, and its unexpected consequences, is the intervening stage.
1. Litigation → 2. Mediation → 3. AI Assisted Investigations
Mediation brought down Litigation, at least the all-important Trial part of Litigation, not AI or private investigations. There is never a judge making rulings at a mediation. There are only attorneys and their assertions of what happened. Somebody must be lying, but with Mediation you never know who. Lawyers found they could settle cases without all that. They did not need the judge at all. At mediation there are no findings of fact, no rulings of law, just droll agreements as to who will pay how much to whom.
The next stage I predict of AI Assisted Investigations is filling a gap caused by the unintended consequence of Mediation. Mediation was never intended to spawn AI Assisted Investigations, no such thing even existed. It was not possible. We did not have the technology to do something like this. The forces driving the advent of AI Assisted Investigations, which I call E-Vestigations, have little to do with Mediation directly, but are instead the result of rapid advances in technology.
Mediation was intended to encourage settlement and reduce expensive trials. It has been wildly successful at that; exceeded all expectations. But this surprise success has also led to unexpected negative consequences. It has led to a new subjectivistic attitude in Litigation. It has led to the decline of evidence and an over-relativistic attitude where Truth was dethroned.
Most of my Mediator friends strongly disagree, but I have never heard a compelling argument to the contrary. The death of the trial is a stunning development. But mediation has had another impact. One that I have not seen discussed previously. It has not only killed trials, it has killed the whole notion of objective truth. It has led to a mediation mind-set where the “merits” are just a matter of opinion. Where cost of defense and the time value of money are the main items of discussion.
That foreseeable defect has led to the unforeseeable development of an AI Assisted alternative to Litigation. It has led to E-Vestigations. AI can now be used to help lawyers investigate and quickly find out the true facts of a situation.
Many lawyers who litigate today do not care what “really happened.” Very post-modern of them, but come on. A few lawyers just blindly believe whatever damned fool thing their client tells them. Most just say we will never know the absolute truth anyway, so let us just try our best to resolve the dispute and do what’s fair without any test of the evidence. They try to do justice with just a passing nod to the evidence, to the truth of what happened. I am not a fan. It goes against all of my core teachings as a young commercial litigation attorney who prepared and tried cases. It goes against my core values and beliefs. My opinion is that it is not all just opinion, that there is truth.
I object to that mediation, relativistic approach. After a life in the Law chasing smoking guns and taking depositions, I know for a fact that witnesses lie, that their memories are unreliable, all too human. But I also know that the writings made by and to these same witnesses often expose the lies, or, more charitably put, expose the errors in human memory. Fraudsters are human and almost always make mistakes. It is an investigator’s job to check the record to find the slip-ups in the con. (I dread the day when I have to try to trace an AI fraudster!)
I have been chasing and exposing con-men most of my adult life. I defended a few too. In my experience the truth has a way of finding its way out.
This is not an idealistic dream in today’s world of information floods. There is so much information, the real difficulty is in finding the important bits, the smoking guns, the needles. The evidence is usually there, but not yet found. The real challenge today is not in gathering the evidence, it is in searching for the key documents, finding the signal in the noise.
Conclusion
Objective accounts of what happened in the past are not only possible, they are probable in today’s Big Data world. Your Alexa or Google speakers may have part of the record. So too may your iWatch or Fitbit. Soon your refrigerator will too. Data is everywhere. Privacy is often an illusion. (Sigh.) The opportunity of liars and other scoundrels to “get away with it” and fool people is growing smaller every day. Fortunately, if lawyers can just learn a few new evidence search skills, they can use AI to help them find the information they need.
Juries and judges, for the most part, believe in objective truth. They are quite capable of sorting through competing versions and getting at the truth. Good judges and lawyers (and jurors) can make sure that happens.
As mentioned, many academics and sophisticates believe otherwise, that there is no such a thing as objective truth. They believe instead in Relativism. They are wrong.
The postmodernist argument that all truths are partial (and a function of one’s perspective) led to the related argument that there are many legitimate ways to understand or represent an event. . . .
Without commonly agreed-upon facts — not Republican facts and Democratic facts; not the alternative facts of today’s silo-world — there can be no rational debate over policies, no substantive means of evaluating candidates for political office, and no way to hold elected officials accountable to the people. Without truth, democracy is hobbled. The founders recognized this, and those seeking democracy’s survival must recognize it today.
It is possible to find the truth, objective truth. All is not just opinion and allegations. Accurate forensic reconstruction is possible today in ways that we could never have imagined before. So is AI assisted search. The record of what is happening grows larger every day. That record, written electronically at the time of the events in question, is far more reliable than our memories. We can find the truth, but for that we need to look primarily to the documents, not the testimony. That is not new. That is wisdom upon which almost all trial lawyers agree.
The truth is attainable, but requires dedication and skilled efforts by everyone on a legal team to find it. It requires knowledge of course, and a proven method, but also impartiality, discipline, intelligence and a sense of empathy. It requires experience with what the AI can do, and just as important, what it cannot do. It requires common sense. Lawyers have that. Jurors have that.
Surely only a weak-minded minority are fooled by today’s televised liars. Most competent trial lawyers could persuade a sequestered jury to convict them. And convict they will, but that still will not cause a rebirth of Litigation. Its glory days are over. So too is its killer, Mediation, although its death will take longer (Mediation may not even have peaked yet).
Evidence speaks louder than any skilled mediator. Let the truth be told. Let the chips fall where they may. King Litigation is dead. Long live the new King, confidential, internal AI assisted E-Vestigations.
A Salt Lake City Court braved the TAR pits to decide a “transparency” issue. Entrata, Inc. v. Yardi Systems, Inc., Case No. 2:15-cv-00102 (D. Utah, 10/29/18). The results were predictable for TAR, which is usually dark. The requesting party tried to compel the respondent to explain their TAR. Tried to force them to disclose the hidden metrics of Recall and Richness. The motion was too little, too late, and was denied. The TAR pits of Entrata remain dark. Maybe TAR was done well, maybe not. For all we know the TAR was done by Sponge Bob Square Pants using Bikini Bottom software. We may never know.
Due to the Sponge Bobby type motion leading to an inevitable denial, the requesting party, Yardi Systems, Inc., remains in dark TAR. Yardi still does not know whether the respondent, Entrata, Inc., used active machine learning. Maybe they used a new kind of Bikini Bottom software nobody has ever heard of? Maybe they used KL’s latest software? Or Catalyst? Maybe they did keyword search and passive analytics and never used machine training at all? Maybe they threw darts for search and used Adobe for review? Maybe they ran a series of random and judgmental samples for quality control and assurance? Maybe the results were good? Maybe not?
The review by Entrata could have been a very well managed project. It could have had many built-in quality control activities. It could have been an exemplar of Hybrid Multimodal Predictive Coding 4.0. You know, the method we perfected at NIST’s TREC Total Recall Track? The one that uses the more advanced IST, instead of simple CAL? I am proud to talk about these methods all day and how they worked out on particular projects. The whole procedure is transparent, even though disclosure of all metrics and tests is not. These measurements are anyway secondary to method. Yardi’s motion to compel disclosure should not have been so focused on a recall and richness number. It should instead have focused on methods. The e-Discovery Team methods are spelled out in detail in the TAR Course. Maybe that is what Entrata followed? Probably not. Maybe, God forbid, Entrata used random driven CAL? Maybe the TAR was a classic Sponge Bob Square Pants production of shame and failure? Now Yardi will never know. Or will they?
Yardi’s Quest for True Facts is Not Over
About the only way the requesting party, Yardi, can possibly get TAR disclosure in this case now is by proving the review and production made by Entrata was negligent, or worse, done in bad faith. That is a difficult burden. The requesting party has to hope they find serious omissions in the production to try to justify disclosure of method and metrics. (At the time of this order production by Entrata had not been made.) If expected evidence is missing, then this may suggest a record cleansing, or it may prove that nothing like that ever happened. Careful investigation is often required to know the difference between a non-existent unicorn and a rare, hard to find albino.
Remember, the producing party here, the one deep in the secret TAR, was Entrata, Inc. They are Yardi Systems, Inc.’s rival software company and defendant in this case. This is a bitter case with history. It is hard for attorneys not to get involved in a grudge match like this. Looks like strong feelings on both sides with a plentiful supply of distrust. Yardi is, I suspect, highly motivated to try to find a hole in the ESI produced, one that suggests negligent search, or worse, intentional withholding by the responding party, Entrata, Inc. At this point, after the motion to compel TAR method was denied, that is about the only way that Yardi might get a second chance to discover the technical details needed to evaluate Entrata’s TAR. The key question driven by Rule 26(g) is whether reasonable efforts were made. Was Entrata’s TAR terrible or terrific? Yardi may never know.
What about Yardi’s discovery? Do they have clean hands? Did Yardi do as good a job at ESI search as Entrata? (Assuming that Yardi used TAR too.) How good was Yardi’s TAR? (Had to ask that!) Was Yardi’s TAR as tardy as its motion? What were the metrics of Yardi’s TAR? Was it dark too? The opinion does not say what Yardi did for its document productions. To me that matters a lot. Cooperation is a mutual process. It is not capitulation. The same goes for disclosure. Do not come to me demanding disclosure but refusing to reciprocate.
How to Evaluate a Responding Party’s TAR?
Back to the TAR at issue. Was Entrata’s TAR riddled with errors? Did they oppose Yardi’s motion because they did a bad job? Was this whole project a disaster? Did Entrata know they had driven into a TAR pit? Who was the vendor? What software was used? Did it have active machine learning features? How were they used? Who was in charge of the TAR? What were their qualifications? Who did the hands-on review? What problems did they run into? How were these problems addressed? Did the client assist? Did the SMEs?
Perhaps the TAR was sleek and speedy and produced the kind of great results that many of us expect from active machine learning. Did sampling suggest low recall? Or high recall? How was the precision? How did this change over the rounds of training? The machine training was continuous, correct? The “seed-set nonsense” was not used, was it? You did not rely on a control set to measure results, did you? You accounted for natural concept drift, didn’t you, where the understanding of relevance changes over the course of the review? Did you use ei-Recall statistical sampling at the end of the project to test your work? Was a “Zero Error” policy followed for the omission of Highly Relevant documents, as I recommend? Are corrective supplemental searches now necessary to try to find missing evidence that is important to the outcome of the case? Do we need to force them to use an expert? Require that they use the state of the art standard, the e-Discovery Team’s Predictive Coding 4.0 Hybrid Multimodal IST?
Yardi’s motion was weak and tardy so Entrata, Inc. could defend its process simply by keeping it secret. This is the work-product defense approach. This is NOT how I would have defended a TAR process. Or rather, not the only way. I would have objected to interference, but also made controlled, limited disclosures. I would have been happy, even proud to show what state of the art search looks like. I would introduce our review team, including our experts, and provide an overview of the methods, the work-flow.
I would also have demanded reciprocal disclosures. What method, what system did you use? TAR is an amazing technology, if used correctly. If used improperly, TAR can be a piece of junk. How did the Subject Matter Experts in this case control the review? Train the machine? Is that a scary ghost in the machine or just a bad SME?
How did Entrata do it? How for that matter did the requesting party, Yardi, do it? Did it use TAR as part of its document search? Is Yardi proud of its TAR? Or is Yardi’s TAR as dark and hardy har har as Entrata’s TAR. Are all the attorneys and techs walking around with their heads down and minds spinning with secret doc review failures?
e-Discovery Team reviews routinely exceed minimal reasonable efforts; we set the standards of excellence. I would have made reasonable reassurances by disclosure of method. That builds trust. I would have pointed them to the TAR Course and the 4.0 methods. I would have sent them the below eight-step work-flow diagram. I would have told them that we follow these eight steps, or if any deviations were expected, explained why.
I would have invited opposing counsel to participate in the process with any suggested keywords, hot documents to use to train. I would even allow them to submit model fictitious training documents. Let them create fake documents to help us to find any real ones that might be like it, no matter what specific words are used. We are not trying to hide anything. We are trying to find all relevant documents. All relevant documents will be produced, good or bad. Repeat that often. Trust is everything. You can never attain real cooperation without it. Trust but verify. And clarify by talk. Talk to your team, your client, witnesses, opposing counsel and the judge. That is always the first step.
Of course, I would not spend unlimited time going over everything. I dislike meetings and talking to people who have little or no idea what I am saying. Get your own expert. Do the work. These big document review projects often go on for weeks and you could waste a small fortune with too much interaction and disclosure. I don’t want people looking over my shoulder and I don’t want to reveal all of my tricks and work-product secrets, just the general stuff you could get by study of my books. I would have drawn some kind of line of secrecy in the sand, hopefully not quicksand, so that our disclosures were reasonable and not over-burdensome. In Entrata the TAR masters doing the search did not want to reveal much of anything. They were very distrustful of Yardi and perhaps sensed a trap. More on that later. Or maybe Entrata did have something to hide? How do we get at the truth of this question without looking at all of the documents ourselves? That is very difficult, but one way to get at the truth is to look at the search methods used, the project metadata.
The dark TAR defense worked for Entrata, but do not count on it working in your case. The requesting party might not be tardy like Yardi. They might make a far better argument.
Order Affirming Denial of Motion to Compel Disclosure of TAR
The well-written opinion in Entrata, Inc. v. Yardi Systems, Inc., (D. Utah, 10/29/18) was by United States District Judge Clark Waddoups. Many other judges have gone over this transparency issue before and Judge Waddoups has a good summary of the ones cited to him by the moving party. I remember tackling these transparency issues with Judge Andrew Peck in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), which is one of the cases that Judge Waddoups cites. At that time, 2012, there was no precedent even allowing Predictive Coding, much less discussing details of its use, including disclosure best-practices. We made strong efforts of cooperation on the transparency issues after Judge Peck approved predictive coding. Judge Peck was an expert in TAR and very involved in the process. That kind of cooperation, which a very active judge can encourage, did not happen in Entrata. The cooperative process failed. That led to a late motion by the Plaintiff to force disclosure of the TAR.
The plaintiff, Yardi Systems, Inc., is the party that requested ESI from defendants in this software infringement case. It wanted to know how the defendant was using TAR to respond to its request. Plaintiff’s motion to compel focused on disclosure of the statistical analysis of the results, Recall and Prevalence (aka Richness). That was another mistake. Statistics alone can be meaningless and misleading, especially if range is not considered, including the binomial adjustment for low prevalence. This is explained and covered by my ei-Recall test. Introducing “ei-Recall” – A New Gold Standard for Recall Calculations in Legal Search – Part One, Part Two and Part Three (e-Discovery Team, 2015). Also see: In Legal Search Exact Recall Can Never Be Known.
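To illustrate why a bare recall or prevalence number can mislead, here is a simplified sketch of the kind of binomial interval calculation that underlies the point. This is not the ei-Recall method itself, just a generic Wilson score interval on a hypothetical elusion sample; all of the numbers (the 1,500-document sample, the 3 missed documents) are invented for illustration.

```python
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (95% default).

    Unlike the naive hits/n point estimate, the interval behaves sensibly
    at low prevalence, where the point estimate alone is most misleading.
    """
    if n == 0:
        return (0.0, 1.0)
    p = hits / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical elusion test: a 1,500-document random sample of the
# discard pile turns up 3 relevant documents the review missed.
sample_size = 1500
missed_found = 3
low, high = wilson_interval(missed_found, sample_size)
print(f"Elusion point estimate: {missed_found / sample_size:.4f}")
print(f"95% interval: {low:.4f} to {high:.4f}")
```

The takeaway is that the interval is several times wider than the point estimate at prevalence this low, which is why demanding a single recall number, as the movant did here, proves very little either way.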
Disclosure of the whole process, the big picture, is the best Defense Of Process evidence, not just a couple of random sample test results. Looks like the requesting party here might have just been seeking “gotcha material” by focusing so much on the recall numbers. That may be another unstated reason both the Magistrate and District Court Judges denied their late request for disclosure. That could also be why the attorneys for Entrata kept their TAR dark, even though they were not negligent or in bad faith. Maybe they were proud of their efforts, but were tired of bad faith hassling by the requesting party. Hard to know based on this opinion alone.
After Chief Magistrate Judge Warner denied Yardi’s motion to compel, Yardi appealed to the District Court Judge Waddoups and argued that the Magistrate’s order was “clearly erroneous and contrary to law.” Yardi argued that “the Federal Rules of Civil Procedure and case law require Entrata, in the first instance, to provide transparent disclosures as a requirement attendant to its use of TAR in its document review.”
Please, that is not an accurate statement of the governing legal precedent. It was instead “wishful thinking” on the part of plaintiff’s counsel. Sounds like a Sponge Bob Square Pants move to me. Judge Waddoups considered taxing fees against the plaintiff under Rule 37(a)(5)(B) because of this near frivolous argument, but ultimately let them off by finding the position was not “wholly meritless.”
Judge Waddoups had no choice but to deny a motion like this filed under these procedures. Here is a key paragraph explaining his reasoning for denial.
The Federal Rules of Civil Procedure assume cooperation in discovery. Here, the parties never reached an agreement regarding search methodology. In the court’s view, the lack of any agreement regarding search methodology is a failure on the part of both parties. Nevertheless, Yardi knew, as early as May of 2017, that Entrata intended to use TAR. (See ECF No. 257-1 at 2.) The Magistrate Court’s September 20, 2017 Order stated, in part, that “[i]f the parties are unable to agree on . . . search methodology within 30 days of the entry of this Order, the parties will submit competing proposals . . . .” (ECF No. 124 at 2.) Yardi, as early as October 2, 2017, knew that “Entrata [was] refus[ing] to provide” “TAR statistics.” (See ECF No. 134 at 3.) In other words, Yardi knew that the parties had not reached an agreement regarding search methodology well before the thirty day window closed. Because Yardi knew that the parties had not reached an agreement on search methodology, it should have filed a proposal with the Magistrate Court. This would have almost certainly aided in resolving this dispute long before it escalated. But neither party filed any proposal with the Magistrate Court within 30 days of entry of its Order. Yardi has not pointed to any Federal Rule of Civil Procedure demonstrating that the Magistrate Court’s Order was contrary to law. This court rejects Yardi’s argument relating to the Federal Rules of Civil Procedure.
Conclusion
The requesting party in Entrata did not meet the high burden needed to reverse a magistrate’s discovery ruling as clearly erroneous and contrary to law. If you are ever going to win on a motion like this, it will likely be at the Magistrate level. Seeking to overturn a denial and meet this burden to reverse is extremely difficult, perhaps impossible in cases seeking to compel TAR disclosure. The whole point is that there is no clear law on the topic yet. We are asking judges to make new law, to establish new standards of transparency. You must be open and honest to attain this kind of new legal precedent. You must use great care to be accurate in any representations of Fact or Law made to a court. Tell them it is a case of first impression when the precedent is not on point, as was the situation in Entrata, Inc. v. Yardi Systems, Inc., Case No. 2:15-cv-00102 (D. Utah, 10/29/18). Tell them the good and the bad. There has never been a perfect case, and there always has to be a first for anything. Legal precedent moves slowly, but it moves continuously. It is our job as practicing attorneys to try to guide that change.
The requesting party seeking disclosure of TAR methods in Entrata doomed their argument by case law misstatements and inaction. They might have succeeded by making full disclosures themselves, both of the law and their own TAR. The focus of their argument should have been on the benefits of doing TAR right and the dangers of doing it wrong. They should have talked more about what TAR – Technology Assisted Review – really means. They should have stressed cooperation and reciprocity.
To make new precedent in this area you must first recognize and explain away a number of opposing principles, especially The Sedona Conference Principle Six. That Principle says responding parties know best and requesting parties should stay out of their document reviews. I have written about this Principle and why it should be updated. Losey, Protecting the Fourteen Crown Jewels of the Sedona Conference in the Third Revision of its Principles (e-Discovery Team, 2/2/17). The Sedona Principle Six argument is just one of many successful defenses that can be used to protect against forced TAR disclosure. There are also good arguments based on the irrelevance of this search information to claims or defenses under Rule 26(b)(1) and under work-product confidentiality protection.
Any party who would like to force another to make TAR disclosure should make such voluntary disclosures themselves. Walk your talk to gain credibility. The disclosure argument will only succeed, at least for the first time (the all-important test case), in the context of proportional cooperation. An extended 26(f) conference is a good setting and time. Work-product confidentiality issues should be raised in the first days of discovery, not the last day. Timing is critical.
The 26(f) discovery conference dialogue should be directed towards creating a uniform plan for both sides. This means the TAR disclosures should be reciprocal. The ideal test case to make this law would be a situation where the issue is decided early at a Rule 16(b) hearing. It would involve a situation where one side is willing to disclose, but the other is not, or where the scope of disclosures is disputed. At the 16(b) hearing, which usually takes place in the first three months, the judge is supposed to consider the parties’ Rule 26(f) report and address any discovery issues raised, such as TAR method and disclosures.
The first time disclosure is forced by a judge it will almost certainly be a mutual obligation. Each side would be required to assume the same disclosure obligations. This could include a requirement for statistical sampling and disclosure of certain basic metrics, such as Recall range, Prevalence and Precision. Sampling tests like this can be run no matter what search method is used, even little old keyword search.
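To show how search-method-agnostic these sampling tests are, here is a minimal sketch of estimating prevalence from a random control sample and deriving a recall point estimate from it. Every number is hypothetical (the 100,000-document collection, the 8,000 produced documents, the 95 responsive sample hits), and the same arithmetic applies whether the production came from TAR or from keyword search.

```python
def estimate_prevalence(sample_labels: list[bool]) -> float:
    """Point estimate of prevalence (richness) from a random sample.

    Each label is True if the sampled document was judged responsive.
    """
    return sum(sample_labels) / len(sample_labels)

# Hypothetical matter: 100,000-document collection, 8,000 responsive
# documents produced. A 1,000-document random sample of the whole
# collection is manually reviewed; suppose 95 are judged responsive.
collection_size = 100_000
produced_responsive = 8_000
sample = [True] * 95 + [False] * 905

prevalence = estimate_prevalence(sample)                  # 0.095
estimated_total_responsive = prevalence * collection_size # ~9,500
recall_estimate = produced_responsive / estimated_total_responsive

print(f"Prevalence estimate: {prevalence:.3f}")
print(f"Recall point estimate: {recall_estimate:.2%}")
```

As the earlier discussion of ei-Recall stresses, a point estimate like this should always be reported as a range with confidence intervals, not as a single number; the sketch above shows only the mechanics of why a control sample makes the metric possible for any review method.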
It is near impossible to come into court when both sides have extensive ESI and demand that your opponent do something that you yourself refuse to do. If you expect to be able to force someone to use TAR, or to disclose basic TAR methods and metrics, then you had better be willing to do that yourself. If you are going to try to force someone to disclose work-product protected information, such as an attorney’s quality control tests for Recall range in document review, then you had better make such a limited waiver yourself.
Ralph Losey is a Friend of AI with over 740,000 LLM Tokens, Writer, Commentator, Journalist, Lawyer, Arbitrator, Special Master, and Practicing Attorney as a partner in LOSEY PLLC, a high-tech oriented law firm started by Ralph's son, Adam Losey. We handle major "bet the company" type litigation, special tech projects, deals, IP of all kinds all over the world, plus other tricky litigation problems all over the U.S. For more details of Ralph's background, Click Here
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children, Eva Losey Grossman, and Adam Losey, a lawyer with incredible litigation and cyber expertise (married to another cyber expert lawyer, Catherine Losey), and best of all, husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.