Robophobia: Great New Law Review Article – Part 2

May 26, 2022
Professor Andrew Woods

This article is Part Two of my review of Robophobia by Professor Andrew Woods. See here for Part 1.

I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Footnotes omitted.

Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders. 

Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective.  In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – [*56] computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .

In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost. 

Woods, Id. at pgs. 55-56

Bias Against AI in Electronic Discovery

Electronic discovery is a good example of the regular preference for worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues.

  • Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
  • Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
  • Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (Peck, M.J.) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”).
  • Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
  • Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
  • David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
  • Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
  • Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015) (Peck, M.J.).
  • See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
  • Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
  • Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).

Robophobia Article Is A First

Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front, and I believe it. The Article catalogs the different ways that we now misjudge the poor algorithms. The evidence of our robophobia is overwhelming, but before Professor Woods’s work it had all been in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.

His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”

Part II of the Article – Types of Robophobia

Professor Woods identifies five different types of robophobia.

  • Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
  • Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
  • Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
  • Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy-looking human?
  • Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.

Part III – Explaining Robophobia

Professor Woods considers seven different explanations for robophobia.

  • Fear of the Unknown
  • Transparency Concerns
  • Loss of Control
  • Job Anxiety
  • Disgust
  • Gambling for Perfect Decisions
  • Overconfidence in Human Decisions

I’m limiting my review here, since the explanations for most of these should be obvious by now and I want to keep this blog post from getting too long. But the disgust explanation was not one I expected, and a short quote from Andrew Woods might be helpful, along with the robot photo I added.

Uncannily Creepy Robot

[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.

For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself. Uncanny Valley by Popular Science 2008 (old, but pretty disgusting). Here’s a more recent and detailed one, pretty good, by a popular twenty-something with pink hair. Why is this image creepy? by TUV 2022.

Parts IV and V – The Cases For and Against Robophobia

Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before, on why robophobia is bad for us. This is probably the heart of the article, and I suggest you read this part for sure.

Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:

Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.

In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.

We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.

Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]


The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods’s policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.




Transparency in a Salt Lake TAR Pit?

November 11, 2018

A Salt Lake City Court braved the TAR pits to decide a “transparency” issue. Entrata, Inc. v. Yardi Systems, Inc., Case No. 2:15-cv-00102 (D. Utah, 10/29/18). The results were predictable for TAR, which is usually dark. The requesting party tried to compel the respondent to explain their TAR. Tried to force them to disclose the hidden metrics of Recall and Richness. The motion was too little, too late, and was denied. The TAR pits of Entrata remain dark. Maybe TAR was done well, maybe not. For all we know the TAR was done by Sponge Bob Square Pants using Bikini Bottom software. We may never know.

Due to the Sponge Bob type motion leading to an inevitable denial, the requesting party, Yardi Systems, Inc., remains in dark TAR. Yardi still does not know whether the respondent, Entrata, Inc., used active machine learning. Maybe they used a new kind of Bikini Bottom software nobody has ever heard of? Maybe they used KL’s latest software? Or Catalyst? Maybe they did keyword search and passive analytics and never used machine training at all? Maybe they threw darts for search and used Adobe for review? Maybe they ran a series of random and judgmental samples for quality control and assurance? Maybe the results were good? Maybe not?

The review by Entrata could have been a very well managed project. It could have had many built-in quality control activities. It could have been an exemplar of Hybrid Multimodal Predictive Coding 4.0. You know, the method we perfected at NIST’s TREC Total Recall Track? The one that uses the more advanced IST, instead of simple CAL? I am proud to talk about these methods all day and how it worked out on particular projects. The whole procedure is transparent, even though disclosure of all metrics and tests is not. These measurements are, in any event, secondary to method. Yardi’s motion to compel disclosure should not have been so focused on a recall and richness number. It should instead have focused on methods. The e-Discovery Team methods are spelled out in detail in the TAR Course. Maybe that is what Entrata followed? Probably not. Maybe, God forbid, Entrata used random driven CAL? Maybe the TAR was a classic Sponge Bob Square Pants production of shame and failure? Now Yardi will never know. Or will they?

Yardi’s Quest for True Facts is Not Over

About the only way the requesting party, Yardi, can possibly get TAR disclosure in this case now is by proving the review and production made by Entrata was negligent, or worse, done in bad faith. That is a difficult burden. The requesting party has to hope they find serious omissions in the production to try to justify disclosure of method and metrics. (At the time of this order, production by Entrata had not yet been made.) If expected evidence is missing, then this may suggest a record cleansing, or it may prove that nothing like that ever happened. Careful investigation is often required to know the difference between a non-existent unicorn and a rare, hard-to-find albino.

Remember, the producing party here, the one deep in the secret TAR, was Entrata, Inc. They are Yardi Systems, Inc.’s rival software company and the defendant in this case. This is a bitter case with history. It is hard for attorneys not to get involved in a grudge match like this. Looks like strong feelings on both sides, with a plentiful supply of distrust. Yardi is, I suspect, highly motivated to try to find a hole in the ESI produced, one that suggests negligent search, or worse, intentional withholding by the responding party, Entrata, Inc. At this point, after the motion to compel TAR method was denied, that is about the only way that Yardi might get a second chance to discover the technical details needed to evaluate Entrata’s TAR. The key question driven by Rule 26(g) is whether reasonable efforts were made. Was Entrata’s TAR terrible or terrific? Yardi may never know.

What about Yardi’s discovery? Do they have clean hands? Did Yardi do as good a job at ESI search as Entrata? (Assuming that Yardi used TAR too.) How good was Yardi’s TAR? (Had to ask that!) Was Yardi’s TAR as tardy as its motion? What were the metrics of Yardi’s TAR? Was it dark too? The opinion does not say what Yardi did for its document productions. To me that matters a lot. Cooperation is a mutual process. It is not capitulation. The same goes for disclosure. Do not come to me demanding disclosure but refusing to reciprocate.

How to Evaluate a Responding Party’s TAR?

Back to the TAR at issue. Was Entrata’s TAR riddled with errors? Did they oppose Yardi’s motion because they did a bad job? Was this whole project a disaster? Did Entrata know they had driven into a TAR pit? Who was the vendor? What software was used? Did it have active machine learning features? How were they used? Who was in charge of the TAR? What were their qualifications? Who did the hands-on review? What problems did they run into? How were these problems addressed? Did the client assist? Did the SMEs?

Perhaps the TAR was sleek and speedy and produced the kind of great results that many of us expect from active machine learning. Did sampling suggest low recall? Or high recall? How was the precision? How did this change over the rounds of training? The machine training was continuous, correct? The “seed-set nonsense” was not used, was it? You did not rely on a control set to measure results, did you? You accounted for natural concept drift, didn’t you, where the understanding of relevance changes over the course of the review? Did you use ei-Recall statistical sampling at the end of the project to test your work? Was a “Zero Error” policy followed for the omission of Highly Relevant documents, as I recommend? Are corrective supplemental searches now necessary to try to find missing evidence that is important to the outcome of the case? Do we need to force them to use an expert? Require that they use the state-of-the-art standard, the e-Discovery Team’s Predictive Coding 4.0 Hybrid Multimodal IST?

Yardi’s motion was weak and tardy so Entrata, Inc. could defend its process simply by keeping it secret. This is the work-product defense approach. This is NOT how I would have defended a TAR process. Or rather, not the only way. I would have objected to interference, but also made controlled, limited disclosures. I would have been happy, even proud to show what state of the art search looks like. I would introduce our review team, including our experts, and provide an overview of the methods, the work-flow.

I would also have demanded reciprocal disclosures. What method, what system did you use? TAR is an amazing technology, if used correctly. If used improperly, TAR can be a piece of junk. How did the Subject Matter Experts in this case control the review? Train the machine? Is that a scary ghost in the machine or just a bad SME?

How did Entrata do it? How, for that matter, did the requesting party, Yardi, do it? Did it use TAR as part of its document search? Is Yardi proud of its TAR? Or is Yardi’s TAR as dark and hardy har har as Entrata’s TAR? Are all the attorneys and techs walking around with their heads down and minds spinning with secret doc review failures?

e-Discovery Team reviews routinely exceed minimal reasonable efforts; we set the standards of excellence. I would have made reasonable reassurances by disclosure of method. That builds trust. I would have pointed them to the TAR Course and the 4.0 methods. I would have sent them the below eight-step work-flow diagram. I would have told them that we follow these eight steps, or, if any deviations were expected, explained why.

I would have invited opposing counsel to participate in the process with any suggested keywords, hot documents to use to train. I would even allow them to submit model fictitious training documents. Let them create fake documents to help us to find any real ones that might be like it, no matter what specific words are used. We are not trying to hide anything. We are trying to find all relevant documents. All relevant documents will be produced, good or bad. Repeat that often. Trust is everything. You can never attain real cooperation without it. Trust but verify. And clarify by talk. Talk to your team, your client, witnesses, opposing counsel and the judge. That is always the first step.

Of course, I would not spend unlimited time going over everything. I dislike meetings and talking to people who have little or no idea what I am saying. Get your own expert. Do the work. These big document review projects often go on for weeks, and you could waste a small fortune with too much interaction and disclosure. I don’t want people looking over my shoulder and I don’t want to reveal all of my tricks and work-product secrets, just the general stuff you could get by study of my books. I would have drawn some kind of line of secrecy in the sand, hopefully not quicksand, so that our disclosures were reasonable and not over-burdensome. In Entrata the TAR masters doing the search did not want to reveal much of anything. They were very distrustful of Yardi and perhaps sensed a trap. More on that later. Or maybe Entrata did have something to hide? How do we get at the truth of this question without looking at all of the documents ourselves? That is very difficult, but one way to get at the truth is to look at the search methods used, the project metadata.

The dark TAR defense worked for Entrata, but do not count on it working in your case. The requesting party might not be tardy like Yardi. They might make a far better argument.

Order Affirming Denial of Motion to Compel Disclosure of TAR

The well-written opinion in Entrata, Inc. v. Yardi Systems, Inc., (D. Utah, 10/29/18) was by United States District Judge Clark Waddoups. Many other judges have gone over this transparency issue before, and Judge Waddoups has a good summary of the ones cited to him by the moving party. I remember tackling these transparency issues with Judge Andrew Peck in Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), which is one of the cases that Judge Waddoups cites. At that time, 2012, there was no precedent even allowing Predictive Coding, much less discussing details of its use, including disclosure best-practices. We made strong efforts of cooperation on the transparency issues after Judge Peck approved predictive coding. Judge Peck was an expert in TAR and very involved in the process. That kind of cooperation, which can be encouraged by a very active judge, did not happen in Entrata. The cooperative process failed. That led to a late motion by the Plaintiff to force disclosure of the TAR.

The plaintiff, Yardi Systems, Inc., is the party who requested ESI from defendants in this software infringement case. It wanted to know how the defendant was using TAR to respond to its request. Plaintiff’s motion to compel focused on disclosure of the statistical analysis of the results, Recall and Prevalence (aka Richness). That was another mistake. Statistics alone can be meaningless and misleading, especially if range is not considered, including the binomial adjustment for low prevalence. This is explained and covered by my ei-Recall test. Introducing “ei-Recall” – A New Gold Standard for Recall Calculations in Legal Search, Part One, Part Two and Part Three (e-Discovery Team, 2015). Also see: In Legal Search Exact Recall Can Never Be Known.
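
To make the idea of a recall range concrete, here is a simplified Python sketch of an elusion-based recall estimate. It is only illustrative of the general approach, not the actual ei-Recall formula in the articles cited above; the function names and sample counts are hypothetical, and a Wilson score interval is used here as a convenient stand-in for the exact binomial calculation those articles describe.

```python
import math

def wilson_interval(k, n, z=1.96):
    # 95% Wilson score confidence interval for a binomial proportion k/n
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return max(0.0, center - half), min(1.0, center + half)

def recall_range(produced_relevant, discard_size, sample_size, sample_relevant):
    # Project false negatives from an elusion sample of the discard pile,
    # then convert the interval on the elusion rate into a recall range.
    lo_rate, hi_rate = wilson_interval(sample_relevant, sample_size)
    fn_high = hi_rate * discard_size   # pessimistic false-negative projection
    fn_low = lo_rate * discard_size    # optimistic projection
    recall_low = produced_relevant / (produced_relevant + fn_high)
    recall_high = produced_relevant / (produced_relevant + fn_low)
    return recall_low, recall_high

# Hypothetical project: 9,000 relevant docs produced, 100,000 discarded,
# and a 1,534-document random sample of the discards found 15 relevant.
low, high = recall_range(9000, 100000, 1534, 15)
```

Notice how wide the resulting range is with a low-prevalence discard pile. That is exactly why a single recall point estimate, standing alone, can mislead a court.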

Disclosure of the whole process, the big picture, is the best Defense Of Process evidence, not just a couple of random sample test results. It looks like the requesting party here might have just been seeking “gotcha material” by focusing so much on the recall numbers. That may be another unstated reason both the Magistrate and District Court Judges denied their late request for disclosure. That could be why the attorneys for Entrata kept their TAR dark, even though they were not negligent or in bad faith. Maybe they were proud of their efforts, but were tired of bad-faith hassling by the requesting party. Hard to know based on this opinion alone.

After Chief Magistrate Judge Warner denied Yardi’s motion to compel, Yardi appealed to the District Court Judge Waddoups and argued that the Magistrate’s order was “clearly erroneous and contrary to law.”  Yardi argued that “the Federal Rules of Civil Procedure and case law require Entrata, in the first instance, to provide transparent disclosures as a requirement attendant to its use of TAR in its document review.”

Please, that is not an accurate statement of the governing legal precedent. It was instead “wishful thinking” on the part of plaintiff’s counsel. Sounds like a Sponge Bob Square Pants move to me. Judge Waddoups considered taxing fees against the plaintiff under Rule 37(a)(5)(B) because of this near frivolous argument, but ultimately let them off by finding the position was not “wholly meritless.”

Judge Waddoups had no choice but to deny a motion like this filed under these procedures. Here is a key paragraph explaining his reasoning for denial.

The Federal Rules of Civil Procedure assume cooperation in discovery. Here, the parties never reached an agreement regarding search methodology. In the court’s view, the lack of any agreement regarding search methodology is a failure on the part of both parties. Nevertheless, Yardi knew, as early as May of 2017, that Entrata intended to use TAR. (See ECF No. 257-1 at 2.) The Magistrate Court’s September 20, 2017 Order stated, in part, that “[i]f the parties are unable to agree on . . . search methodology within 30 days of the entry of this Order, the parties will submit competing proposals . . . .” (ECF No. 124 at 2.) Yardi, as early as October 2, 2017, knew that “Entrata [was] refus[ing] to provide” “TAR statistics.” (See ECF No. 134 at 3.) In other words, Yardi knew that the parties had not reached an agreement regarding search methodology well before the thirty day window closed. Because Yardi knew that the parties had not reached an agreement on search methodology, it should have filed a proposal with the Magistrate Court. This would have almost certainly aided in resolving this dispute long before it escalated. But neither party filed any proposal with the Magistrate Court within 30 days of entry of its Order. Yardi has not pointed to any Federal Rule of Civil Procedure demonstrating that the Magistrate Court’s Order was contrary to law. This court rejects Yardi’s argument relating to the Federal Rules of Civil Procedure.

Conclusion

The requesting party in Entrata did not meet the high burden needed to reverse a magistrate’s discovery ruling as clearly erroneous and contrary to law. If you are ever going to win on a motion like this, it will likely be at the Magistrate level. Seeking to overturn a denial and meet this burden to reverse is extremely difficult, perhaps impossible in cases seeking to compel TAR disclosure. The whole point is that there is no clear law on the topic yet. We are asking judges to make new law, to establish new standards of transparency. You must be open and honest to attain this kind of new legal precedent. You must use great care to be accurate in any representations of Fact or Law made to a court. Tell them it is a case of first impression when the precedent is not on point, as was the situation in Entrata, Inc. v. Yardi Systems, Inc., Case No. 2:15-cv-00102 (D. Utah, 10/29/18). Tell them the good and the bad. There has never been a perfect case, and there always has to be a first for anything. Legal precedent moves slowly, but it moves continuously. It is our job as practicing attorneys to try to guide that change.

The requesting party seeking disclosure of TAR methods in Entrata doomed their argument by case law misstatements and inaction. They might have succeeded by making full disclosures themselves, both of the law and their own TAR. The focus of their argument should have been on the benefits of doing TAR right and the dangers of doing it wrong. They should have talked more about what TAR – Technology Assisted Review – really means. They should have stressed cooperation and reciprocity.

To make new precedent in this area you must first recognize and explain away a number of opposing principles, including especially The Sedona Conference Principle Six. That says responding parties always know best and requesting parties should stay out of their document reviews. I have written about this Principle and why it should be updated. Losey, Protecting the Fourteen Crown Jewels of the Sedona Conference in the Third Revision of its Principles (e-Discovery Team, 2/2/17). The Sedona Principle Six argument is just one of many successful defenses that can be used to protect against forced TAR disclosure. There are also good arguments based on the irrelevance of this search information to claims or defenses under Rule 26(b)(2), and under work-product confidentiality protection.

Any party who would like to force another to make TAR disclosure should make such voluntary disclosures themselves. Walk your talk to gain credibility. The disclosure argument will only succeed, at least for the first time (the all-important test case), in the context of proportional cooperation. An extended 26(f) conference is a good setting and time. Work-product confidentiality issues should be raised in the first days of discovery, not the last day. Timing is critical.

The 26(f) discovery conference dialogue should be directed towards creating a uniform plan for both sides. This means the TAR disclosures should be reciprocal. The ideal test case to make this law would be a situation where the issue is decided early at a Rule 16(b) hearing. It would involve a situation where one side is willing to disclose, but the other is not, or where the scope of disclosures is disputed. At the 16(b) hearing, which usually takes place in the first three months, the judge is supposed to consider the parties’ Rule 26(f) report and address any discovery issues raised, such as TAR method and disclosures.

The first time disclosure is forced by a judge it will almost certainly be a mutual obligation. Each side would be required to assume the same disclosure obligations. This could include a requirement for statistical sampling and disclosure of certain basic metrics such as Recall range, Prevalence and Precision. Sampling tests like this can be run no matter what search method is used, even little old keyword search.
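
For readers unfamiliar with these three metrics, here is a short Python sketch of how each would be estimated from a coded random sample. The function name and the sample counts are hypothetical illustrations, not any court-ordered protocol; real projects would also report confidence intervals, as discussed above.

```python
def sample_metrics(tp, fp, fn, tn):
    # From a coded random sample of the collection:
    #   tp/fp: sampled docs the review retrieved (relevant / not relevant)
    #   fn/tn: sampled docs the review did not retrieve (relevant / not relevant)
    total = tp + fp + fn + tn
    prevalence = (tp + fn) / total   # estimated share of the collection that is relevant
    recall = tp / (tp + fn)          # share of relevant documents the review found
    precision = tp / (tp + fp)       # share of retrieved documents that are relevant
    return prevalence, recall, precision

# Hypothetical 1,000-document sample coded against the review's results
prev, rec, prec = sample_metrics(tp=90, fp=30, fn=10, tn=870)
# prev = 0.10, rec = 0.90, prec = 0.75
```

Because point estimates like these swing with the luck of the sample draw, any disclosure order should call for ranges, not single numbers.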

It is near impossible to come into court when both sides have extensive ESI and demand that your opponent do something that you yourself refuse to do. If you expect to be able to force someone to use TAR, or to disclose basic TAR methods and metrics, then you had better be willing to do that yourself. If you are going to try to force someone to disclose work-product protected information, such as an attorney’s quality control tests for Recall range in document review, then you had better make such a limited waiver yourself.

 


Do TAR the Right Way with “Hybrid Multimodal Predictive Coding 4.0”

October 8, 2018

The term “TAR” – Technology Assisted Review – as we use it means document review enhanced by active machine learning. Active machine learning is an important tool of specialized Artificial Intelligence. It is now widely used in many industries, including Law. The method of AI-enhanced document review we developed is called Hybrid Multimodal Predictive Coding 4.0. Interestingly, reading these words in the new Sans Forgetica font will help you to remember them.

We have developed an online instructional program to teach our TAR methods and AI-infused concepts to all kinds of legal professionals. We use words, case law, science, diagrams, math, statistics, test results and appeals to reason to teach the methods. To balance that out, we also make extensive use of photos and videos. We use right-brain tools of all kinds, even subliminals, special fonts, hypnotic images and loads of hyperlinks. We use emotion as another teaching tool. Logic and Emotion. Sorry Spock, but this multimodal, holistic approach is more effective with humans than the all-text, reason-only approach of Vulcan law schools.

We even try to use humor and promote student creativity with our homework assignments. Please remember, however, this is not an accredited law school class, so do not expect professorial interaction. Did we mention the TAR Course is free?

By the end of study of the TAR Course you will know and remember exactly what Hybrid Multimodal means. You will understand the importance of using all varieties of legal search, for instance: keywords, similarity searches, concept searches and AI driven probable relevance document ranking. That is the Multimodal part. We use all of the search tools that our KL Discovery document review software provides.

 

The Hybrid part refers to the partnership with technology, the reliance of the searcher on the advanced algorithmic tools. It is important that Man and Machine work together, but that Man remain in charge of justice. The predictive coding algorithms and software are used to enhance the abilities of lawyers, paralegals and law techs, not replace them.

By the end of the TAR Course you will also know what IST means, literally Intelligently Spaced Training. It is our specialty technique of AI training where you keep training the Machine until first pass relevance review is completed. This is a type of Continuous Active Learning, or as Grossman and Cormack call it, CAL. By the end of the TAR Course you should also know what a Stop Decision is. It is a critical point of the document review process. When do you stop the active machine teaching process? When is enough review enough? This involves legal proportionality issues, to be sure, but it also involves technological processes, procedures and measurements. What is good enough Recall under the circumstances with the data at hand? When should you stop the machine training?

We can teach you the concepts, but this kind of deep knowledge of timing requires substantial experience. In fact, refining the Stop Decision was one of the main tasks we set for ourselves in the e-Discovery Team experiments in the Total Recall Track of the National Institute of Standards and Technology Text Retrieval Conference in 2015 and 2016. We learned a lot in our two years. I do not think anyone has spent more time studying this in both scientific and commercial projects than we have. Kudos again to KL Discovery for helping to sponsor this kind of important research by the e-Discovery Team.
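
The continuous training loop and the Stop Decision described above can be sketched in a few lines of Python. This is only a toy illustration of a continuous active learning loop with a simple low-yield stopping rule; the `cal_review` function, the `model` interface, and the `StubModel` are all hypothetical stand-ins, and the real IST method, with its proportionality analysis and quality-control sampling, is far richer than any fixed threshold.

```python
def cal_review(model, corpus, review, batch_size=10, patience=2):
    # Rank unreviewed docs, have a human review the top batch, retrain, repeat.
    # Stop Decision (toy version): quit after `patience` consecutive
    # low-yield batches (< 5% relevant), then verify with sampling.
    labeled, found, lean_batches = {}, 0, 0
    while lean_batches < patience:
        ranked = model.rank(corpus, exclude=labeled)  # highest probable relevance first
        batch = ranked[:batch_size]
        if not batch:
            break
        hits = 0
        for doc in batch:
            labeled[doc] = review(doc)   # human relevance call: 1 or 0
            hits += labeled[doc]
        found += hits
        lean_batches = lean_batches + 1 if hits / len(batch) < 0.05 else 0
        model.fit(labeled)               # continuous (re)training
    return labeled, found

class StubModel:
    # Toy stand-in for predictive coding software: fixed scores, no learning.
    def __init__(self, scores):
        self.scores = scores
    def rank(self, corpus, exclude):
        return sorted((d for d in corpus if d not in exclude),
                      key=lambda d: -self.scores[d])
    def fit(self, labeled):
        pass

# 50 documents; only the first 10 are relevant, and the stub scores them highest.
corpus = list(range(50))
model = StubModel({d: (1.0 if d < 10 else 0.0) for d in corpus})
labeled, found = cal_review(model, corpus, review=lambda d: 1 if d < 10 else 0)
# found == 10: the loop stops after two consecutive zero-yield batches
```

In real projects the stopping threshold is not a fixed number; it is a legal proportionality judgment informed by sampling, which is exactly why the Stop Decision takes experience to get right.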

 

 

Working with AI like this for evidence gathering is a newly emerging art. Take the TAR Course and learn the latest methods. We divide the Predictive Coding work flow into eight-steps. Master these steps and related concepts to do TAR the right way.

 

Pop Quiz: What is one of the most important considerations on when to train again?

One Possible Correct Answer: The schedule of the humans involved. Logistics and project planning are always important for efficiency. Flexibility is easy to attain with the IST method. You can easily accommodate schedule changes and make it as easy as possible for humans and “robots” to work together. We do not literally mean robots, but rather refer to the advanced software and the AI that arises from the machine training as an imaginary robot.
