Robophobia: Great New Law Review Article – Part 1

May 19, 2022

This blog is the first part of my review of one of the most interesting law review articles I’ve read in a long time: Andrew K. Woods, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Robophobia provides the first in-depth analysis of human prejudice against smart computer technologies and its policy implications. Robophobia is the next generation of technophobia, now focused on the human fear of replacing human decision makers with robotic ones. For instance, I love technology, but am still very reluctant to let an AI drive my car. My son, on the other hand, loves to let his Tesla take over and do the driving while my knuckles go white. Then he plays the car’s damn fart noises and other joke features and I relax. Still, I much prefer a human at the wheel. This kind of anxiety about advanced technology decision making is at the heart of the law review article.

Technophobia and its offspring, robophobia, are psychological anxieties that electronic discovery lawyers know all too well, often from first-hand experience working with other lawyers. This is especially true for those who work with active machine learning. Ediscovery lawyers tire of hearing that keyword search and predictive coding are not to be trusted, and that humans reviewing every document is the gold standard. Professor Woods goes into AI and ediscovery a little bit in Robophobia. He cites our friends Judge Andrew Peck, Maura Grossman, Doug Austin and others. But that is only a small part of this interesting technology policy paper. It argues that a central question now facing humanity is when and where to delegate decision-making authority to machines. That decision should be made based on facts and reason, not on emotions and unconscious prejudices.

Ralph and Robot

To answer this central question we need to recognize and overcome our negative stereotypes and phobias about AI. Robots are not all bad. Neither are people. Both have special skills and abilities and both make mistakes. As should be mentioned right away, Professor Woods in Robophobia uses the term “robot” very broadly to include all kinds of smart algorithms, not just actual robots. We need to overcome our robot phobias. Algorithms are already better than people at a huge array of tasks, yet we reject them for not being perfect. This must change.

Robophobia is a decision-making bias that interferes with our ability to make sensible policy choices. The law should help society decide when, and what kinds of, decisions should be delegated to robots, balancing the risk of using a robot against the risk of not using one. In my view, we need to overcome this bias now, and learn to delegate responsibly, so that society can survive the current danger of misinformation overload. See, e.g., my blog, Can Justice Survive the Internet? Can the World? It’s Not a Sure Thing. Look Up!

This meta-review (a review of a law review article) is written in three parts, each fairly short (for me), largely because the Robophobia article itself is over 16,000 words and has 308 footnotes. My meta-review will focus on the parts I know best, the use of artificial intelligence in electronic discovery. The summary will include my typical snarky remarks to keep you somewhat amused, and several choice quotes from Woods, all in an attempt to entice some of you to take the deep dive and read Professor Woods’ entire article. Robophobia is online and free to access at the University of Colorado Law Review website.

Professor Andrew Woods


Andrew Keane Woods is a Professor of Law at the University of Arizona College of Law. He is a young man with an impressive background. First the academics, since, after all, he is a professor:

  • Brown University, A.B. in Political Science, magna cum laude (2002);
  • Harvard Law School, J.D., cum laude (2007);
  • University of Cambridge, Ph.D. in Politics and International Studies (2012);
  • Stanford University, Postdoctoral Fellow in Cybersecurity (2012–2014).

As to writing, he has at least twenty law review articles and book chapters to his credit. Aside from Robophobia, some of the most interesting ones I see on his resume are:

  • Artificial Intelligence and Sovereignty, DATA SOVEREIGNTY ALONG THE SILK ROAD (Anupam Chander & Haochen Sun eds., Oxford University Press, forthcoming);
  • Internet Speech Will Never Go Back to Normal (with Jack Goldsmith), THE ATLANTIC (Apr. 25, 2020);
  • Our Robophobia, LAWFARE (Feb. 19, 2020);
  • Keeping the Patient at the Center of Machine Learning in Healthcare, 20 AMERICAN JOURNAL OF BIOETHICS 54 (2020) (with Chris Robertson, Jess Findley, Marv Slepian);
  • Mutual Legal Assistance in the Digital Age, THE CAMBRIDGE HANDBOOK OF SURVEILLANCE LAW (Stephen Henderson & David Gray eds., Cambridge University Press, 2020);
  • Litigating Data Sovereignty, 128 YALE LAW JOURNAL 328 (2018).

Bottom line, Woods is a good researcher (of course he had help from a zillion law students, whom he names and thanks) and a deep thinker on AI, technology, privacy, politics and social policy. His opinions deserve our careful consideration. In my language, his insights can help us move beyond mere information to genuine knowledge, perhaps even some wisdom. See, e.g., my prior blogs, Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); AI-Ethics: Law, Technology and Social Values (website).

Quick Summary of Robophobia

Bad Robot?

Robots – machines, algorithms, artificial intelligence – already play an important role in society, and their influence is growing very fast. Robots are already supplementing or even replacing some human judgments. Many are concerned with the fairness, accuracy, and humanity of these systems, and rightly so. At this point, anxiety about machine bias is sky high. The concerns are important, but they almost always run in one direction. We worry about robot bias against humans. We do not worry about human bias against robots. Professor Woods shows that this is a critical mistake.

It is not an error because robots somehow inherently deserve to be treated fairly, although that may someday be true. It is an error because our bias against nonhuman deciders is bad for us humans. A great example Professor Woods provides is self-driving cars. It would be an obvious mistake to reject all self-driving cars merely because one caused a single fatal accident. Yet this is what happened, for a while at least, when an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona. See, e.g., fn. 71 of Robophobia, citing Ryan Randazzo, Arizona Gov. Doug Ducey Suspends Testing of Uber Self-Driving Cars, Ariz. Republic (Mar. 26, 2018). This kind of one-sided perfection bias ignores the fact that human drivers cause some forty thousand traffic fatalities a year, with an average of three deaths every day in Arizona alone. We tolerate enormous risk from our fellow humans, but almost none from machines. That is flawed, biased thinking. Yet even rah-rah techno promoters like me suffer from it.

Ralph hoping a human driver shows up soon.

Professor Woods shows that while there is a substantial literature concerned with algorithmic bias against humans, the opposite bias, human prejudice against algorithms, has until now been ignored by scholars. The evidence suggests that we routinely prefer worse-performing humans over better-performing robots. Woods points out that we do this on our roads, in our courthouses, in our military, and in our hospitals. As he puts it in the Highlights section that precedes the Robophobia article itself, which I am liberally paraphrasing in this Quick Summary: “Our bias against robots is costly, and it will only get more so as robots become more capable.”

Robophobia not only catalogs the many different forms of anti-robot bias that already exist, which Woods calls a taxonomy of robophobia, it also suggests reforms to curtail the harmful effects of that bias. Robophobia provides many good reasons to be less biased against robots. Not totally trusting, mind you, just less biased. It is in our own best interests. As Professor Woods puts it, “We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”

A Note About “Robot” Terminology

Before we get too deep into Robophobia, we need to be clear about what Professor Woods means by “robot.” We need to define our terms. Woods does this in his first footnote, where he explains as follows (HAL image added):

The article is concerned with human judgment of automated decision-makers, which include “robots,” “machines,” “algorithms,” or “AI.” There are meaningful differences between these concepts and important line-drawing debates to be had about each one. However, this Article considers them together because they share a key feature: they are nonhuman deciders that play an increasingly prominent role in society. If a human judge were replaced by a machine, that machine could be a robot that walks into the courtroom on three legs or an algorithm run on a computer server in a faraway building remotely transmitting its decisions to the courthouse. For present purposes, what matters is that these scenarios represent a human decider being replaced by a nonhuman one. This is consistent with the approach taken by several others. See, e.g., Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135 (2019) (bundling artificial intelligence and physical robots under the same moniker, “robots”); Jack Balkin, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217, 1219 (2017) (“When I talk of robots … I will include not only robots – embodied material objects that interact with their environment – but also artificial intelligence agents and machine learning algorithms.”); Berkeley Dietvorst & Soaham Bharti, People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error, 31 PSYCH. SCI. 1302, 1314 n.1 (2020) (“We use the term algorithm to describe any tool that uses a fixed step-by-step decision-making process, including statistical models, actuarial tables, and calculators.”). This grouping contrasts scholars who have focused explicitly on certain kinds of nonhuman deciders. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 529 (2015) (focusing on robots as physical, corporeal objects that satisfy the “sense-think-act” test as compared to, say, a “laptop with a camera”).

I told you Professor Woods was a careful scholar, but I wanted you to see for yourself with a full quote of footnote one. I promise to exclude footnotes and his many string cites going forward in this blog article, but I do intend to frequently quote his insightful, policy-packed language. Did you note the citation to Eugene Volokh’s Chief Justice Robots in his explanation of “robots”? I will end this first part of my review of Robophobia with a side excursion concerning the real Chief Justice Roberts. It provides a good example of irrational robot fears and some insight into the Chief Justice himself, which is something I’ve been considering a lot lately. See, e.g., my recent article The Words of Chief Justice Roberts on JUDICIAL INTEGRITY Suggest the Supreme Court Should Step Away from the Precipice and Not Overrule ‘Roe v Wade’.

Chief Justice Roberts Told High School Graduates in 2018 to “Beware the Robots”

The Chief Justice gave a very short speech at his daughter’s private high school graduation in which he demonstrated a bit of robot anxiety, but did so in an interesting manner. It bears some examination before we get into the substance of Woods’ Robophobia article. For more background on the speech see, e.g., Debra Cassens Weiss, ‘Beware the robots,’ chief justice tells high school graduates, ABA Journal (June 6, 2018). Here are the excerpted words of Chief Justice John Roberts:

Beware the robots! My worry is not that machines will start thinking like us. I worry that we will start thinking like machines. Private companies use artificial intelligence to tell you what to read, to watch and listen to, based on what you’ve read, watched and listened to. Those suggestions can narrow and oversimplify information, stifling individuality and creativity.

Any politician would find it very difficult not to shape his or her message to what constituents want to hear. Artificial intelligence can change leaders into followers. You should set aside some time each day to reflect. Do not read more, do not research more, do not take notes. Put aside books, papers, computers, telephones. Sit, perhaps just for a half hour, and think about what you’re learning. Acquiring more information is less important than thinking about the information you have.

Aside from the robot-fear part, which was really just an attention-grabbing speech device, I could not agree more with his main point. We should move beyond mere information; we should take time to process the information and subject it to critical scrutiny. We should transform from mere information gatherers into knowledge makers. My point exactly in Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015). You could also compare this progression with an ediscovery example: moving from mere keyword search to predictive coding.


Part Two of my review of Robophobia is coming soon. In the meantime, take a break and think about any fears you may have about AI. Everyone has some. Would you let the AI drive your car? Select your documents for production? Are our concerns about killer robots really justified, or maybe just the result of media hype? For more thoughts on this, see AI-Ethics.com. And yes, I’ll be Baaack.


Elusion Random Sample Test Ordered Under Rule 26(g) in a Keyword Search Based Discovery Plan

August 26, 2018

There is a new case out of Chicago that advances the jurisprudence of my sub-specialty, Legal Search. City of Rockford v. Mallinckrodt ARD Inc., 2018 WL 3766673, Case 3:17-cv-50107 (N.D. Ill., Aug. 7, 2018). This discovery order was written by U.S. Magistrate Judge Iain Johnston, who entitled it “Order Establishing Production Protocol for Electronically Stored Information.” The opinion is both advanced and humorous, destined to be an oft-cited favorite for many. Thank you, Judge Johnston.

In City of Rockford an Elusion random sample quality assurance test was required as part of the parties’ discovery plan to meet the reasonable efforts requirements of Rule 26(g). The random sample procedure proposed was found to impose only a proportional, reasonable burden under Rule 26(b)(1). What makes this holding particularly interesting is that an Elusion test is commonly employed in predictive coding projects, but here the parties had agreed to a keyword search based discovery plan. Also see: Tara Emory, PMP, Court Holds that Math Matters for eDiscovery Keyword Search, Urges Lawyers to Abandon their Fear of Technology (Driven, August 16, 2018) (“party using keywords was required to test the search effectiveness by sampling the set of documents that did not contain the keywords.”).

The Known Unknowns and Unknown Unknowns

Judge Johnston begins his order in City of Rockford with a famous quote from Donald Rumsfeld, the two-time Secretary of Defense.

“[A]s we know there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. . .”
Donald Rumsfeld

For those not familiar with this famous Known Knowns quip, here is a video of the original:

Here the knowledge logic is spelled out in a chart, since I know we all love that sort of thing. Deconstructing Rumsfeld: Knowledge and Ignorance in the Age of Innovation (Inovo 5/114).

Anybody who does complex investigations is familiar with this problem. Indeed, you can argue this insight is fundamental to all of science and the experimental method. See Logan, David C., Known knowns, known unknowns, unknown unknowns and the propagation of scientific enquiry, 60 Journal of Experimental Botany 712–14 (2009). [I have always wanted to quote a botany journal.]

How do you deal with the known unknowns and the unknown unknowns, the information that we don’t even know that we don’t know about? The deep, hidden information that is both obscure and rare. Information that is hard to retrieve, and harder still to prove does not exist at all. Are you chasing something that might not exist? Something unknown because nonexistent? Such as an overlooked Highly Relevant document? (The stuff of nightmares!) Are you searching for nothing? Zero? If you find it, what does that mean? What can be known and what can never be known? Scientists, investigators and Secretaries of Defense alike all have to ponder these questions, and all want to use the best tools and best people possible to do so. See: Deconstructing Rumsfeld: Knowledge and Ignorance in the Age of Innovation (Inovo 5/114).

Seeking Knowledge of the Unknown Elusion Error Rate

These big questions, though interesting, are not why Judge Johnston started his opinion with the Rumsfeld quote. Instead, he used the quote to emphasize that new e-discovery methods, namely random sampling and statistical analysis, can empower lawyers to know what they never could before: a technical way to know the known unknowns. For instance, a way to know the number of relevant documents that will be missed and not produced, the documents that elude retrieval.

As the opinion and this blog will explain, you can do that, know that, by using an Elusion random sample of the null set. The statistical analysis of the sample transforms an unknown quantity into a known one (subject to statistical probabilities and ranges). It allows lawyers to know, at least within a range, the number of relevant documents that have not been found. This is a very useful quality assurance method that relies on objective measurements to demonstrate the success of your project, which here is information retrieval. This and other random sampling methods allow for the calculation of Recall, meaning the percent of total relevant documents found: another math-based quality assurance tool in the field of information retrieval.
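For the technically curious, here is a minimal sketch of the Elusion arithmetic in Python. The numbers and function names are mine, purely for illustration; they are not from the court’s order.

```python
def elusion_estimate(null_set_size, sample_size, relevant_in_sample):
    """Project the number of relevant documents missed (False Negatives)
    from a random sample of the null set (the documents not produced)."""
    elusion_rate = relevant_in_sample / sample_size   # fraction of the sample found relevant
    projected_misses = elusion_rate * null_set_size   # point estimate for the whole null set
    return elusion_rate, projected_misses

# Example: a 2,400-document sample of a 500,000-document null set turns up 12 relevant docs.
rate, misses = elusion_estimate(500_000, 2_400, 12)
print(f"Elusion rate: {rate:.2%}; projected missed relevant docs: {misses:,.0f}")
# Elusion rate: 0.50%; projected missed relevant docs: 2,500
```

A real project would also compute a confidence interval around that point estimate, but the basic idea is just this simple proportion.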

One of the main points Judge Johnston makes in his order is that lawyers should embrace this kind of technical knowledge, not shy away from it. As Tara Emory said in her article, Court Holds that Math Matters for eDiscovery Keyword Search:

A producing party must determine that its search process was reasonable. In many cases, the best way to do this is with objective metrics. Producing parties often put significant effort into brainstorming keywords, interviewing witnesses to determine additional terms, negotiating terms with the other party, and testing the documents containing their keywords to eliminate false positives. However, these efforts often still fail to identify documents if important keywords were missed, and sampling the null set is a simple, reasonable way to test whether additional keywords are needed. …

It is important to overcome the fear of technology and its related jargon, which can help counsel demonstrate the reasonableness of the search and production process. As Judge Johnston explains, sampling the null set is a process to determine “the known unknown,” which “is the number of the documents that will be missed and not produced.” Judge Johnston disagreed with the defendants’ argument “that searching the null set would be costly and burdensome.” The Order requires Defendants to sample their null set at a 95% +/-2% margin of error (which, even for a very large set of documents, would be about 2,400 documents to review). By taking these measures, either with TAR or with search terms, counsel can more appropriately represent that they have undertaken a “reasonable inquiry” for relevant information within the meaning of FRCP 26(g)(1).

Small Discovery Dispute in an Ocean of Cooperation

Judge Johnston was not asked to solve the deep mysteries of knowing and not knowing in City of Rockford. The parties came to him instead with an interesting, esoteric discovery dispute. They had agreed on a great number of things, for which the court profusely congratulated them.

The attorneys are commended for this cooperation, and their clients should appreciate their efforts in this regard. The Court certainly does. The litigation so far is a solid example that zealous advocacy is not necessarily incompatible with cooperation. The current issue before the Court is an example of that advocacy and cooperation. The parties have worked to develop a protocol for the production of ESI in this case, but have now reached an impasse as to one aspect of the protocol.

The parties disagreed on whether to include a document review quality assurance test in the protocol. The Plaintiffs wanted one; the Defendants did not. Too burdensome, they said.

To be specific, the Plaintiffs wanted a test where the efficacy of any party’s production would be tested by use of an Elusion-type random sample of the documents not produced. The Defendants opposed any specific test. Instead, they wanted the discovery protocol to say that if the receiving party had concerns about the adequacy of the producing party’s efforts, then the parties would confer to address those concerns.

Judge Johnston ruled for the Plaintiffs in this dispute and ordered a random elusion sample to be taken after the Defendants stopped work and completed production. In this case it was a good decision, but such a test should not be routinely required in all matters.

The Stop Decision and Elusion Sample

One of the fundamental problems in any investigation is knowing when you should stop because it is no longer worth the effort to carry on. When has a reasonable effort been completed? Ideally this happens after all of the important documents have already been found. At that point you should stop the effort and move on to a new project. Or perhaps you should keep going and look for more? Should you stop or not?

In Legal Search we call this the “Stop Decision”: should you conclude the investigation, or continue with further AI training rounds and other searches? As explained in the e-Discovery Team TAR Course:

The all important stop decision is a legal, statistical decision requiring a holistic approach, including metrics, sampling and over-all project assessment. You decide to stop the review after weighing a multitude of considerations. Then you test your decision with a random sample in Step Seven.

See: TAR Course: 15th Class – Step Seven – ZEN Quality Assurance Tests.

If you want to go deeper into this, then listen in on this TAR Course lecture on the Stop decision.

____________

Once a decision is made to Stop, a well managed document review project will use different tools and metrics to verify that the Stop decision was correct. Judge Johnston in City of Rockford used one of my favorite tools, the Elusion random sample of the type that I teach in the e-Discovery Team TAR Course.

Judge Johnston ordered an Elusion-type random sample of the null set in City of Rockford. The sample would estimate the number of relevant documents that likely eluded the search: the False Negatives, documents presumed Irrelevant and withheld that were in fact Relevant and should have been produced. The Elusion sample is designed to give you information on the total number of Relevant documents that were likely missed, unretrieved, unreviewed and not produced or logged. The fewer the False Negatives, the better the Recall of True Positives. The goal is to find, to retrieve, all of the Relevant ESI in the collection.

Another way to say the same thing is that the goal is zero False Negatives: you do not miss a single relevant file, and every file designated Irrelevant is in fact not relevant; they are all True Negatives. That would be Total Recall: “the Truth, the Whole Truth …” But that is very rare, and some error, some False Negatives, is expected in every large information retrieval project. Some relevant documents will almost always be missed, so the goal is to make the False Negatives inconsequential and keep the Elusion rate low.
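To make the Recall connection concrete, here is a toy calculation in Python. The counts are invented, for illustration only:

```python
def recall(true_positives, false_negatives):
    """Recall = relevant documents found / all relevant documents in the collection."""
    return true_positives / (true_positives + false_negatives)

print(recall(9_000, 1_000))  # 0.9 -> 90% of all relevant documents were found
print(recall(9_000, 0))      # 1.0 -> Total Recall: zero False Negatives
```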

Here is how Judge Iain Johnston explained the random sample:

Plaintiffs propose a random sample of the null set. (The “null set” is the set of documents that are not returned as responsive by a search process, or that are identified as not relevant by a review process. See Maura R. Grossman & Gordon V. Cormack, The Grossman-Cormack Glossary of Technology-Assisted Review, 7 Fed. Cts. L. Rev. 1, 25 (2013). The null set can be used to determine “elusion,” which is the fraction of documents identified as non-relevant by a search or review effort that are, in fact, relevant. Elusion is estimated by taking a random sample of the null set and determining how many or what portion of documents are actually relevant. Id. at 15.) FN 2

Judge Johnston’s Footnote Two is interesting for two reasons. First, it attempts to calm lawyers who freak out when they hear anything having to do with math or statistics, much less information science and technology. Second, it does so with a reference to Fizbo the clown.

The Court pauses here for a moment to calm down litigators less familiar with ESI. (You know who you are.) In life, there are many things to be scared of, including, but not limited to, spiders, sharks, and clowns – definitely clowns, even Fizbo. ESI is not something to be scared of. The same is true for all the terms and jargon related to ESI. … So don’t freak out.

Accept on Zero Error for Hot Documents

Although this is not addressed in the court order, in my personal view no False Negatives, i.e. overlooked documents, are acceptable when it comes to Highly Relevant documents. If even one document like that is found in the sample, one Highly Relevant document, then the Elusion test has failed in my view. You must conclude that the Stop decision was wrong, and training and document review must recommence. That is called an Accept on Zero Error test for any hot documents found. Of course, my personal views on best practice here assume the use of AI ranking, and the parties in City of Rockford only used keyword search. Apparently they were not doing machine training at all.
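The rule is simple enough to state in code. Here is a minimal sketch, assuming each sampled document has been coded “hot” (Highly Relevant), “relevant” or “irrelevant”; the labels are mine, not from any standard:

```python
def accept_on_zero_error(sample_codings):
    """Fail the quality assurance test if even one Highly Relevant
    document is found to have eluded review."""
    if any(code == "hot" for code in sample_codings):
        return False  # Stop decision was wrong: resume training and review
    return True       # no hot documents eluded; the Stop decision stands

print(accept_on_zero_error(["irrelevant", "relevant", "irrelevant"]))  # True
print(accept_on_zero_error(["irrelevant", "hot", "irrelevant"]))       # False
```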

The odds of finding False Negatives in a modest sized random sample are very low when only a few exist (very low prevalence) and the database is large. With very low prevalence of relevant ESI, the test can be of limited effectiveness. That is an inherent problem with low prevalence and random sampling. That is why statistics have only limited effectiveness and should be considered part of a total quality control program. See Zero Error Numerics: ZEN. Math matters, but so too do good project management and communications.
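A quick back-of-the-envelope calculation shows why rarity defeats modest samples. The prevalence figures below are hypothetical:

```python
def p_detect(prevalence, sample_size):
    """Probability that a random sample catches at least one of the rare misses."""
    return 1 - (1 - prevalence) ** sample_size

print(f"{p_detect(0.0001, 2_400):.0%}")  # ~21%: misses at 1-in-10,000 usually slip past
print(f"{p_detect(0.001, 2_400):.0%}")   # ~91%: at 1-in-1,000 the sample usually catches one
```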

The inherent problem with random sampling is that the only way to narrow the error interval is to increase the size of the sample. For instance, to decrease the margin of error to only 2% either way, a total spread of 4%, a random sample of around 2,400 documents is needed. Even then, there is still the separate error factor of the Confidence Level, here 95%. It is usually not worth the effort to review the even larger sample needed to raise that to a 99% Confidence Level.
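That 2,400-document figure is not magic; it falls out of the standard sample-size formula for a proportion, assuming worst-case prevalence of 50%. A sketch:

```python
def sample_size(z_score, margin_of_error, p=0.5):
    """Classic sample-size formula for a proportion: n = z^2 * p * (1 - p) / e^2."""
    return (z_score ** 2) * p * (1 - p) / margin_of_error ** 2

print(round(sample_size(1.96, 0.02)))   # 2401 docs: 95% confidence, +/-2% margin
print(round(sample_size(2.576, 0.02)))  # 4147 docs: the extra review needed for 99% confidence
```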

Random sampling has limitations in low prevalence datasets, which are typical in e-discovery, but sampling can still be very useful. Due to this rarity issue, and the care that producing parties always take to attain high Recall, any documents found in an Elusion random sample should be carefully studied to see if they are of any significance. We look very carefully at any new documents found that are of a kind not seen before. That is unusual. Typically, any relevant documents found by a random sample of the elusion set are of a type that has been seen before, often many, many times. These “same old, same old” documents are of no importance to the investigation at this point.

Most email-related datasets are filled with duplicative, low value data. It is not exactly irrelevant noise, but it is not a helpful signal either. We do not care if we get all of that kind of merely relevant data. What we really want are the Hot Docs, the high value Highly Relevant ESI, or at least Relevant documents of a kind not seen before. That is why the Accept on Zero Error test is so important for Highly Relevant documents.

The Elusion Test in City of Rockford

In City of Rockford Judge Johnston considered a discovery stipulation where the parties had agreed to use a typical keyword search protocol, but disagreed on a quality assurance protocol. Judge Johnston held:

With key word searching (as with any retrieval process), without doubt, relevant documents will be produced, and without doubt, some relevant documents will be missed and not produced. That is a known known. The known unknown is the number of the documents that will be missed and not produced.

Back to the False Negatives again, the known unknown. Judge Johnston continues his analysis:

But there is a process by which to determine that answer, thereby making the known unknown a known known. That process is to randomly sample the null set. Karl Schieneman & Thomas C. Gricks III, The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 2013 Fed. Cts. L. Rev. 239, 273 (2013) (“[S]ampling the null set will establish the number of relevant documents that are not being produced.”). Consequently, the question becomes whether sampling the null set is a reasonable inquiry under Rule 26(g) and proportional to the needs of this case under Rule 26(b)(1).

Rule 26(g) Certification

Judge Johnston takes an expansive view of the duties placed on counsel of record by Rule 26(g), but concedes that perfection is not required:

Federal Rule of Civil Procedure 26(g) requires all discovery requests be signed by at least one attorney (or party, if proceeding pro se). Fed. R. Civ. P. 26(g)(1). By signing the response, the attorney is certifying that to the best of counsel’s knowledge, information, and belief formed after a reasonable inquiry, the disclosure is complete and correct at the time it was made. Fed. R. Civ. P. 26(g)(1)(A). But disclosure of documents need not be perfect. … If the Federal Rules of Civil Procedure were previously only translucent on this point, it should now be clear with the renewed emphasis on proportionality.

Judge Johnston concludes that Rule 26(g) on certification applies to require the Elusion sample in this case.

Just as it is used in TAR, a random sample of the null set provides validation and quality assurance of the document production when performing key word searches. Magistrate Judge Andrew Peck made this point nearly a decade ago. See William A. Gross Constr. Assocs., 256 F.R.D. at 135-6 (citing Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 262 (D. Md. 2008)); In re Seroquel Products Liability Litig., 244 F.R.D. 650, 662 (M.D. Fla. 2007) (requiring quality assurance).

Accordingly, because a random sample of the null set will help validate the document production in this case, the process is reasonable under Rule 26(g).

Rule 26(b)(1) Proportionality

Judge Johnston considered as a separate issue whether the requested elusion test was proportional under Rule 26(b)(1). Again, the court found that it was, in this large case concerning the pricing of prescription medication:

The Court’s experience and understanding is that a random sample of the null set will not be unreasonably expensive or burdensome. Moreover and critically, Defendants have failed to provide any evidence to support their contention. Mckinney/Pearl Rest. Partners, L.P. v. Metro. Life Ins. Co., 322 F.R.D. 235, 242 (N.D. Tex. 2016) (party required to submit affidavits or offer evidence revealing the nature of the burden).

Once again we see a party seeking protection from having to do something because it is supposedly too burdensome, then failing to present any actual evidence of burden. We have been seeing this a lot lately. See Responding Party’s Complaints of Financial Burden of Document Review Were Unsupported by the Evidence, Any Evidence (e-Discovery Team, 8/5/18).

Judge Johnston concludes his “Order Establishing Production Protocol for Electronically Stored Information” with the following:

The Court adopts the parties’ proposed order establishing the production protocol for ESI with the inclusion of Plaintiffs’ proposal that a random sample of the null set will occur after the production and that any responsive documents found as a result of that process will be produced. Moreover, following that production, the parties should discuss what additional actions, if any, should occur. If the parties cannot agree at that point, they can raise the issue with the Court.

Conclusion

City of Rockford is important because it is the first case to hold that a quality control procedure should be used to meet the reasonable efforts certification requirements of Rule 26(g). The procedure required here was a random sample Elusion test with related, limited data sharing. If this interpretation of Rule 26(g) is followed by other courts, it could have a big impact on legal search jurisprudence. Tara Emory, in her article Court Holds that Math Matters for eDiscovery Keyword Search, goes so far as to conclude that City of Rockford stands for the proposition that “the testing and sampling process associated with search terms is essential for establishing the reasonableness of a search under FRCP 26(g).”

The City of Rockford holding could persuade other judges to be more active and impose specific document review procedures on all parties, including requiring the use of sampling and artificial intelligence. The producing party will not always get a free pass under Sedona Principle Six. Testing and sampling may well be routinely ordered in all “large” document review cases in the future.

It will be very interesting to watch how other attorneys argue City of Rockford. It continues a line of cases examining methodology and procedures in document review. See, e.g., William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009) (“wake-up call” for lawyers on keyword search); Winfield v. City of New York (S.D.N.Y. Nov. 27, 2017), where Judge Andrew Peck considers the methodologies and quality controls of the active machine learning process. Also see Special Master Maura Grossman’s Order Regarding Search Methodology for ESI, a validation protocol for the Broiler Chicken antitrust cases.

The validation procedure of an Elusion sample in City of Rockford is just one of many possible review protocols that a court could impose under Rule 26(g). There are dozens more, including whether predictive coding should be required. So far, courts have been reluctant to order that, as Judge Peck explained in Hyles:

There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet.

Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016).

Like a kid in the backseat of the car, I cannot help but ask: are we there yet? Hyles was published over two years ago now. Maybe some court, somewhere in the world, has already ordered a party to do predictive coding against its will, but not to our knowledge. That is a known unknown. Still, we are closer to “there” with City of Rockford’s requirement of an Elusion test.

When we get “there,” and TAR is finally ordered in a case, it will probably arise in a situation like City of Rockford, where a joint protocol applicable to all parties is involved. That is an easier sell than a one-sided protocol. The court is likely to justify the order under Rule 26(g), holding that all parties in the case must use predictive coding to meet the reasonable effort burdens of Rule 26(g). Other rules will be cited too, of course, including Rule 1, but Rule 26(g) is likely to be key.


Responding Party’s Complaints of Financial Burden of Document Review Were Unsupported by the Evidence, Any Evidence

August 5, 2018

One of the largest cases in the U.S. today is a consolidated group of price-fixing cases in the District Court in Chicago. In Re Broiler Chicken Antitrust Litigation, 290 F. Supp. 3d 772 (N.D. Ill. 2017) (order denying motions to dismiss and discussing the case). The consolidated antitrust cases involve allegations of widespread chicken price-fixing. Big Food Versus Big Chicken: Lawsuits Allege Processors Conspired To Fix Bird Prices (NPR 2/6/18).

The level of sales and potential damages is high. For instance, in 2014 sales of broiler chickens in the U.S. were $32.7 billion. That is sales for one year. The classes have not been certified yet, but discovery is underway in the consolidated cases.

The Broiler Chicken case is not only big money, but big e-discovery. A Special Master (Maura Grossman) was appointed months ago, and she developed a unique e-discovery validation protocol order for the case. See: TAR for Smart Chickens, by John Tredennick and Jeremy Pickens, which analyzes the validation protocol.

Maura was not involved in the latest discovery dispute, in which Agri Stats, one of many defendants, claimed a request for production was too burdensome as applied to it. The dispute went straight to the presiding Magistrate Judge Jeffrey T. Gilbert, who issued his order on July 26, 2018. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

Agri Stats had moved for a protective order to limit an email production request. Agri Stats claimed that the burden imposed was not proportional because it would be too expensive. Its lawyers told Judge Gilbert that it would cost between $1,200,000 and $1,700,000 to review the email using the keywords negotiated.

Fantasy Hearing

I assume that there were hearings, and attorney conferences before the hearings. But I do not know that for sure. I have not seen a transcript of the hearings with Judge Gilbert. All we know is that defense counsel told the judge that under the keywords selected the document review would cost between $1,200,000 and $1,700,000, and that they offered no explanation of how the cost estimate was prepared, nor any specifics as to what it covered. Although I was not there, after four decades of doing this sort of work, I have a pretty good idea of what was, or might have been, said at the hearing.

This representation of million dollar costs by defense counsel would have gotten the attention of the judge. He would naturally have wanted to know how the cost range was calculated. I can almost hear the judge say from the bench: “$1.7 million dollars to do a doc review. Yeah, ok. That is a lot of money. Why so much, counsel? Anyone?” To which the defense attorneys said in response, much like the students in Ferris Bueller’s class:

“. . . . . .”

 

Yes. That’s right. They had Nothing. Just Voodoo Economics.

Well, Judge Gilbert’s short opinion makes it seem that way. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

If a Q&A interchange like this happened, either in a phone hearing or in person, then the lawyers must have said something. You do not just ignore a question from a federal judge. The defense attorneys probably did a little hemming and hawing, conferred among themselves, and then said something to the judge like: “We are not sure how those numbers were derived, $1.2M to $1.7M, and will have to get back to you on that question, Your Honor.” And then they never did. I have seen this kind of thing a few times before. We all try to avoid it. But it is even worse to make up a false story, or to present an unverified story to the judge. Better to say nothing and get back to the judge with accurate information.

Discovery Order of July 26, 2018

Here is a quote from Judge Gilbert’s Order so you can read for yourself the many questions the moving party left unanswered (detailed citations to record removed; graphics added):

Agri Stats represents that the estimated cost to run the custodial searches EUCPs propose and to review and produce the ESI is approximately $1.2 to $1.7 million. This estimated cost, however, is not itemized nor broken down for the Court to understand how it was calculated. For example, is it $1.2 to $1.7 million to review all the custodial documents from 2007 through 2016? Or does this estimate isolate only the pre-October 2012 custodial searches that Agri Stats does not want to have to redo, in its words? More importantly, Agri Stats also admits that this estimate is based on EUCPs’ original proposed list of search terms. But EUCPs represent (and Agri Stats does not disagree) that during their apparently ongoing discussions, EUCPs have proposed to relieve Agri Stats of the obligation to produce various categories of documents and data, and to revise the search terms to be applied to data that is subject to search. Agri Stats does not appear to have provided a revised cost estimate since EUCPs agreed to exclude certain categories of documents and information and revised their search terms. Rather, Agri Stats takes the position that custodial searches before October 3, 2012 are not proportional to the needs of the case — full stop — so it apparently has not fully analyzed the cost impact of EUCPs’ revised search terms or narrowed document and data categories.

The Court wonders what the cost estimate is now after EUCPs have proposed to narrow the scope of what they are asking Agri Stats to do. (emphasis added) EUCPs say they already have agreed, or are working towards agreement, that 2.5 million documents might be excluded from Agri Stats’s review. That leaves approximately 520,000 documents that remain to be reviewed. In addition, EUCPs say they have provided to Agri Stats revised search terms, but Agri Stats has not responded. Agri Stats says nothing about this in its reply memorandum.

EUCPs contend that Agri Stats’s claims of burden and cost are vastly overstated. The Court tends to agree with EUCPs on this record. It is not clear what it would cost in either time or money to review and produce the custodial ESI now being sought by EUCPs for the entire discovery period set forth in the ESI Protocol or even for the pre-October 3, 2012 period. It seems that Agri Stats itself also does not know for sure what it would have to do and how much it would cost because the parties have not finished that discussion. Because EUCPs say they are continuing to work with Agri Stats to reduce what it must do to comply with their discovery requests, the incremental burden on what Agri Stats now is being asked to do is not clear.

For all these reasons, Agri Stats falls woefully short of satisfying its obligation to show that the information EUCPs are seeking is not reasonably accessible because of undue burden or cost.

Estimations for Fun and Profit

In order to obtain a protective order, you need to estimate the costs that will likely be involved in the discovery from which you seek protection. Simple. Moreover, it obviously has to be a reasonable, good faith estimate, supported by the facts. The Broiler Chicken defendant, Agri Stats, came up with an estimate. They got that part right. But then they stopped. You never do that. You do not just throw up a number and hope for the best. You have to explain how it was derived. Blushing at any price higher than that is not a reasonable explanation, even if an honest one.

Be ready to explain how you came up with the cost estimate: to break down the total into its component parts and allow the “Court to understand how it was calculated.” Agri Stats did not do that. Instead, they just threw out a cost estimate of between $1.2 and $1.7 million. So of course Agri Stats’ motion for protective order was denied. The judge had no choice, because no evidence was presented to support the motion, neither factual nor expert. There was no need for Judge Gilbert to reach the secondary questions of whether expert testimony was also needed and whether it should come in under Rule 702. He got nothing, remember. No explanation for the $1.7 million.
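To show what “component parts” might look like, here is a hedged sketch of a simple review cost model. Every rate and speed below is an assumption of mine for illustration; none of it is from the record in Broiler Chicken:

```python
def review_cost(docs, docs_per_hour, rate_per_hour):
    """Itemize one line of a document review estimate: hours times hourly rate."""
    return (docs / docs_per_hour) * rate_per_hour

first_pass = review_cost(520_000, 50, 60)    # contract reviewers on the full set
second_pass = review_cost(52_000, 25, 200)   # senior counsel QC on roughly 10%
print(f"First pass:  ${first_pass:,.0f}")    # $624,000
print(f"Second pass: ${second_pass:,.0f}")   # $416,000
print(f"Total:       ${first_pass + second_pass:,.0f}")  # $1,040,000
```

An itemization like this, backed by an affidavit from someone who actually runs document reviews, is the kind of showing the court said was missing.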

The lesson of the latest discovery order in Broiler Chicken is pretty simple. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18). Get a real cost estimate from an expert. The expert needs to know and understand document review, search, and the costs of review. They need to know how to make reasonable search and retrieval efforts. They also need to know how to make reliable estimates. You may need two experts for this, as not all have expertise in both fields, but they are readily available. Many can even talk pretty well too, but not all! Seriously, everybody knows we are the most fun and interesting lawyer subgroup.

The last thing you want to do is skimp on an expert, pull a number out of your hat (or your vendor’s hat), and hope for the best.

This is federal court, not a political rally. You do not make bald assertions and leave the court wondering. Facts matter. Back-of-the-envelope guesses are not sufficient, especially in a big case like Broiler Chicken. Neither are guesstimates by people who do not know what they are doing. Make disclosure and cooperate with the requesting party to reach agreement. Do not just rush to the courthouse hoping to dazzle with smoke and mirrors. Bring in the experts. They may not dazzle, but they can get you beyond the magic mirrors.

Case Law Background

Judge Paul S. Grewal, who is now Deputy G.C. of Facebook, quoted The Sedona Conference in Vasudevan: “There is no magic to the science of search and retrieval: only mathematics, linguistics, and hard work.” Vasudevan Software, Inc. v. Microstrategy Inc., No. 11-cv-06637-RS-PSG, 2012 U.S. Dist. LEXIS 163654 (N.D. Cal. Nov. 15, 2012) (quoting The Sedona Conference, Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery, 8 Sedona Conf. J. 189, 208 (2007)). There is also no magic to the art of estimation, no magic to calculating the likely range of cost to search and retrieve the documents requested. Judge Grewal refused to make any decision in Vasudevan without expert assistance, recognizing that this area is “fraught with traps for the unwary” and should not be decided on mere arguments of counsel.

Judge Grewal did not address the procedural issue of whether Rule 702 should govern, but he did cite Judge Facciola’s case on the subject, United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008). There Judge Facciola first raised the discovery expert evidence issue. He not only opined that experts should be used, but that the parties should follow the formalities of Evidence Rule 702, which governs such things as whether you should qualify and swear in an expert and how their testimony should be presented. I discussed this somewhat in my earlier article this year, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use.

Judge Facciola in O’Keefe held that document review issues require expert input and that this input should be provided with all of the protections of Evidence Rule 702.

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence. Accordingly, if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.

Conclusion

In the Broiler Chicken Antitrust Order of July 26, 2018, a motion for protective order was denied because of inadequate evidence of burden. All the responding party did was quote a price range, a number presumably provided by an expert, but with no explanation. More evidence was needed, both expert and fact. I agree that document review cost estimation generally requires the opinions of experts, and the experts need to be proficient in two fields: the science of document search and retrieval, and the likely costs of those services for a particular set of data.

Although all of the formalities and expense of compliance with Evidence Rule 702 may be needed in some cases, they are probably not necessary in most. Just bring your expert to the attorney conference or hearing. Yes, two experts may well disagree on some things, probably will, but the areas of agreement are usually far more important. That in turn makes compromise and negotiation far easier. Better to leave the technical details to the experts to sort out. That follows the Rule 1 prime directive of “just, speedy and inexpensive.” Keep the trial lawyers out of it. They should instead focus and argue on what the documents mean.


Another Judge is Asked to Settle a Keyword Squabble and He Hesitates To Go Where Angels Fear To Tread: Only Tells the Parties What Keywords NOT To Use

July 15, 2018

In this blog we discuss yet another case where the parties bickered over keywords and the judge was asked to intervene. Webasto Thermo & Comfort v. BesTop, Inc., 2018 WL 3198544, No. 16-13456 (E.D. Mich. June 29, 2018). The opinion was written in a patent case in Detroit by Executive Magistrate Judge R. Steven Whalen. He looked at the proposed keywords and found them wanting, but wisely refused to go further and tell the parties what keywords to use. Well done, Judge Whalen!

This case is similar to the one discussed in my last blog, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use, where Magistrate Judge Laura Fashing in Albuquerque was asked to resolve a keyword dispute in United States v. New Mexico State University, No. 1:16-cv-00911-JAP-LF, 2017 WL 4386358 (D.N.M. Sept. 29, 2017). Judge Fashing not only found the proposed keywords inadequate, but came up with her own replacement keywords, and did so without any expert input.

In my prior blog on Judge Fashing’s decision, I discussed Judge John Facciola’s landmark legal search opinion in United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008) and other cases that follow it. In O’Keefe, Judge Facciola held that because keyword search questions involve complex, technical, scientific issues, a judge should not decide them without the help of expert testimony. That is the context for his famous line:

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence.

In this week’s blog I consider the opinion by Judge Whalen in Webasto Thermo & Comfort v. BesTop, Inc., 2018 WL 3198544, No. 16-13456 (E.D. Mich. June 29, 2018), where he told the parties what keywords not to use, again without expert input, but stopped there. Interesting counterpoint cases. It is also interesting to observe that in all three cases, O’Keefe, New Mexico State University and Webasto, the judges end on the same note: the parties are ordered to cooperate. Ah, if it were only so easy.

Stipulated Order Governing ESI Production

In Webasto Thermo & Comfort v. BesTop, Inc., the parties cooperated at the beginning of the case. They agreed to the entry of a stipulated order governing ESI production. The stipulation included a cooperation paragraph in which the parties pledged to try to resolve all ESI issues without judicial intervention. Apparently, the cooperation did not go much beyond the stipulated order. It broke down, and the plaintiff filed a barrage of motions to avoid having to do document review, including an Emergency Motion to Stay ESI Discovery. The plaintiff alleged that the defendant violated the ESI stipulation by “propounding overly broad search terms in its request for ESI.” Oh, how terrible. Red Alert!

Plaintiffs further accused defense counsel of “propounding prima facie inappropriate search criteria, and refusal to work in good faith to target its search terms to specific issues in this case.” Again, the outrageous behavior reminds me of the Romulans. I can see why plaintiff’s counsel declared an emergency and asked for costs and for relief from having to produce any ESI at all. That kind of approach rarely goes over well with any judge, but here it worked. That is because the keywords the defense wanted plaintiff to use in its search for relevant ESI were, in fact, very bad.

Paragraph 1.3(3) of the ESI Order establishes a protocol designed to constrain e-discovery, including a limitation to eight custodians with no more than ten keyword search terms for each. It goes on to provide the following very interesting provision:

The search terms shall be narrowly tailored to particular issues. Indiscriminate terms, such as the producing company’s name or its product name, are inappropriate unless combined with narrowing search criteria that significantly reduce the risk of overproduction. A conjunctive combination of multiple words or phrases (e.g. ‘computer’ and ‘system’) narrows the search and shall count as a single term. A disjunctive combination of multiple words or phrases (e.g. ‘computer’ or ‘system’) broadens the search, and thus each word or phrase shall count as a separate search term unless they are variants of the same word. Use of narrowing search criteria (e.g. ‘and,’ ‘but not,’ ‘w/x’) is encouraged to limit the production and shall be considered when determining whether to shift costs for disproportionate discovery.

Remember, this is negotiated wording that the parties agreed to, including the bit about product names and “conjunctive combinations.”
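To see how the term-counting rule works in practice, here is a toy sketch. The counting logic is my own reading of the quoted paragraph, not the court’s:

```python
def count_terms(query):
    """Count 'computer AND system' as one term, 'computer OR system' as two,
    per the quoted provision (ignoring the variant-of-same-word exception)."""
    return len(query.split(" OR "))

print(count_terms("computer AND system"))  # 1: conjunction narrows, counts once
print(count_terms("computer OR system"))   # 2: disjunction broadens, counts per word
```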

Defendant’s Keyword Demands

The keywords proposed by defense counsel for plaintiff’s search included: “Jeep,” “drawing” and its abbreviation “dwg,” “top,” “convertible,” “fabric,” “fold,” “sale or sales,” and the plaintiff’s product names, “Swaptop” and “Throwback.”

Plaintiff’s counsel advised Judge Whalen that the ten terms produced the following results for five custodians (no word on the other three):

  • Joseph Lupo: 30 gigabytes, 118,336 documents.
  • Ryan Evans: 13 gigabytes, 44,373 documents.
  • Tyler Ruby: 10 gigabytes, 44,460 documents.
  • Crystal Muglia: 245,019 documents.
  • Mark Denny: 162,067 documents.
In Footnote Three, Judge Whalen adds, without citation to authority or the record, that:

One gigabyte would comprise approximately 678,000 pages of text. 30 gigabytes would represent approximately 21,696,000 pages of text.

Note that Catalyst did a study in 2014 of the average number of files in a gigabyte. They found that the average was 2,500 files per gigabyte, and they suggest using 3,000 files per gigabyte for cost estimates, just to be safe. So I have to wonder where Judge Whalen got this figure of 678,000 pages of text per gigabyte.
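The arithmetic gap is easy to see. A quick comparison, using Catalyst’s figures against the footnote’s page assumption (the math is mine, for illustration):

```python
GB = 30  # Joseph Lupo's reported volume

print(GB * 2_500)    # 75,000 files: Catalyst's 2014 average of 2,500 files/GB
print(GB * 3_000)    # 90,000 files: Catalyst's safer planning figure
print(GB * 678_000)  # 20,340,000 pages: the footnote's text-only assumption
```

Either file-count estimate is orders of magnitude below the footnote’s tens of millions of pages, which is why the 678,000 figure looks so odd.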

Plaintiff’s counsel added that:

Just a subset of the email discovery requests propounded by BesTop have returned more than 614,000 documents, comprising potentially millions of individual pages for production.

Plaintiff’s counsel also filed an affidavit in which he swore that he reviewed the first 100 consecutively numbered documents to evaluate the burden. Very impressive effort. Not! He looked at the first one hundred documents that happened to be on top of a 614,000-document pile. He also swore that none of these first one hundred were relevant. (One wonders how many of them were empty PST container files. They are often the first “documents” in a consecutively numbered email collection. A better sample would have been the 100 documents with the most keyword hits.)

Judge Whalen Agrees with Plaintiff on Keywords

Judge Whalen agreed with plaintiff and held that:

The majority of defendant’s search terms are overly broad, and in some cases violate the ESI Order on its face. For example, the terms “throwback” and “swap top” refer to Webasto’s product names, which are specifically excluded under 1.3(3) of the ESI Order.

The overbreadth of other terms is obvious, especially in relation to a company that manufactures and sells convertible tops: “top,” “convertible,” “fabric,” “fold,” “sale or sales.” Using “dwg” as an alternate designation for “drawing” (which is itself a rather broad term) would call into play files with common file extension .dwg.

Apart from the obviously impermissible breadth of BesTop’s search terms, their overbreadth is borne out by Mr. Carnevale’s declarations, which detail a return of multiple gigabytes of ESI potentially comprising tens of millions of pages of documents, based on only a partial production. In addition, the search of just the first 100 records produced using BesTop’s search terms revealed that none were related to the issues in this lawsuit. Contrary to BesTop’s contention that Webasto’s claim of prejudice is conclusory, I find that Webasto has sufficiently “articulate[d] specific facts showing clearly defined and serious injury resulting from the discovery sought ….” Nix, 11 Fed.App’x. at 500.

Thus, BesTop’s reliance on City of Seattle v. Professional Basketball Club, LLC, 2008 WL 539809 (W.D. Wash. 2008), is inapposite. In City of Seattle, the defendant offered no facts to support its assertion that discovery would be overly burdensome, instead “merely state[ing] that producing such emails ‘would increase the email universe exponentially[.]’” Id. at *3. In our case, Webasto has proffered hard numbers as to the staggering amount of ESI returned based on BesTop’s search requests. Moreover, while disapproving of conclusory claims of burden, the Court in City of Seattle recognized that the overbreadth of some search terms would be apparent on their face:

“‘[U]nless it is obvious from the wording of the request itself that it is overbroad, vague, ambiguous or unduly burdensome, an objection simply stating so is not sufficiently specific.’” Id., quoting Boeing Co. v. Agric. Ins. Co., 2007 U.S. Dist. LEXIS 90957, *8 (W.D.Wash. Dec. 11, 2007).

As discussed above, many of BesTop’s terms are indeed overly general on their face. And again, propounding Webasto’s product names (e.g., “throwback” and “swap top”) violates the express language of the ESI Order.

Defense Counsel Did Not Cooperate

Judge Whalen then went on to address the apparent lack of cooperation by defendant.

Adversarial discovery practice, particularly in the context of ESI, is anathema to the principles underlying the Federal Rules, particularly Fed.R.Civ.P. 1, which directs that the Rules “be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.” In this regard, the Sedona Conference Cooperation Proclamation states:

“Indeed, all stakeholders in the system–judges, lawyers, clients, and the general public–have an interest in establishing a culture of cooperation in the discovery process. Over-contentious discovery is a cost that has outstripped any advantage in the face of ESI and the data deluge. It is not in anyone’s interest to waste resources on unnecessary disputes, and the legal system is strained by ‘gamesmanship’ or ‘hiding the ball,’ to no practical effect.”

The stipulated ESI Order, which controls electronic discovery in this case, is an important step in the right direction, but whether as the result of adversarial overreach or insufficient effort, BesTop’s proposed search terms fall short of what is required under that Order.

Judge Whalen’s Ruling

Judge Whalen concluded his short Order with the following ruling:

For these reasons, Webasto’s motion for protective order [Doc. #78] is GRANTED as follows:

Counsel for the parties will meet and confer in a good-faith effort to focus and narrow BesTop’s search terms to reasonably limit Webasto’s production of ESI to emails relevant (within the meaning of Rule 26) to the issues in this case, and to exclude ESI that would have no relationship to this case.

Following this conference, and within 14 days of the date of this Order, BesTop will submit an amended discovery request with the narrowed search terms.  …

Because BesTop will have the opportunity to reformulate its discovery request to conform to the ESI Order, Webasto’s request for cost-shifting is DENIED at this time. However, the Court may reconsider the issue of cost-shifting if BesTop does not reasonably narrow its requests.

Difficult to Cooperate on Legal Search Without the Help of Experts

The defense in Webasto violated their own stipulation by using a party’s product names without further Boolean limiters, such as “product name AND another term.” Then defense counsel added insult to injury by coming across as uncooperative. I don’t know if they alone were uncooperative, or if it was a two-way street, but appearances are everything. The emails between counsel were attached to the motions, and the judge scowled at the defense here, not plaintiff’s counsel. No judge likes attorneys who ignore orders, stipulated or otherwise, and are uncooperative to boot. “Uncooperative” is a label you should avoid having a judge pin on you, especially in the world of e-discovery. Better to be an angel for discovery and save the devilish details for motions and trial.

In Webasto Thermo & Comfort v. BesTop, Inc., Judge Whalen struck down the proposed keywords without expert input. Instead, Judge Whalen based his order on some incomplete metrics, namely the number of hits produced by the keywords that the defense dreamed up. At least Judge Whalen did not go further and order the use of specific keywords, as Judge Fashing did in United States v. New Mexico State University. Still, I wish he had not only ordered the parties to cooperate, but also ordered them to bring in some experts to help with the search tasks. You cannot just talk your way into good searches. No matter what the level of cooperation, you still have to know what you are doing.

If I had been handling this for the plaintiff, I would have gotten my hands much dirtier in the digital mud, meaning I would have done far more than just look at the first one-hundred of 614,000 documents. That was a poor quality control test, but obviously, here at least, it was better than nothing. I would have done a sample review of the hits on each keyword and evaluated the precision of each. Some might have been okay as is, although probably not. They usually require some refinement. Sometimes it only takes a few minutes of review to determine that. Bottom line, I would have checked out the requested keywords. There were only ten here. That would take maybe three hours or so with the right software. You do not need big judgmental sampling most of the time to see the effectiveness, or not, of keywords.
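Here is a minimal sketch of that per-keyword precision test. Both inputs are assumptions of mine, not anything from the case record: hits_by_term maps each proposed keyword to the IDs of the documents it hits, and is_relevant stands in for an attorney’s relevance call on each sampled document.

```python
import random

# Per-keyword precision testing: sample a handful of each term's hits,
# review them, and estimate the share that are relevant.

def estimate_precision(hits_by_term: dict[str, list[str]],
                       is_relevant,
                       sample_size: int = 25) -> dict[str, float]:
    rng = random.Random(1)
    precision = {}
    for term, hits in hits_by_term.items():
        if not hits:
            precision[term] = 0.0  # a term with no hits has nothing to review
            continue
        sample = rng.sample(hits, min(sample_size, len(hits)))
        relevant = sum(1 for doc_id in sample if is_relevant(doc_id))
        precision[term] = relevant / len(sample)
    return precision
```

A term like “top” at a convertible-top maker would almost certainly score near zero, flagging it for refinement long before any motion practice is needed.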

The next step is to come up with, and test, a number of keyword refinements based on what you see in the data. Learn from the data. Test and improve various keyword combinations. That can take a few more hours. Some may think this is too much work, but it takes far less time than preparing motions and memos and attending hearings. And anyway, you need to find the relevant evidence for your case.
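The retest step can be just as simple. In this sketch, run_search(query) is a hypothetical stand-in for whatever search call your review platform exposes, returning the set of document IDs that hit the query:

```python
# Compare a raw keyword against a narrowed, conjunctive refinement of it.

def compare_refinement(term: str, refined: str, run_search) -> None:
    raw = run_search(term)
    narrowed = run_search(refined)
    ratio = len(narrowed) / max(len(raw), 1)
    print(f"{term!r}: {len(raw):,} hits -> {refined!r}: {len(narrowed):,} hits "
          f"({ratio:.1%} of the raw hit count)")

# e.g., compare_refinement("top", "top AND (design OR drawing)", run_search)
```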

After the tests, you share what you learned with opposing counsel and the judge, assuming they want to know. In my experience, most could not care less about your methods, so long as your production includes the information they were looking for. You do not have to disclose your every little step, but you should at least share, again if asked, information about “hit results.” This disclosure alone can go a long way, as this opinion demonstrates. Plaintiff’s counsel offered only a modest amount of data about the ineffectiveness of the defendant’s proposed search terms, but that was enough to persuade the judge to enter a protective order.

To summarize, after evaluating the proposed search terms I would have improved on them. Using the improved searches, I would have begun the attorney review and production. I would have shared the search information, cooperated as required by stipulation, case law and rules, and gone ahead with my multimodal searches. I would have used keywords and the many other wonderful kinds of searches that the legal technology industry has come up with in the 25 years or so since keyword search was new and shiny.

Conclusion

The stipulation the parties used in Webasto could have been written at the turn of the century. Now it seems a little quaint, but alas, it suits most inexperienced lawyers today. Anyway, talking about and using keywords is a good way to start a legal search. I sometimes call that Relevancy Dialogues or ESI Communications. Try out some keywords, refine them, and use them to guide your review, but do not stop there. Try other types of search too. Multimodal. Harness the power of the latest technology, namely AI-enhanced search (predictive coding). Use statistics and random sampling too, to better understand data prevalence and overall search effectiveness.
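For that last statistics step, here is a minimal sketch of a prevalence estimate from a random sample, with invented sample counts and a simple 95% normal-approximation confidence interval:

```python
import math

# Estimate prevalence (the share of relevant documents in a collection)
# from a random sample, with a 95% normal-approximation confidence interval.

def prevalence(relevant_in_sample: int, sample_size: int):
    p = relevant_in_sample / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)  # 95% half-width
    return p, max(0.0, p - margin), min(1.0, p + margin)

p, low, high = prevalence(15, 1500)  # e.g., 15 relevant in a 1,500-doc sample
print(f"estimated prevalence {p:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
# estimated prevalence 1.0%, 95% CI [0.5%, 1.5%]
```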

If you do not know how to do legal search, and I estimate that 98% of lawyers today do not, then hire an expert. (Or take the time to learn; see, e.g., TARcourse.com.) Your vendor probably has a couple of search experts. There may also be a lawyer in town with this expertise. Now there are even a few specialty law firms that offer these services nationwide. It is a waste of time to reinvent the wheel. Plus, when you are not competent in a legal task, it is an ethical dictate under Rule 1.1 (Competence) to associate with counsel who is.

Regarding the vendor experts, remember that even though they may be lawyers, they can only go so far. They can provide technical advice, but not legal advice, such as a proportionality analysis under Rule 26. That requires a practicing lawyer who specializes in e-discovery, preferably as a full-time specialty and not just something they do every now and then. If you are in a big firm, like I am, find the expert in your firm who specializes in e-discovery, like me. They will help you. If your firm does not have such an expert, better get one; either that, or get used to losing and having your clients complain.

 

