Another Judge is Asked to Settle a Keyword Squabble and He Hesitates To Go Where Angels Fear To Tread: Only Tells the Parties What Keywords NOT To Use

July 15, 2018

In this blog we discuss yet another case where the parties are bickering over keywords and the judge was asked to intervene. Webasto Thermo & Comfort v. BesTop, Inc., No. 16-13456, 2018 WL 3198544 (E.D. Mich. June 29, 2018). The opinion was written in a patent case in Detroit by Executive Magistrate Judge R. Steven Whalen. He looked at the proposed keywords and found them wanting, but wisely refused to go further and tell the parties what keywords to use instead. Well done, Judge Whalen!

This case is similar to the one discussed in my last blog, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use, where Magistrate Judge Laura Fashing in Albuquerque was asked to resolve a keyword dispute in United States v. New Mexico State University, No. 1:16-cv-00911-JAP-LF, 2017 WL 4386358 (D.N.M. Sept. 29, 2017). Judge Fashing not only found the proposed keywords inadequate, but came up with her own replacement keywords and did so without any expert input.

In my prior blog on Judge Fashing’s decision I discussed Judge John Facciola’s landmark legal search opinion in United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008) and other cases that follow it. In O’Keefe Judge Facciola held that because keyword search questions are complex, technical and scientific, a judge should not decide such issues without the help of expert testimony. That is the context for his famous line:

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence.

In this week’s blog I consider the opinion by Judge Whalen in Webasto Thermo & Comfort v. BesTop, Inc., No. 16-13456, 2018 WL 3198544 (E.D. Mich. June 29, 2018), where he told the parties what keywords not to use, again without expert input, but stopped there. They make interesting counterpoint cases. It is also interesting to observe that in all three cases, O’Keefe, New Mexico State University and Webasto, the judges end on the same note: the parties are ordered to cooperate. Ah, if only it were so easy.

Stipulated Order Governing ESI Production

In Webasto Thermo & Comfort v. BesTop, Inc., the parties cooperated at the beginning of the case. They agreed to the entry of a stipulated ESI Order governing ESI production. The stipulation included a cooperation paragraph in which the parties pledged to try to resolve all ESI issues without judicial intervention. Apparently, the parties’ cooperation did not go much beyond the stipulated order. Cooperation broke down and the plaintiff filed a barrage of motions to avoid having to do document review, including an Emergency Motion to Stay ESI Discovery. The plaintiff alleged that the defendant violated the ESI stipulation by “propounding overly broad search terms in its request for ESI.” Oh, how terrible. Red Alert!

The plaintiff further accused defense counsel of “propounding prima facie inappropriate search criteria, and refusal to work in good faith to target its search terms to specific issues in this case.” Again, the outrageous behavior reminds me of the Romulans. I can see why plaintiff’s counsel called an emergency and asked for costs and relief from having to produce any ESI at all. That kind of approach rarely goes over well with any judge, but here it worked. That’s because the keywords the defense wanted plaintiff to use in its search for relevant ESI were, in fact, very bad.

Paragraph 1.3(3) of the ESI Order establishes a protocol designed to constrain e-discovery, including a limitation to eight custodians with no more than ten keyword search terms for each. It goes on to provide the following very interesting provision:

The search terms shall be narrowly tailored to particular issues. Indiscriminate terms, such as the producing company’s name or its product name, are inappropriate unless combined with narrowing search criteria that significantly reduce the risk of overproduction. A conjunctive combination of multiple words or phrases (e.g. ‘computer’ and ‘system’) narrows the search and shall count as a single term. A disjunctive combination of multiple words or phrases (e.g. ‘computer’ or ‘system’) broadens the search, and thus each word or phrase shall count as a separate search term unless they are variants of the same word. Use of narrowing search criteria (e.g. ‘and,’ ‘but not,’ ‘w/x’) is encouraged to limit the production and shall be considered when determining whether to shift costs for disproportionate discovery.

Remember, this is negotiated wording that the parties agreed to, including the bit about product names and “conjunctive combination.”

Defendant’s Keyword Demands

The keywords proposed by defense counsel for plaintiff’s search included: “Jeep,” “drawing” and its abbreviation “dwg,” “top,” “convertible,” “fabric,” “fold,” “sale or sales,” and the plaintiff’s product names, “Swap Top” and “Throwback.”

Plaintiff’s counsel advised Judge Whalen that the ten terms created the following results with five custodians (no word on the other three):

  • Joseph Lupo: 30 gigabytes, 118,336 documents.
  • Ryan Evans: 13 gigabytes, 44,373 documents.
  • Tyler Ruby: 10 gigabytes, 44,460 documents.
  • Crystal Muglia: 245,019 documents.
  • Mark Denny: 162,067 documents.

In Footnote Three Judge Whalen adds, without citation to authority or the record, that:

One gigabyte would comprise approximately 678,000 pages of text. 30 gigabytes would represent approximately 21,696,000 pages of text.

Note that Catalyst did a study in 2014 of the average number of files in a gigabyte. They found that the average was 2,500 files per gigabyte. They suggest using 3,000 files per gigabyte for cost estimates, just to be safe. So I have to wonder where Judge Whalen got this figure of 678,000 pages of text per gigabyte.
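As a rough cross-check, here is the same arithmetic run against the per-custodian figures reported below. This is just an illustration of the math, not anything from the record; note too that the footnote speaks of pages while the plaintiff reported documents, which can run many pages each.

```python
# Back-of-the-envelope density check using the per-custodian figures reported
# by plaintiff's counsel. Illustration only, not the court's calculations.
custodians = {
    "Lupo":  (118_336, 30),   # (documents, gigabytes)
    "Evans": (44_373, 13),
    "Ruby":  (44_460, 10),
}

for name, (docs, gigabytes) in custodians.items():
    print(f"{name}: {docs / gigabytes:,.0f} documents per gigabyte")

# Lupo: 3,945 documents per gigabyte
# Evans: 3,413 documents per gigabyte
# Ruby: 4,446 documents per gigabyte
#
# Those densities sit close to Catalyst's 2,500-3,000 files per gigabyte.
# To reach 678,000 pages per gigabyte, each of those documents would have to
# average roughly 170 pages or more, which is implausible for email.
```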

Plaintiff’s counsel added that:

Just a subset of the email discovery requests propounded by BesTop have returned more than 614,000 documents, comprising potentially millions of individual pages for production.

Plaintiff’s counsel also filed an affidavit in which he swore that he reviewed the first 100 consecutively numbered documents to evaluate the burden. Very impressive effort. Not! He looked at the first one-hundred documents that happened to be on top of a 614,000-document pile. He also swore that none of these first one-hundred were relevant. (One wonders how many of them were empty PST container files. They are often the “documents” found first in consecutive numbering of an email collection. A better sample would have been the 100 docs with the most keyword hits.)

Judge Whalen Agrees with Plaintiff on Keywords

Judge Whalen agreed with plaintiff and held that:

The majority of defendant’s search terms are overly broad, and in some cases violate the ESI Order on its face. For example, the terms “throwback” and “swap top” refer to Webasto’s product names, which are specifically excluded under 1.3(3) of the ESI Order.

The overbreadth of other terms is obvious, especially in relation to a company that manufactures and sells convertible tops: “top,” “convertible,” “fabric,” “fold,” “sale or sales.” Using “dwg” as an alternate designation for “drawing” (which is itself a rather broad term) would call into play files with common file extension .dwg.

Apart from the obviously impermissible breadth of BesTop’s search terms, their overbreadth is borne out by Mr. Carnevale’s declarations, which detail a return of multiple gigabytes of ESI potentially comprising tens of millions of pages of documents, based on only a partial production. In addition, the search of just the first 100 records produced using BesTop’s search terms revealed that none were related to the issues in this lawsuit. Contrary to BesTop’s contention that Webasto’s claim of prejudice is conclusory, I find that Webasto has sufficiently “articulate[d] specific facts showing clearly defined and serious injury resulting from the discovery sought ….” Nix, 11 Fed. App’x at 500.

Thus, BesTop’s reliance on City of Seattle v. Professional Basketball Club, LLC, 2008 WL 539809 (W.D. Wash. 2008), is inapposite. In City of Seattle, the defendant offered no facts to support its assertion that discovery would be overly burdensome, instead “merely state[ing] that producing such emails ‘would increase the email universe exponentially[.]’” Id. at *3. In our case, Webasto has proffered hard numbers as to the staggering amount of ESI returned based on BesTop’s search requests. Moreover, while disapproving of conclusory claims of burden, the Court in City of Seattle recognized that the overbreadth of some search terms would be apparent on their face:

“‘[U]nless it is obvious from the wording of the request itself that it is overbroad, vague, ambiguous or unduly burdensome, an objection simply stating so is not sufficiently specific.’” Id., quoting Boeing Co. v. Agric. Ins. Co., 2007 U.S. Dist. LEXIS 90957, *8 (W.D.Wash. Dec. 11, 2007).

As discussed above, many of BesTop’s terms are indeed overly general on their face. And again, propounding Webasto’s product names (e.g., “throwback” and “swap top”) violates the express language of the ESI Order.

Defense Counsel Did Not Cooperate

Judge Whalen then went on to address the apparent lack of cooperation by defendant.

Adversarial discovery practice, particularly in the context of ESI, is anathema to the principles underlying the Federal Rules, particularly Fed.R.Civ.P. 1, which directs that the Rules “be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.” In this regard, the Sedona Conference Cooperation Proclamation states:

“Indeed, all stakeholders in the system–judges, lawyers, clients, and the general public–have an interest in establishing a culture of cooperation in the discovery process. Over-contentious discovery is a cost that has outstripped any advantage in the face of ESI and the data deluge. It is not in anyone’s interest to waste resources on unnecessary disputes, and the legal system is strained by ‘gamesmanship’ or ‘hiding the ball,’ to no practical effect.”

The stipulated ESI Order, which controls electronic discovery in this case, is an important step in the right direction, but whether as the result of adversarial overreach or insufficient effort, BesTop’s proposed search terms fall short of what is required under that Order.

Judge Whalen’s Ruling

Judge Whalen concluded his short Order with the following ruling:

For these reasons, Webasto’s motion for protective order [Doc. #78] is GRANTED as follows:

Counsel for the parties will meet and confer in a good-faith effort to focus and narrow BesTop’s search terms to reasonably limit Webasto’s production of ESI to emails relevant (within the meaning of Rule 26) to the issues in this case, and to exclude ESI that would have no relationship to this case.

Following this conference, and within 14 days of the date of this Order, BesTop will submit an amended discovery request with the narrowed search terms.  …

Because BesTop will have the opportunity to reformulate its discovery request to conform to the ESI Order, Webasto’s request for cost-shifting is DENIED at this time. However, the Court may reconsider the issue of cost-shifting if BesTop does not reasonably narrow its requests.

Difficult to Cooperate on Legal Search Without the Help of Experts

The defense in Webasto violated their own stipulation by using a party’s product names without further Boolean limiters, such as “product name AND another term.” Then defense counsel added insult to injury by coming across as uncooperative. I don’t know if they alone were uncooperative, or if it was a two-way street, but appearances are everything. The emails between counsel were attached to the motions, and the judge scowled at the defense here, not plaintiff’s counsel. No judge likes attorneys who ignore orders, stipulated or otherwise, and are uncooperative to boot. “Uncooperative” is a label you want to avoid having a judge apply to you, especially in the world of e-discovery. Better to be an angel for discovery and save the devilish details for motions and trial.

In Webasto Thermo & Comfort v. BesTop, Inc., Judge Whalen struck down the proposed keywords without expert input. Instead Judge Whalen based his order on some incomplete metrics, namely the number of hits produced by the keywords that the defense dreamed up. At least Judge Whalen did not go further and order the use of specific keywords, as Judge Fashing did in United States v. New Mexico State University. Still, I wish he had not only ordered the parties to cooperate, but also ordered them to bring in some experts to help with the search tasks. You cannot just talk your way into good searches. No matter the level of cooperation, you still have to know what you are doing.

If I had been handling this for the plaintiff, I would have gotten my hands much dirtier in the digital mud, meaning I would have done far more than just look at the first one-hundred of 614,000 documents. That was a poor quality control test, though here at least it was better than nothing. I would have done a sample review of the hits on each keyword and evaluated the precision of each. Some might have been okay as is, although probably not. They usually require some refinement. Sometimes it only takes a few minutes of review to determine that. Bottom line, I would have tested the requested keywords. There were only ten here. That would take maybe three hours or so with the right software. You do not need big judgmental samples most of the time to see the effectiveness, or not, of keywords.
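For readers who want to picture the mechanics, here is a minimal sketch of that kind of per-keyword precision check. The data structures and review calls are hypothetical placeholders, not any particular platform’s API; the point is simply to pull a random sample of each term’s hits and record how many are actually relevant.

```python
import random

def estimate_precision(hits_by_term, is_relevant, sample_size=50):
    """Randomly sample each keyword's hits and estimate its precision.

    hits_by_term: dict mapping each search term to a list of document IDs
                  returned by that term (hypothetical export from the platform).
    is_relevant:  callable applying the attorney's relevance call to a doc ID.
    """
    precision = {}
    for term, doc_ids in hits_by_term.items():
        sample = random.sample(doc_ids, min(sample_size, len(doc_ids)))
        relevant = sum(1 for doc_id in sample if is_relevant(doc_id))
        precision[term] = relevant / len(sample) if sample else 0.0
    return precision

# A term like "top" that comes back 2% relevant in its sample plainly needs a
# conjunctive limiter or should be dropped; a term at 60% may be fine as is.
```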

The next step is to come up with, and test, a number of keyword refinements based on what you see in the data. Learn from the data. Test and improve various keyword combinations. That can take a few more hours. Some may think this is too much work, but it takes far less time than preparing motions and memos and attending hearings. And anyway, you need to find the relevant evidence for your case.
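To make the refinement step concrete, the comparison might look something like the sketch below. The hit counts are invented for illustration; the idea is simply to line up a raw term against versions narrowed with the kind of conjunctive and proximity limiters the parties’ own ESI Order encourages, then sample the survivors for precision.

```python
# Hypothetical hit counts, for illustration only.
candidate_queries = {
    '"drawing"':                  88_412,  # raw term, plainly overbroad
    '"drawing" AND "Swap Top"':    1_930,  # conjunctive narrowing
    '"drawing" w/25 "Swap Top"':     640,  # proximity narrowing
}

for query, hit_count in candidate_queries.items():
    print(f"{query:30s} {hit_count:>8,d} hits")

# A lower hit count alone proves nothing; each narrowed query still has to be
# sampled for precision, and checked against known relevant documents so that
# recall is not quietly destroyed by the narrowing.
```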

After the tests, you share what you learned with opposing counsel and the judge, assuming they want to know. In my experience, most could not care less about your methods, so long as your production includes the information they were looking for. You do not have to disclose your every little step, but you should at least share, again if asked, information about “hit results.” This disclosure alone can go a long way, as this opinion demonstrates. Plaintiff’s counsel obtained very little data about the ineffectiveness of the defendant’s proposed search terms, but that was enough to persuade the judge to enter a protective order.

To summarize, after evaluating the proposed search terms I would have improved on them. Using the improved searches I would have begun the attorney review and production. I would have shared the search information, cooperated as required by stipulation, case-law and rules, and gone ahead with my multimodal searches. I would use keywords and the many other wonderful kinds of searches that the Legal Technology industry has come up with in the last 25 years or so since keyword search was new and shiny.

Conclusion

The stipulation the parties used in Webasto could have been used at the turn of the century. Now it seems a little quaint, but alas, it suits most inexperienced lawyers today. Anyway, talking about and using keywords is a good way to start a legal search. I sometimes call that Relevancy Dialogues or ESI Communications. Try out some keywords, refine them and use them to guide your review, but do not stop there. Try other types of search too. Multimodal. Harness the power of the latest technology, namely AI-enhanced search (Predictive Coding). Use statistics and random sampling too, to better understand data prevalence and overall search effectiveness.

If you do not know how to do legal search, and I estimate that 98% of lawyers today do not, then hire an expert. (Or take the time to learn; see, e.g., TARcourse.com.) Your vendor probably has a couple of search experts. There may also be a lawyer in town with this expertise. Now there are even a few specialty law firms that offer these services nationwide. It is a waste of time to reinvent the wheel, plus it is an ethical dictate under Rule 1.1 – Competence, to associate with competent counsel on a legal task you are not competent to handle alone.

Regarding the vendor experts, remember that even though they may be lawyers, they can only go so far. They can only provide technical advice, not legal advice, such as proportionality analysis under Rule 26. That requires a practicing lawyer who specializes in e-discovery, preferably as a full-time specialty and not just something they do every now and then. If you are in a big firm, like I am, find the expert in your firm who specializes in e-discovery, like me. They will help you. If your firm does not have such an expert, better get one; either that or get used to losing and having your clients complain.

 


Project Cost Estimation Is Key To Opposing ESI Discovery as Disproportionately Burdensome Under Rule 26(b)(1)

May 6, 2018

If you are opposing ESI discovery as unduly burdensome under Rule 26(b)(1), then you MUST provide evidence of the economic burden of the requested review. You cannot just say it is over-burdensome. Even if it seems obvious, you must provide some metrics, some data, some hard evidence to back that up. That requires the ability to estimate the costs and burdens involved in a document review. In the old days, the nineties, almost every litigator could estimate the cost of a paper review. It was not a tough skill. But today, where large volumes of ESI are common, everything is much more complicated. Today you need an expert to accurately and reliably estimate the costs of various types of ESI reviews.

Requiring proof of burden is nothing new to the law, yet most lawyers today need outside help to do it, especially in large ESI projects. For example, consider the defense team of lawyers representing the City of Chicago and other defendants in a major civil rights case with lots of press, Mann v. City of Chicago, Nos. 15 CV 9197, 13 CV 4531 (N.D. Ill. Sept. 8, 2017); Chicago sued for ‘unconstitutional and torturous’ Homan Square police abuse (The Guardian, 10/19/15). They did not even attempt to estimate the costs of the review they opposed. They also failed or refused to hire an expert who could do that for them. Since they had no evidence, not even an estimate, their argument under Rule 26(b)(1) failed miserably.

Mann v. City of Chicago: Case Background

The background of the case is interesting, but I won’t go into the factual details here; just enough to set up the discovery dispute. Plaintiffs in later consolidated cases sued the City of Chicago and the Chicago police alleging that they had been wrongfully arrested, detained and abused at “off the books” detention centers without access to an attorney. Aside from the salacious allegations, it does not look like the plaintiffs have a strong case. It looks like a fishing expedition to me, in more ways than one, as I will explain. With this background, it seems to me that if the defendants had made any real effort to prove burden here, they could have prevailed on this discovery dispute.

The parties agreed on the majority of custodians whose ESI would be searched, but, as usual, the plaintiffs wanted more custodians searched, including the mayor himself, Rahm Emanuel. The defendants did not want to include the mayor’s email in the review. They argued, without any real facts showing burden, that the Mayor’s email would be irrelevant (a dubious argument that seemed to be a throw-away) and too burdensome (their real argument).

Here is how Magistrate Judge Mary M. Rowland summarized the custodian dispute in her opinion:

Plaintiffs argue Mayor Emanuel and ten members of his senior staff, including current and former chiefs of staff and communications directors are relevant to Plaintiffs’ Monell claim. (Id. at 5).[2] The City responds that Plaintiffs’ request is burdensome, and that Plaintiffs have failed to provide any grounds to believe that the proposed custodians were involved with CPD’s policies and practices at Homan Square. (Dkt. 74 at 1, 6). The City proposes instead that it search the two members of the Mayor’s staff responsible for liasoning with the CPD and leave “the door open for additional custodians” depending on the results of that search. (Id. at 2, 4).[3]

Another Silly “Go Fish” Case

As further background, this is one of those negotiated keywords Go Fish cases where the attorneys involved all thought they had the magical powers to divine what words were used in relevant ESI. The list is not shared, but I bet it included wondrous words like “torture” and “off the books,” plus every plaintiff’s favorite “claim.”

The parties agreed that the defendants would only review for relevant evidence the ESI of the custodians that happened to have one or more of the keyword incantations they dreamed up. Under this still all too common practice, agreed to by attorneys none of whom appear to have any e-discovery search expertise, the majority of documents in the custody of the defense custodians would never be reviewed. They would not be reviewed because they did not happen to have a “magic word” in them. This kind of untested, keyword-filtering agreement is irrational, archaic and not a best practice in any but small cases, but that is what the attorneys for both sides agreed to. They were convinced they could guess what words were used by police, city administrators and politicians in any relevant document. It is a common delusion encouraged by Google’s search of websites.

When will the legal profession grow up and stop playing Go Fish when it comes to a search for relevant legal evidence? I have been writing about this for years. Losey, R., Adventures in Electronic Discovery (West 2011); Child’s Game of ‘Go Fish’ is a Poor Model for e-Discovery Search. Guessing keywords does not work. It almost always fails in both precision and recall. The keyword-hit docs are usually filled with junk, and relevant docs often use unexpected language, not to mention abbreviations and spelling errors. If you do not at least test proposed keywords on a sample custodian, then your error rate will multiply. I saw a review recently where the precision rate on keywords was only six percent, and that was with superficial feedback, i.e., unskilled testing. You never want to waste so much attorney time, even if you are reviewing at low rates. Reviewing ninety-four irrelevant docs to find six relevant ones is an inefficient, expensive approach. We try to improve precision without a significant loss of recall.

When I first wrote about Go Fish and keywords back in 2010 most everyone agreed with me, even if they disagreed on the significance, the meaning and what you should do about it. That started the proportionality debate in legal search. E-discovery search expert Judges Peck and Scheindlin joined in the chorus of criticism of negotiated keywords. National Day Laborer Organizing Network v. US Immigration and Customs Enforcement Agency, 877 F. Supp. 2d 87 (S.D.N.Y. 2012) (J. Scheindlin) (“As Judge Andrew Peck — one of this Court’s experts in e-discovery — recently put it: “In too many cases, however, the way lawyers choose keywords is the equivalent of the child’s game of `Go Fish’ … keyword searches usually are not very effective.” FN 113“); Losey, R., Poor Plaintiff’s Counsel, Can’t Even Find a CAR, Much Less Drive One (9/1/13). Don’t you love the quote within a quote? A rare gem in legal writing.

Judge Rowland’s Ruling

I have previously written about the author of the Mann v. City of Chicago opinion, Judge Mary Rowland. Spoliated Schmalz: New Sanctions Case in Chicago That Passes-Over a Mandatory Adverse Inference. She is a rising star in the e-discovery world. Judge Rowland found that the information sought from the additional custodians would be relevant. This disposed of the defendants’ first and weakest argument. Judge Rowland then held that the defendants did not meet their burden of proof, “failing to provide even an estimate,” and for that reason granted, in part, Plaintiffs’ motion to compel, including their request to add the Mayor. Judge Rowland reviewed all six of the proportionality factors under Rule 26(b)(1), including the importance of the issues at stake and the plaintiffs’ lack of access to the requested information.

On the relevance issue Judge Rowland held that, in addition to the agreed-upon staff liaisons, the Mayor and his “upper level staff” might also have relevant information in their email. As to the burden argument, Judge Rowland held that the City did not “offer any specifics or even a rough estimate about the burden.” Judge Rowland correctly rejected the City’s argument that they could not provide any such information because “it is impossible to determine how many emails there may be ‘unless the City actually runs the searches and collects the material.’” Instead, the court held that the defendants should have at least provided “an estimate of the burden.” Smart Judge. Here are her words:

The City argues that it will be “burdened with the time and expense of searching the email boxes of nine (9) additional custodians.” (Dkt. 74 at 5). The City does not offer any specifics or even a rough estimate about the burden. See Kleen Prods. LLC 2012 U.S. Dist. LEXIS 139632, at *48 (“[A] party must articulate and provide evidence of its burden. While a discovery request can be denied if the `burden or expense of the proposed discovery outweighs its likely benefit,’ Fed. R. Civ. P. 26(b)(2)(C)(iii), a party objecting to discovery must specifically demonstrate how the request is burdensome.”) (internal citations and quotations omitted).

As the Seventh Circuit stated in Heraeus Kulzer, GmbH, v. Biomet, Inc., 633 F.3d 591, 598 (7th Cir. 2011):

[The party] could have given the district court an estimate of the number of documents that it would be required to provide Heraeus in order to comply with the request, the number of hours of work by lawyers and paralegals required, and the expense. A specific showing of burden is commonly required by district judges faced with objections to the scope of discovery . . . Rough estimates would have sufficed; none, rough or polished, was offered.

The City argues in its sur-reply that it is impossible to determine how many emails there may be “unless the City actually runs the searches and collects the material.” (Dkt. 78-1 at 4). Still, the City should have provided an estimate of the burden. The Court is not convinced by the City’s argument about the burden.

Judge Rowland also held that the City should have addressed the “other Rule 26 factors—the importance of the issues and of the discovery in resolving the issues, and the parties’ relative access to information and their resources.” She noted that these other factors: “weigh[ed] in favor of allowing discovery of more than just the two custodians proposed by the City.”  However, the court declined to compel the search of four proposed custodians based on their “short tenure” or the “time during which the person held the position,” concluding the requested searches were “not proportional to the needs of the case.”

Judge Rowland’s opinion notes with seeming surprise the failure of the City of Chicago to provide any argument at all on the five non-economic factors in Rule 26(b)(1). I do not fault them for that. Their arguments on these points were necessarily weak in this type of case, but a conciliatory gesture, a polite acknowledgement showing awareness, might have helped sweeten the vinegar. As it is, they came across as oblivious to the full requirements of the Rule.

What Chicago Should Have Done

What additional information should the defendants have provided to oppose the search and review of the additional nine custodians, including the Mayor’s email? Let’s start with the obvious. They should have shared the total document count and GB size of the nine custodians, and they should have broken that information down on a per-custodian basis. Then they should have estimated the costs to review that many emails and attachments.
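As a concrete illustration of the kind of estimate that was missing, a first-cut calculation needs only a handful of inputs. Every number below is an assumption I am supplying for the example; the real inputs would come from the City’s IT department and its review team.

```python
# First-cut review cost estimate for the nine additional custodians.
# All inputs are assumed figures for illustration, not numbers from the case.

docs_per_custodian = 60_000   # assumed average document count after keyword filtering
custodians = 9                # the nine additional custodians requested
docs_per_hour = 55            # assumed first-pass attorney review rate
hourly_rate = 65              # assumed blended hourly rate for contract reviewers

total_docs = docs_per_custodian * custodians
review_hours = total_docs / docs_per_hour
estimated_cost = review_hours * hourly_rate

print(f"{total_docs:,} documents, ~{review_hours:,.0f} hours, ~${estimated_cost:,.0f}")
# 540,000 documents, ~9,818 hours, ~$638,182
```

Even a rough table like that, broken out per custodian, is the sort of specific showing of burden the Seventh Circuit said would have sufficed.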

The file count information should have been easy to ascertain from the City’s IT department. They know the PST sizes and can also determine, or at least provide a good estimate of, the total document count. The problem they had with this obvious approach was that they wanted a keyword filter. They did not want to search all documents of the custodians, only the ones with keyword hits. Still, that just made the process slightly more difficult, not impossible.

Yes, it is true, as the defendants alleged, that to ascertain this supporting information they would have to run the searches and collect the material. So what? Their vendor or Chicago’s IT department should have helped them with that. It is not that difficult or expensive to do. No expensive lawyer time is required. It is just a computer process. Any computer technician could do it. Certainly any e-discovery vendor. The City could easily have gone ahead, done the silly keyword filtering and provided an actual file count. This would have given the City some hard facts to support its burden argument. Almost certainly the expense would have been less than this motion practice.

Alternatively, the City could have at least estimated the file count and other burden metrics. They could have made reasonable estimates based on their document review experience in the case so far. They had already reviewed uncontested custodians under their Go Fish structure, so they could have made projections based on past results. Estimates made by projections like this would probably have been sufficient in this case and were certainly better than the track they chose: providing no information at all.

Another alternative, the one that would have produced the most persuasive evidence, would be to load the filtered ESI of at least a sample of the nine custodians, including the Mayor. Then begin the review, say for a couple of days, and see what that costs. Then project those costs for the rest of the review and rest of the custodians. By this gold standard approach you would not only have the metrics from the data itself — the file counts, page counts, GB size — but also metrics of the document review, what it costs.
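A sketch of that projection, again with invented numbers just to show the shape of the calculation:

```python
# Project total review cost from a short sample review (all figures hypothetical).
sample_days = 2
sample_docs_reviewed = 4_200    # documents actually reviewed in the two-day sample
sample_cost = 31_000            # actual fees and vendor charges for those two days
remaining_docs = 535_800        # documents left after the sample

cost_per_doc = sample_cost / sample_docs_reviewed
projected_total_cost = sample_cost + remaining_docs * cost_per_doc
projected_days = sample_days * (1 + remaining_docs / sample_docs_reviewed)

print(f"~${projected_total_cost:,.0f} total, ~{projected_days:,.0f} reviewer-days")
# ~$3,985,714 total, ~257 reviewer-days
```

Because the projection rests on what the review has actually cost to date, it is far harder to dismiss as conclusory than a purely hypothetical estimate.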

You would need to do this on the Mayor’s email separately and argue this burden separately. The Mayor’s email would likely be much more expensive to review than any of the other custodians. It would take attorneys longer to review his documents. There would be more privileged materials to find and log and there would be more redactions. It is like reviewing a CEO’s email. If the attorneys for the City had at least begun some review of Emanuel’s email, they would have been able to provide extensive evidence on the cost and time burden to complete the review.

I suspect the Mayor was the real target here and the other eight custodians were of much less importance. The defense should have gauged their response accordingly. Instead, they did little or nothing to support their burdensome argument, even with the Mayor’s sensitive government email account.

We have a chance to learn from Chicago’s mistake. Always, at the very least, provide some kind of an estimate of the burden. The estimate should include as much information as possible, including time and costs. These estimates can, with time and knowledge, be quite accurate and should be used to set budgets, along with general historical knowledge of costs and expenses. The biggest problem now is a shortage of experts on how to properly estimate document review projects, specifically large ESI-only projects. I suggest you consult with such a cost expert anytime you are faced with disproportionate ESI review demands. You should do so before you make final decisions or reply in writing.

Conclusion

Mann v. City of Chicago is one of those cases where we can learn from the mistakes of others. At least provide an estimate of costs in every dispute under Rule 26(b)(1). Learn to estimate the costs of document reviews, or hire an expert who can do that for you, one who can provide testimony. Start with file counts and go from there. Always have some metrics to back up your argument. Learn about your data. Learn what it will likely cost to review that data. It will probably be a range. The best way to estimate is by sampling. With sampling you at least start the document review and estimate total costs by projecting what it has actually cost to date. There are fewer speculative factors that way.

If you agree to part of the review requested, for instance to three out of ten custodians requested, then do that review and measure its costs. That creates the gold standard for metrics of burden under Rule 26(b)(1) and is, after all, required in any objections under Rule 34(b)(2)(B)&(C). See: Judge Peck Orders All Lawyers in NY to Follow the Rules when Objecting to Requests for Production, or Else ….

For more on cost burden estimation listen to my Ed-Talk on the subject, Proportional Document Review under the New Rules and the Art of Cost Estimation.

 



WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0

December 3, 2017

Many lawyers and technologists like predictive coding and recommend it to their colleagues. They have good reasons. It has worked for them. It has allowed them to do e-discovery reviews in an effective, cost-efficient manner, especially the big projects. That is true for me too, but that is not why I love predictive coding. My feelings come from the excitement, fun, and amazement that often arise from seeing it in action, first hand. I love watching the predictive coding features in my software find documents that I could never have found on my own. I love the way the AI in the software helps me to do the impossible. I really love how it makes me far smarter and more skilled than I really am.

I have been getting those kinds of positive feelings consistently by using the latest Predictive Coding 4.0 methodology and KrolLDiscovery’s latest eDiscovery.com Review software (“EDR”). So too have my e-Discovery Team members who helped me to participate in TREC 2015 and 2016 (the great science experiment for the latest text search techniques sponsored by the National Institute of Standards and Technology). During our grueling forty-five days of experiments in 2015, and again for sixty days in 2016, we came to admire the intelligence of the new EDR software so much that we decided to personalize the AI as a robot. We named him Mr. EDR out of respect. He even has his own website now, MrEDR.com, where he explains how he helped my e-Discovery Team in the 2015 and 2016 TREC Total Recall Track experiments.

The bottom line of this research for us was that it proved and improved our methods. Our latest version 4.0 of Predictive Coding, the Hybrid Multimodal IST Method, is the result. We have even open-sourced this method, well, most of it, and teach it in a free seventeen-class online program: TARcourse.com. Aside from testing and improving our methods, another, perhaps even more important result of TREC for us was our rediscovery that with good teamwork, and good software like Mr. EDR at your side, document review need never be boring again. The documents themselves may well be boring as hell, that’s another matter, but the search for them need not be.

How and Why Predictive Coding is Fun

Steps Four, Five and Six of the standard eight-step workflow for Predictive Coding 4.0 are where we work with the active machine-learning features of Mr. EDR. These are its predictive coding features, a type of artificial intelligence. We train the computer on our conception of relevance by showing it relevant and irrelevant documents that we have found. The software is designed to then go out and find all other relevant documents in the total dataset. One of the skills we learn is to recognize when we have taught enough and can stop the training and complete the document review. At TREC we call that the Stop decision. It is important for keeping down the costs of document review.

We use a multimodal approach to find training documents, meaning we use all of the other search features of Mr. EDR to find relevant ESI, such as keyword, similarity and concept searches. We iterate the training with sample documents, both relevant and irrelevant, until the computer starts to understand the scope of relevance we have in mind. It is a training exercise to make our AI smart, to get it to understand the basic ideas of relevance for that case. It usually takes multiple rounds of training for Mr. EDR to understand what we have in mind. But he is a fast learner, and by using the latest hybrid multimodal IST (“intelligently spaced training“) techniques, we can usually complete his training in a few days. At TREC, where we were moving fast after hours with the A-Team, we completed some of the training experiments in just a few hours.
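For readers who want to see the general idea in code, here is a minimal, generic sketch of active machine learning with relevance feedback, written with the common scikit-learn library. To be clear, this is not Mr. EDR, not KrolLDiscovery’s software, and not the Predictive Coding 4.0 method itself; it only illustrates the train, rank, review, retrain loop described above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def next_training_batch(all_texts, labels, batch_size=200):
    """One round of a generic active-learning loop.

    all_texts: list of document texts in the collection.
    labels:    dict {document index: 1 if coded relevant, 0 if irrelevant}
               reflecting the attorneys' coding calls so far.
    Returns the indexes of the highest-ranked unreviewed documents.
    """
    vectorizer = TfidfVectorizer(max_features=50_000)
    X = vectorizer.fit_transform(all_texts)

    coded = sorted(labels)
    model = LogisticRegression(max_iter=1000)
    model.fit(X[coded], [labels[i] for i in coded])

    scores = model.predict_proba(X)[:, 1]    # predicted probability of relevance
    ranked = np.argsort(-scores)             # best candidates first
    return [int(i) for i in ranked if i not in labels][:batch_size]

# The attorneys code each suggested batch, the new calls are added to `labels`,
# and the loop repeats until newly surfaced documents stop being relevant --
# roughly the "Stop decision" mentioned above.
```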

After a while Mr. EDR starts to “get it”: he starts to really understand what we are after, what we think is relevant in the case. That is when a happy shock-and-awe type moment can happen. That is when Mr. EDR’s intelligence and search abilities start to exceed our own. Yes. It happens. The pupil then starts to evolve beyond his teachers. The smart algorithms start to see patterns and find evidence invisible to us. At that point we sometimes even let him train himself, by automatically accepting his top-ranked predicted relevant documents without even looking at them. Our main role then is to determine a good range for the automatic acceptance and do some spot-checking. We are, in effect, allowing Mr. EDR to take over the review. Oh, what a feeling to then watch what happens, to see him keep finding new relevant documents and keep getting smarter and smarter by his own self-programming. That is the special AI-high that makes it so much fun to work with Predictive Coding 4.0 and Mr. EDR.

It does not happen in every project, but with the new Predictive Coding 4.0 methods and the latest Mr. EDR, we are seeing this kind of transformation happen more and more often. It is a tipping point in the review when we see Mr. EDR go beyond us. He starts to unearth relevant documents that my team would never even have thought to look for. The relevant documents he finds are sometimes completely dissimilar to any others we found before. They do not have the same keywords, or even the same known concepts. Still, Mr. EDR sees patterns in these documents that we do not. He can find the hidden gems of relevance, even outliers and black swans, if they exist. When he starts to train himself, that is the point in the review when we think of Mr. EDR as going into superhero mode. At least, that is the way my young e-Discovery Team members like to talk about him.

By the end of many projects the algorithmic functions of Mr. EDR have attained a higher intelligence and skill level than our own (at least on the task of finding the relevant evidence in the document collection). He is always lightning fast and inexhaustible, even untrained, but by the end of his training, he becomes a search genius. Watching Mr. EDR in that kind of superhero mode is what makes Predictive Coding 4.0 a pleasure.

The Empowerment of AI Augmented Search

It is hard to describe the combination of pride and excitement you feel when Mr. EDR, your student, takes your training and then goes beyond you. More than that, the super-AI you created then empowers you to do things that would have been impossible before, absurd even. That feels pretty good too. You may not be Iron Man, or look like Robert Downey Jr., but you will be capable of remarkable feats of legal search strength.

For instance, using Mr. EDR as our Iron Man-like suits, my e-discovery A-Team of three attorneys was able to do thirty different review projects and classify 17,014,085 documents in 45 days. See the 2015 TREC experiment summary at MrEDR.com. We did these projects mostly at night and on weekends, while holding down our regular jobs. What makes this seem crazy impossible is that we accomplished it while personally reviewing only 32,916 documents. That is less than 0.2% of the total collection. That means we relied on predictive coding to do 99.8% of our review work. Incredible, but true.

Using traditional linear review methods it would have taken us 45 years to review that many documents! Instead, we did it in 45 days. Plus our recall and precision rates were insanely good. We even scored 100% precision and 100% recall in one TREC project in 2015 and two more in 2016. You read that right. Perfection. Many of our other projects attained scores in the high and mid nineties. We are not saying you will get results like that. Every project is different, and some are much more difficult than others. But we are saying that this kind of AI-enhanced review is not only fast and efficient, it is effective.
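As a quick sanity check, the arithmetic works out roughly as claimed. The linear review rate below is my own assumption for illustration; actual rates vary widely.

```python
total_docs = 17_014_085
reviewed_by_hand = 32_916
print(f"{reviewed_by_hand / total_docs:.2%} of the collection reviewed manually")  # 0.19%

docs_per_hour = 60        # assumed traditional linear review rate per attorney
attorneys = 3
hours_per_year = 2_000    # one full-time attorney-year

years = total_docs / (docs_per_hour * attorneys * hours_per_year)
print(f"~{years:.0f} years of linear review for a three-attorney team")  # ~47 years
```

At a slightly faster assumed review rate the figure lands right at the 45 years mentioned above.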

Yes, it’s pretty cool when your little AI creation does all the work for you and makes you look good. Still, no robot could do this without your training and supervision. We are a team, which is why we call it hybrid multimodal, man and machine.

Having Fun with Scientific Research at TREC 2015 and 2016

During the 2015 TREC Total Recall Track experiments my team would sometimes get totally lost on a few of the really hard Topics. We were not given legal issues to search, as we usually are. They were arcane technical hacker issues, political issues, or local news stories. Not only were we in new fields, the scope of relevance of the thirty Topics was never really explained. (We were given one- to three-word explanations in 2015; in 2016 we got a whole sentence!) We had to figure out the intended relevance during the project based on feedback from the automated TREC document adjudication system. We would have some limited understanding of relevance based on our suppositions about the initial keyword hints, and so we could begin to train Mr. EDR with that. But, in several Topics, we never had any real understanding of exactly what TREC thought was relevant.

This was a very frustrating situation at first, but, and here is the cool thing, even though we did not know, Mr. EDR knew. That’s right. He saw the TREC patterns of relevance hidden to us mere mortals. In many of the thirty Topics we would just sit back and let him do all of the driving, like a Google car. We would often just cheer him on (and each other) as the TREC systems kept saying Mr. EDR was right, the documents he selected were relevant. The truth is, during much of the 45 days of TREC we were like kids in a candy store having a great time. That is when we decided to give Mr. EDR a cape and superhero status. He never let us down. It is a great feeling to create an AI with greater intelligence than your own and then see it augment and improve your legal work. It is truly a hybrid human-machine partnership at its best.

I hope you get the opportunity to experience this for yourself someday. The TREC experiments in 2015 and 2016 on recall in predictive coding are over, but the search for truth and justice goes on in lawsuits across the country. Try it on your next document review project.

Do What You Love and Love What You Do

Mr. EDR, and other good predictive coding software like it, can augment our own abilities and make us incredibly productive. This is why I love predictive coding and would not trade it for any other legal activity I have ever done (although I have had similar highs from oral arguments that went great, or the rush that comes from winning a big case).

The excitement of predictive coding comes through clearly when Mr. EDR is fully trained and able to carry on without you. It is a kind of Kurzweilian mini-singularity event. It usually happens near the end of the project, but can happen earlier when your computer catches on to what you want and starts to find the hidden gems you missed. I suggest you give Predictive Coding 4.0 and Mr. EDR a try. To make it easier I open-sourced our latest method and created an online course, TARcourse.com. It will teach anyone our method, if they have the right software. Learn the method, get the software, and then you too can have fun with evidence search. You too can love what you do. Document review need never be boring again.

Caution

One note of caution: most e-discovery vendors, including the largest, do not have active machine learning features built into their document review software. Even the few that have active machine learning do not necessarily follow the Hybrid Multimodal IST Predictive Coding 4.0 approach that we used to attain these results. They instead rely entirely on machine-selected documents for training, or even worse, rely entirely on randomly selected documents to train the software, or have elaborate, unnecessary secret control sets.

The algorithms used by some vendors who say they have “predictive coding” or “artificial intelligence” are not very good. Scientists tell me that some are only dressed-up concept search or unsupervised document clustering. Only bona fide active machine learning algorithms create the kind of AI experience that I am talking about. Software for document review that does not have any active machine learning features may be cheap, and may be popular, but it lacks the power that I love. Without active machine learning, which is fundamentally different from mere “analytics,” it is not possible to boost your intelligence with AI. So beware of software that just says it has advanced analytics. Ask if it has “active machine learning.”

It is impossible to do the things described in this essay unless the software you are using has active machine learning features. This is clearly the way of the future. It is what makes document review enjoyable and why I love to do big projects. It turns scary into fun.

So, if you tried “predictive coding” or “advanced analytics” before, and it did not work for you, it could well be the software’s fault, not yours. Or it could be the poor method you were following. The method that we developed in Da Silva Moore, where my firm represented the defense, was a version 1.0 method. Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012). We have come a long way since then. We have eliminated unnecessary random control sets and gone to continuous training, instead of train-then-review. This is spelled out at TARcourse.com, which teaches our latest version 4.0 techniques.

The new 4.0 methods are not hard to follow. TARcourse.com puts our methods online and teaches both the theory and the practice. And the 4.0 methods certainly work, as we proved at TREC, but only if you have good software. With just a little training, and some help at first from consultants (most vendors with bona fide active machine learning features will have good ones to help), you can have the kind of success and excitement that I am talking about.

Do not give up if it does not work for you the first time, especially in a complex project. Try another vendor instead, one that may have better software and better consultants. Also, be sure that your consultants are Predictive Coding 4.0 experts, and that you follow their advice. Finally, remember that the cheapest software is almost never the best, and, in the long run, it will cost you a small fortune in wasted time and frustration.

Conclusion

Love what you do. It is a great feeling and a surefire way to job satisfaction and success. With these new predictive coding technologies it is easier than ever to love e-discovery. Try them out. Treat yourself to the AI high that comes from using smart machine learning software and fast computers. There is nothing else like it. If you switch to the 4.0 methods and software, you too can know that thrill. You can watch an advanced intelligence, which you helped create, exceed your own abilities, exceed anyone’s abilities. You can sit back and watch Mr. EDR complete your search for you. You can watch him do so in record time and with record results. It is amazing to see good software find documents that you know you would never have found on your own.

Predictive coding AI in superhero mode can be exciting to watch. Why deprive yourself of that? Who says document review has to be slow and boring? Start making the practice of law fun again.

Here is the PDF version of this article, which you may download and distribute, so long as you do not revise it or charge for it.

 

 

