Do TAR the Right Way with “Hybrid Multimodal Predictive Coding 4.0”

October 8, 2018

The term “TAR” – Technology Assisted Review – as we use it means document review enhanced by active machine learning. Active machine learning is an important tool of specialized Artificial Intelligence. It is now widely used in many industries, including Law. The method of AI-enhanced document review we developed is called Hybrid Multimodal Predictive Coding 4.0. Interestingly, reading these words in the new Sans Forgetica font will help you to remember them.

We have developed an online instructional program to teach our TAR methods and AI-infused concepts to all kinds of legal professionals. We use words, case-law, science, diagrams, math, statistics, scientific studies, test results and appeals to reason to teach the methods. To balance that out, we also make extensive use of photos and videos. We use right brain tools of all kinds, even subliminals, special fonts, hypnotic images and loads of hyperlinks. We use emotion as another teaching tool. Logic and Emotion. Sorry, Spock, but this multimodal, holistic approach is more effective with humans than the all-text, reason-only approach of Vulcan law schools.

We even try to use humor and promote student creativity with our homework assignments. Please remember, however, this is not an accredited law school class, so do not expect professorial interaction. Did we mention the TAR Course is free?

By the end of your study of the TAR Course you will know and remember exactly what Hybrid Multimodal means. You will understand the importance of using all varieties of legal search, for instance: keywords, similarity searches, concept searches and AI-driven probable relevance document ranking. That is the Multimodal part. We use all of the search tools that our KL Discovery document review software provides.

The Hybrid part refers to the partnership with technology, the reliance of the searcher on advanced algorithmic tools. It is important that Man and Machine work together, and that Man remain in charge of justice. The predictive coding algorithms and software are used to enhance the abilities of lawyers, paralegals and law techs, not to replace them.

By the end of the TAR Course you will also know what IST means: Intelligently Spaced Training. It is our specialty technique of AI training where you keep training the Machine until first-pass relevance review is completed. This is a type of Continuous Active Learning, or, as Grossman and Cormack call it, CAL. By the end of the TAR Course you should also know what a Stop Decision is. It is a critical point of the document review process. When do you stop the active machine teaching process? When is enough review enough? This involves legal proportionality issues, to be sure, but it also involves technological processes, procedures and measurements. What is good enough Recall under the circumstances with the data at hand? When should you stop the machine training?
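The Stop Decision ultimately turns on measurement. As a rough illustration only, and not our actual protocol, here is a minimal Python sketch of one common way to sanity-check Recall at a proposed stopping point: draw a random sample from the discard pile (the documents you do not plan to produce), project how many relevant documents it still hides, and compute recall from that. All numbers below are hypothetical.

```python
def estimate_recall(found_relevant: int, discard_pile_size: int,
                    sample_size: int, sample_relevant: int) -> float:
    """Rough recall estimate from a random sample of the discard pile."""
    # Elusion rate: fraction of sampled discard documents that are relevant.
    elusion = sample_relevant / sample_size
    # Project that rate over the whole discard pile.
    projected_missed = elusion * discard_pile_size
    # Recall = found / (found + missed).
    return found_relevant / (found_relevant + projected_missed)

# Hypothetical: 8,000 relevant documents found; a 1,500-document random
# sample of a 100,000-document discard pile turned up 12 relevant docs.
print(round(estimate_recall(8_000, 100_000, 1_500, 12), 3))  # 0.909
```

A point estimate like this is only a starting place; sample size drives the margin of error, which is one more reason the Stop Decision takes experience.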

We can teach you the concepts, but this kind of deep knowledge of timing requires substantial experience. In fact, refining the Stop Decision was one of the main tasks we set for ourselves in the e-Discovery Team experiments in the Total Recall Track of the National Institute of Standards and Technology Text Retrieval Conference in 2015 and 2016. We learned a lot in those two years. I do not think anyone has spent more time studying this in both scientific and commercial projects than we have. Kudos again to KL Discovery for helping to sponsor this kind of important research by the e-Discovery Team.

Working with AI like this for evidence gathering is a newly emerging art. Take the TAR Course and learn the latest methods. We divide the Predictive Coding workflow into eight steps. Master these steps and related concepts to do TAR the right way.

Pop Quiz: What is one of the most important considerations in deciding when to train again?

One Possible Correct Answer: The schedule of the humans involved. Logistics and project planning are always important for efficiency. Flexibility is easy to attain with the IST method. You can easily accommodate schedule changes and make it as easy as possible for humans and “robots” to work together. We do not literally mean robots, but rather refer to the advanced software and the AI that arises from the machine training as an imaginary robot.

Responding Party’s Complaints of Financial Burden of Document Review Were Unsupported by the Evidence, Any Evidence

August 5, 2018

One of the largest cases in the U.S. today is a consolidated group of price-fixing cases in District Court in Chicago. In Re Broiler Chicken Antitrust Litigation, 290 F. Supp. 3d 772 (N.D. Ill. 2017) (order denying motions to dismiss and discussing the case). The consolidated antitrust cases involve allegations of widespread chicken price-fixing. Big Food Versus Big Chicken: Lawsuits Allege Processors Conspired To Fix Bird Prices (NPR 2/6/18).

The level of sales and potential damages is high. For instance, in 2014 sales of broiler chickens in the U.S. were $32.7 billion. That’s sales for one year. The classes have not been certified yet, but discovery is underway in the consolidated cases.

The Broiler Chicken case is not only big money, but big e-discovery. A Special Master (Maura Grossman) was appointed months ago and she developed a unique e-discovery validation protocol order for the case. See TAR for Smart Chickens, by John Tredennick and Jeremy Pickens, which analyzes the validation protocol.

Maura was not involved in the latest discovery dispute, in which Agri Stats, one of many defendants, claimed a request for production was too burdensome as applied to it. The latest problem went straight to the presiding Magistrate Judge Jeffrey T. Gilbert, who issued his order on July 26, 2018. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

Agri Stats had moved for a protective order to limit an email production request. Agri Stats claimed that the burden imposed was not proportional because it would be too expensive. Its lawyers told Judge Gilbert that it would cost between $1,200,000 and $1,700,000 to review the email using the negotiated keywords.

Fantasy Hearing

I assume that there were attorney conferences and then hearings, but I do not know that for sure. I have not seen a transcript of the hearings with Judge Gilbert. All we know is that defense counsel told the judge that under the keywords selected the document review would cost between $1,200,000 and $1,700,000, and that they had no explanation of how the cost estimate was prepared, nor any specifics as to what it covered. Although I was not there, after four decades of doing this sort of work, I have a pretty good idea of what was or might have been said at the hearing.

This representation of million-dollar costs by defense counsel would have gotten the attention of the judge. He would naturally have wanted to know how the cost range was calculated. I can almost hear the judge say from the bench: “$1.7 Million Dollars to do a doc review. Yeah, ok. That is a lot of money. Why so much, counsel? Anyone?” To which the defense attorneys said in response, much like the students in Ferris Bueller’s class:

“. . . . . .”

Yes. That’s right. They had Nothing. Just Voodoo Economics.

Well, Judge Gilbert’s short opinion makes it seem that way. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

If a Q&A interchange like this happened, either in a phone hearing, or in person, then the lawyers must have said something. You do not just ignore a question by a federal judge. The defense attorneys probably did a little hemming and hawing, conferred among themselves, and then said something to the judge like: “We are not sure how those numbers were derived, $1.2M to $1.7M, and will have to get back to you on that question, Your Honor.” And then, they never did. I have seen this kind of thing a few times before. We all try to avoid it. But it is even worse to make up a false story, or even present an unverified story to the judge. Better to say nothing and get back to the judge with accurate information.

Discovery Order of July 26, 2018

Here is a quote from Judge Gilbert’s Order so you can read for yourself the many questions the moving party left unanswered (detailed citations to record removed; graphics added):

Agri Stats represents that the estimated cost to run the custodial searches EUCPs propose and to review and produce the ESI is approximately $1.2 to $1.7 million. This estimated cost, however, is not itemized nor broken down for the Court to understand how it was calculated. For example, is it $1.2 to $1.7 million to review all the custodial documents from 2007 through 2016? Or does this estimate isolate only the pre-October 2012 custodial searches that Agri Stats does not want to have to redo, in its words? More importantly, Agri Stats also admits that this estimate is based on EUCPs’ original proposed list of search terms. But EUCPs represent (and Agri Stats does not disagree) that during their apparently ongoing discussions, EUCPs have proposed to relieve Agri Stats of the obligation to produce various categories of documents and data, and to revise the search terms to be applied to data that is subject to search. Agri Stats does not appear to have provided a revised cost estimate since EUCPs agreed to exclude certain categories of documents and information and revised their search terms. Rather, Agri Stats takes the position that custodial searches before October 3, 2012 are not proportional to the needs of the case — full stop — so it apparently has not fully analyzed the cost impact of EUCPs’ revised search terms or narrowed document and data categories.

The Court wonders what the cost estimate is now after EUCPs have proposed to narrow the scope of what they are asking Agri Stats to do. (emphasis added) EUCPs say they already have agreed, or are working towards agreement, that 2.5 million documents might be excluded from Agri Stats’s review. That leaves approximately 520,000 documents that remain to be reviewed. In addition, EUCPs say they have provided to Agri Stats revised search terms, but Agri Stats has not responded. Agri Stats says nothing about this in its reply memorandum.

EUCPs contend that Agri Stats’s claims of burden and cost are vastly overstated. The Court tends to agree with EUCPs on this record. It is not clear what it would cost in either time or money to review and produce the custodial ESI now being sought by EUCPs for the entire discovery period set forth in the ESI Protocol or even for the pre-October 3, 2012 period. It seems that Agri Stats itself also does not know for sure what it would have to do and how much it would cost because the parties have not finished that discussion. Because EUCPs say they are continuing to work with Agri Stats to reduce what it must do to comply with their discovery requests, the incremental burden on what Agri Stats now is being asked to do is not clear.

For all these reasons, Agri Stats falls woefully short of satisfying its obligation to show that the information [*10] EUCPs are seeking is not reasonably accessible because of undue burden or cost.

Estimations for Fun and Profit

In order to obtain a protective order you need to estimate the costs that will likely be involved in the discovery from which you seek protection. Simple. Moreover, it obviously has to be a reasonable estimate, a good faith estimate, supported by the facts. The Broiler Chicken defendant, Agri Stats, came up with an estimate. They got that part right. But then they stopped. You never do that. You do not just throw up a number and hope for the best. You have to explain how it was derived. Blushing at any price higher than that may be honest, but it is not a reasonable explanation.

Be ready to explain how you came up with the cost estimate. To break down the total into its component parts and allow the “Court to understand how it was calculated.” Agri Stats did not do that. Instead, they just quoted a cost estimate of $1.2 to $1.7 million. So of course Agri Stats’ motion for protective order was denied. The judge had no choice because no evidence to support the motion was presented, neither factual nor expert evidence. There was no need for Judge Gilbert to go into the secondary questions of whether expert testimony was also needed and whether it should be under Rule 702. He got nothing, remember. No explanation for the $1.7 Million.

The lesson of the latest discovery order in Broiler Chicken is pretty simple. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18). Get a real cost estimate from an expert. The expert needs to know and understand document review, search and costs of review. They need to know how to make reasonable search and retrieval efforts. They also need to know how to make reliable estimates. You may need two experts for this, as not all have expertise in both fields, but they are readily available. Many can even talk pretty well too, but not all! Seriously, everybody knows we are the most fun and interesting lawyer subgroup.

The last thing you should do is skimp on an expert and just pull a number out of your hat (or your vendor’s hat) and hope for the best.

This is federal court, not a political rally. You do not make bald assertions and leave the court wondering. Facts matter. Back-of-the-envelope guesses are not sufficient, especially in a big case like Broiler Chicken. Neither are guesstimates by people who do not know what they are doing. Make disclosure and cooperate with the requesting party to reach agreement. Do not just rush to the courthouse hoping to dazzle with smoke and mirrors. Bring in the experts. They may not dazzle, but they can get you beyond the magic mirrors.

Case Law Background

Judge Paul S. Grewal, who is now Deputy G.C. of Facebook, said in Vasudevan, quoting The Sedona Conference: “There is no magic to the science of search and retrieval: only mathematics, linguistics, and hard work.” Vasudevan Software, Inc. v. Microstrategy Inc., No. 11-cv-06637-RS-PSG, 2012 US Dist LEXIS 163654 (N.D. Cal. Nov. 15, 2012) (quoting The Sedona Conference, Best Practices Commentary on the Use of Search and Information and Retrieval Methods in E-Discovery, 8 Sedona Conf. J. 189, 208 (2007)). There is also no magic to the art of estimation, no magic to calculating the likely range of cost to search and retrieve the documents requested. Judge Grewal refused to make any decision in Vasudevan without expert assistance, recognizing that this area is “fraught with traps for the unwary” and should not be decided on mere arguments of counsel.

Judge Grewal did not address the procedural issue of whether Rule 702 should govern. But he did cite Judge Facciola’s case on the subject, United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008). There Judge Facciola first raised the discovery expert evidence issue. He not only opined that experts should be used, but that the parties should follow the formalities of Evidence Rule 702. That governs things such as whether you should qualify and swear in an expert and otherwise follow Rule 702 for their testimony. I discussed this somewhat in my earlier article this year, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use.

Judge Facciola in O’Keefe held that document review issues require expert input and that this input should be provided with all of the protections of Evidence Rule 702.

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence. Accordingly, if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.

Conclusion

In the Broiler Chicken Antitrust Order of July 26, 2018, a motion for protective order was denied because of inadequate evidence of burden. All the responding party did was quote a price range, a number presumably provided by an expert, but there was no explanation. More evidence was needed, both expert and fact. I agree that generally document review cost estimation requires opinions of experts. The experts need to be proficient in two fields. They need to know and understand the science of document search and retrieval and the likely costs for these services for a particular set of data.

Although all of the formalities and expense of compliance with Evidence Rule 702 may be needed in some cases, it is probably not necessary in most. Just bring your expert to the attorney conference or hearing. Yes, two experts may well disagree on some things, probably will, but the areas of agreement are usually far more important. That in turn makes compromise and negotiation far easier. Better to leave the technical details to the experts to sort out. That follows the Rule 1 prime directive of “just, speedy and inexpensive.” Keep the trial lawyers out of it. They should instead focus and argue on what the documents mean.

Another Judge is Asked to Settle a Keyword Squabble and He Hesitates To Go Where Angels Fear To Tread: Only Tells the Parties What Keywords NOT To Use

July 15, 2018

In this blog we discuss yet another case where the parties were bickering over keywords and the judge was asked to intervene. Webasto Thermo & Comfort v. BesTop, Inc., 2018 WL 3198544, No. 16-13456 (E.D. Mich. June 29, 2018). The opinion was written in a patent case in Detroit by Executive Magistrate Judge R. Steven Whalen. He looked at the proposed keywords and found them wanting, but wisely refused to go further and tell the parties what keywords to use. Well done, Judge Whalen!

This case is similar to the one discussed in my last blog, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use, where Magistrate Judge Laura Fashing in Albuquerque was asked to resolve a keyword dispute in United States v. New Mexico State University, No. 1:16-cv-00911-JAP-LF, 2017 WL 4386358 (D.N.M. Sept. 29, 2017). Judge Fashing not only found the proposed keywords inadequate, but came up with her own replacement keywords and did so without any expert input.

In my prior blog on Judge Fashing’s decision I discussed Judge John Facciola’s landmark legal search opinion in United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008) and other cases that follow it. In O’Keefe Judge Facciola held that, because keyword search questions are complex, technical and scientific, a judge should not decide such issues without the help of expert testimony. That is the context for his famous line:

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence.

In this week’s blog I consider the opinion by Judge Whalen in Webasto Thermo & Comfort v. BesTop, Inc., 2018 WL 3198544, No. 16-13456 (E.D. Mich. June 29, 2018), where he told the parties what keywords not to use, again without expert input, but stopped there. Interesting counterpoint cases. It is also interesting to observe that in all three cases, O’Keefe, New Mexico State University and Webasto, the judges end on the same note, with the parties ordered to cooperate. Ah, if it were only so easy.

Stipulated Order Governing ESI Production

In Webasto Thermo & Comfort v. BesTop, Inc., the parties cooperated at the beginning of the case. They agreed to the entry of a stipulated ESI Order governing ESI production. The stipulation included a cooperation paragraph where the parties pledged to try to resolve all ESI issues without judicial intervention. Apparently, the parties’ cooperation did not go much beyond the stipulated order. Cooperation broke down and the plaintiff filed a barrage of motions to avoid having to do document review, including an Emergency Motion to Stay ESI Discovery. The plaintiff alleged that the defendant violated the ESI stipulation by “propounding overly broad search terms in its request for ESI.” Oh, how terrible. Red Alert!

Plaintiff further accused defense counsel of “propounding prima facie inappropriate search criteria, and refusal to work in good faith to target its search terms to specific issues in this case.” Again, the outrageous behavior reminds me of the Romulans. I can see why plaintiff’s counsel called an emergency and asked for costs and relief from having to produce any ESI at all. That kind of approach rarely goes over well with any judge, but here it worked. That’s because the keywords the defense wanted plaintiff to use in its search for relevant ESI were, in fact, very bad.

Paragraph 1.3(3) of the ESI Order establishes a protocol designed to constrain e-discovery, including a limitation to eight custodians with no more than ten keyword search terms for each. It goes on to provide the following very interesting provision:

The search terms shall be narrowly tailored to particular issues. Indiscriminate terms, such as the producing company’s name or its product name, are inappropriate unless combined with narrowing search criteria that significantly reduce the risk of overproduction. A conjunctive combination of multiple words or phrases (e.g. ‘computer’ and ‘system’) narrows the search and shall count as a single term. A disjunctive combination of multiple words or phrases (e.g. ‘computer’ or ‘system’) broadens the search, and thus each word or phrase shall count as a separate search term unless they are variants of the same word. Use of narrowing search criteria (e.g. ‘and,’ ‘but not,’ ‘w/x’) is encouraged to limit the production and shall be considered when determining whether to shift costs for disproportionate discovery.

Remember, this is negotiated wording that the parties agreed to, including the bit about product names and “conjunctive combination.”
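To make the counting rules concrete, here is a minimal Python sketch, my own illustration rather than anything in the ESI Order, of how paragraph 1.3(3) counts terms: a conjunctive (AND) combination counts as one term, while each word in a disjunctive (OR) combination counts separately. The variant-of-the-same-word exception (e.g., “sale or sales”) is left out for simplicity.

```python
def count_search_terms(proposals: list[str]) -> int:
    """Count proposed terms the way paragraph 1.3(3) describes (sketch)."""
    total = 0
    for proposal in proposals:
        if " OR " in proposal:
            # Disjunctive combination: each word or phrase counts separately.
            total += len(proposal.split(" OR "))
        else:
            # Single term or conjunctive (AND) combination: counts once.
            total += 1
    return total

# Using the order's own examples:
print(count_search_terms(["computer AND system"]))  # 1
print(count_search_terms(["computer OR system"]))   # 2
print(count_search_terms(["computer AND system",
                          "computer OR system",
                          "fold"]))                  # 4
```

Under a ten-terms-per-custodian cap, this kind of arithmetic matters: one broad OR-chain can burn through the whole allotment.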

Defendant’s Keyword Demands

The keywords proposed by defense counsel for plaintiff’s search included: “Jeep,” “drawing” and its abbreviation “dwg,” “top,” “convertible,” “fabric,” “fold,” “sale or sales,” and the plaintiff’s product names, “Swaptop” and “Throwback.”

Plaintiff’s counsel advised Judge Whalen that the ten terms produced the following results for five custodians (no word on the other three):

  • Joseph Lupo: 30 gigabytes, 118,336 documents.
  • Ryan Evans: 13 gigabytes, 44,373 documents.
  • Tyler Ruby: 10 gigabytes, 44,460 documents.
  • Crystal Muglia: 245,019 documents.
  • Mark Denny: 162,067 documents.
In Footnote Three Judge Whalen adds, without citation to authority or the record, that:
One gigabyte would comprise approximately 678,000 pages of text. 30 gigabytes would represent approximately 21,696,000 pages of text.

Note that Catalyst did a study in 2014 of the average number of files in a gigabyte. They found that the average was 2,500 files per gigabyte. They suggest using 3,000 files per gigabyte for cost estimates, just to be safe. So I have to wonder where Judge Whalen got this figure of 678,000 pages of text per gigabyte.
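Here is a quick back-of-the-envelope check in Python, my own arithmetic using the Catalyst figures just cited, set against the footnote’s page math. Note that the reported hit counts themselves (118,336 documents in 30 gigabytes) work out to roughly 3,900 files per gigabyte, in the same neighborhood as the Catalyst numbers.

```python
FILES_PER_GB_AVG = 2_500   # Catalyst 2014 study average
FILES_PER_GB_SAFE = 3_000  # Catalyst's suggested rate for cost estimates

def estimated_files(gigabytes: float, rate: int = FILES_PER_GB_SAFE) -> int:
    """Project a document count from a data volume."""
    return int(gigabytes * rate)

print(estimated_files(30))   # 90,000 files for Joseph Lupo's 30 GB
print(118_336 / 30)          # ~3,944 actual files per GB reported
print(678_000 * 30)          # 20,340,000 pages at the footnote's rate
                             # (the footnote itself says 21,696,000)
```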

Plaintiff’s counsel added that:

Just a subset of the email discovery requests propounded by BesTop have returned more than 614,000 documents, comprising potentially millions of individual pages for production.

Plaintiff’s counsel also filed an affidavit where he swore that he reviewed the first 100 consecutively numbered documents to evaluate the burden. Very impressive effort. Not! He looked at the first one hundred documents that happened to be on top of a 614,000-document pile. He also swore that none of these first one hundred were relevant. (One wonders how many of them were empty pst container files. They are often the “documents” found first in consecutive numbering of an email collection. A better sample might have been to look at the 100 docs with the most hits.)
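For what it is worth, a random sample would have been a more defensible burden test than the first 100 consecutively numbered documents. A minimal sketch, with hypothetical function names standing in for loading and reviewing the documents:

```python
import random

def draw_sample(doc_ids: list[str], n: int = 100, seed: int = 42) -> list[str]:
    """Draw n document IDs uniformly at random for manual review."""
    rng = random.Random(seed)  # fixed seed makes the draw repeatable
    return rng.sample(doc_ids, n)

# doc_ids = load_hit_ids()                    # hypothetical: the 614,000 hit IDs
# sample = draw_sample(doc_ids)
# relevant = sum(review(d) for d in sample)   # hypothetical manual review step
# print(f"estimated precision: {relevant / len(sample):.0%}")
```

A uniform draw avoids the container-file problem noted above, and the resulting precision estimate is one counsel could credibly swear to.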

Judge Whalen Agrees with Plaintiff on Keywords

Judge Whalen agreed with plaintiff and held that:

The majority of defendant’s search terms are overly broad, and in some cases violate the ESI Order on its face. For example, the terms “throwback” and “swap top” refer to Webasto’s product names, which are specifically excluded under 1.3(3) of the ESI Order.

The overbreadth of other terms is obvious, especially in relation to a company that manufactures and sells convertible tops: “top,” “convertible,” “fabric,” “fold,” “sale or sales.” Using “dwg” as an alternate designation for “drawing” (which is itself a rather broad term) would call into play files with common file extension .dwg.

Apart from the obviously impermissible breadth of BesTop’s search terms, their overbreadth is borne out by Mr. Carnevale’s declarations, which detail a return of multiple gigabytes of ESI potentially comprising tens of millions of pages of documents, based on only a partial production. In addition, the search of just the first 100 records produced using BesTop’s search terms revealed that none were related to the issues in this lawsuit. Contrary to BesTop’s contention that Webasto’s claim of prejudice is conclusory, I find that Webasto has sufficiently “articulate[d] specific facts showing clearly defined and serious injury resulting from the discovery sought ….” Nix, 11 Fed.App’x. at 500.

Thus, BesTop’s reliance on City of Seattle v. Professional Basketball Club, LLC, 2008 WL 539809 (W.D. Wash. 2008), is inapposite. In City of Seattle, the defendant offered no facts to support its assertion that discovery would be overly burdensome, instead “merely state[ing] that producing such emails ‘would increase the email universe exponentially[.]’” Id. at *3. In our case, Webasto has proffered hard numbers as to the staggering amount of ESI returned based on BesTop’s search requests. Moreover, while disapproving of conclusory claims of burden, the Court in City of Seattle recognized that the overbreadth of some search terms would be apparent on their face:

“‘[U]nless it is obvious from the wording of the request itself that it is overbroad, vague, ambiguous or unduly burdensome, an objection simply stating so is not sufficiently specific.’” Id., quoting Boeing Co. v. Agric. Ins. Co., 2007 U.S. Dist. LEXIS 90957, *8 (W.D.Wash. Dec. 11, 2007).

As discussed above, many of BesTop’s terms are indeed overly general on their face. And again, propounding Webasto’s product names (e.g., “throwback” and “swap top”) violates the express language of the ESI Order.

Defense Counsel Did Not Cooperate

Judge Whalen then went on to address the apparent lack of cooperation by defendant.

Adversarial discovery practice, particularly in the context of ESI, is anathema to the principles underlying the Federal Rules, particularly Fed.R.Civ.P. 1, which directs that the Rules “be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.” In this regard, the Sedona Conference Cooperation Proclamation states:

“Indeed, all stakeholders in the system–judges, lawyers, clients, and the general public–have an interest in establishing a culture of cooperation in the discovery process. Over-contentious discovery is a cost that has outstripped any advantage in the face of ESI and the data deluge. It is not in anyone’s interest to waste resources on unnecessary disputes, and the legal system is strained by ‘gamesmanship’ or ‘hiding the ball,’ to no practical effect.”

The stipulated ESI Order, which controls electronic discovery in this case, is an important step in the right direction, but whether as the result of adversarial overreach or insufficient effort, BesTop’s proposed search terms fall short of what is required under that Order.

Judge Whalen’s Ruling

Judge Whalen concluded his short Order with the following ruling:

For these reasons, Webasto’s motion for protective order [Doc. #78] is GRANTED as follows:

Counsel for the parties will meet and confer in a good-faith effort to focus and narrow BesTop’s search terms to reasonably limit Webasto’s production of ESI to emails relevant (within the meaning of Rule 26) to the issues in this case, and to exclude ESI that would have no relationship to this case.

Following this conference, and within 14 days of the date of this Order, BesTop will submit an amended discovery request with the narrowed search terms.  …

Because BesTop will have the opportunity to reformulate its discovery request to conform to the ESI Order, Webasto’s request for cost-shifting is DENIED at this time. However, the Court may reconsider the issue of cost-shifting if BesTop does not reasonably narrow its requests.

Difficult to Cooperate on Legal Search Without the Help of Experts

The defense in Webasto violated their own stipulation by using a party’s product names without further Boolean limiters, such as “product name AND another term.” Then defense counsel added insult to injury by coming across as uncooperative. I don’t know if they alone were uncooperative, or if it was a two-way street, but appearances are everything. The emails between counsel were attached to the motions, and the judge scowled at the defense here, not plaintiff’s counsel. No judge likes attorneys who ignore orders, stipulated or otherwise, and are uncooperative to boot. “Uncooperative” is a label that you should avoid being called by a judge, especially in the world of e-discovery. Better to be an angel for discovery and save the devilish details for motions and trial.

In Webasto Thermo & Comfort v. BesTop, Inc., Judge Whalen struck down the proposed keywords without expert input. Instead Judge Whalen based his order on some incomplete metrics, namely the number of hits produced by the keywords that the defense dreamed up. At least Judge Whalen did not go further and order the use of specific keywords as Judge Fashing did in United States v. New Mexico State University. Still, I wish he had not only ordered the parties to cooperate, but also ordered them to bring in some experts to help with the search tasks. You cannot just talk your way into good searches. No matter what the level of cooperation, you still have to know what you are doing.

If I had been handling this for the plaintiff, I would have gotten my hands much dirtier in the digital mud, meaning I would have done far more than just look at the first one hundred of 614,000 documents. That was a poor quality control test, although here, at least, it was better than nothing. I would have done a sample review of each keyword and evaluated the precision of each. Some might have been ok as is, although probably not. They usually require some refinement. Sometimes it only takes a few minutes of review to determine that. Bottom line, I would have checked out the requested keywords. There were only ten here. That would take maybe three hours or so with the right software. You do not need big judgmental samples most of the time to see the effectiveness, or not, of keywords.
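Here is a minimal sketch of that per-keyword precision check. The coding data is hypothetical, standing in for whatever sample review export your platform provides:

```python
def keyword_precision(sample_codings: dict[str, list[bool]]) -> dict[str, float]:
    """Map each keyword to estimated precision from sampled relevance calls."""
    return {kw: sum(calls) / len(calls) for kw, calls in sample_codings.items()}

# Hypothetical samples: 50 reviewed hits per keyword.
codings = {
    "top":     [True] * 3 + [False] * 47,    # 3 of 50 sampled hits relevant
    "Swaptop": [True] * 18 + [False] * 32,   # 18 of 50 sampled hits relevant
}
for kw, p in keyword_precision(codings).items():
    print(f"{kw}: {p:.0%}")   # top: 6%, Swaptop: 36%
```

A few minutes of review per term produces numbers like these, and numbers, not adjectives, are what persuade a judge that a term is overbroad.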

The next step is to come up with, and test, a number of keyword refinements based on what you see in the data. Learn from the data. Test and improve various keyword combinations. That can take a few more hours. Some may think this is too much work, but it is far less time than preparing motions, memos and attending hearings. And anyway, you need to find the relevant evidence for your case.

After the tests, you share what you learned with opposing counsel and the judge, assuming they want to know. In my experience, most could not care less about your methods, so long as your production includes the information they were looking for. You do not have to disclose your every little step, but you should at least provide, again if asked, information about “hit results.” This disclosure alone can go a long way, as this opinion demonstrates. Plaintiff’s counsel obtained very little data about the ineffectiveness of the defendant’s proposed search terms, but that was enough to persuade the judge to enter a protective order.

To summarize, after evaluating the proposed search terms I would have improved on them. Using the improved searches I would have begun the attorney review and production. I would have shared the search information, cooperated as required by stipulation, case-law and rules, and gone ahead with my multimodal searches. I would use keywords and the many other wonderful kinds of searches that the Legal Technology industry has come up with in the last 25 years or so since keyword search was new and shiny.

Conclusion

The stipulation the parties used in Webasto could have been used at the turn of the century. Now it seems a little quaint, but alas, it suits most inexperienced lawyers today. Anyway, talking about and using keywords is a good way to start a legal search. I sometimes call that Relevancy Dialogues or ESI Communications. Try out some keywords, refine and use them to guide your review, but do not stop there. Try other types of search too. Multimodal. Harness the power of the latest technology, namely AI-enhanced search (Predictive Coding). Use statistics and random sampling too, to better understand the data prevalence and overall search effectiveness.

If you do not know how to do legal search, and I estimate that 98% of lawyers today do not, then hire an expert. (Or take the time to learn; see, e.g., TARcourse.com.) Your vendor probably has a couple of search experts. There may also be a lawyer in town with this expertise. Now there are even a few specialty law firms that offer these services nationwide. It is a waste of time to reinvent the wheel; plus it is an ethical dictate under Rule 1.1 – Competence to associate with competent counsel on a legal task when you are not competent to handle it yourself.

Regarding the vendor experts, remember that even though they may be lawyers, they can only go so far. They can only provide technical advice, not legal, such as proportionality analysis under Rule 26, etc. That requires a practicing lawyer who specializes in e-discovery, preferably as a full-time specialty and not just something they do every now and then. If you are in a big firm, like I am, find the expert in your firm who specializes in e-discovery, like me. They will help you. If your firm does not have such an expert, better get one, either that or get used to losing and having your clients complain.

Document Review and Proportionality – Part One

March 18, 2018

In 2013 I wrote a law review article on how the costs of document review could be controlled using predictive coding and cost estimation. Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data, 26 Regent U. Law Review 1 (2013-2014). Today I write on how costs can be controlled in document review even without predictive coding. Here is the opening paragraph of my earlier article:

The search of electronic data to try to find evidence for use at trial has always been difficult and expensive. Over the past few years, the advent of Big Data, where both individuals and organizations retain vast amounts of complex electronic information, has significantly compounded these problems. The legal doctrine of proportionality responds to these problems by attempting to constrain the costs and burdens of discovery to what are reasonable. A balance is sought between the projected burdens and likely benefits of proposed discovery, considering the issues and value of the case. Several software programs on the market today have responded to the challenges of Big Data by implementing a form of artificial intelligence (“AI”) known as active machine learning to help lawyers review electronic documents. This Article discusses these issues and shows that AI-enhanced document review directly supports the doctrine of proportionality. When used together, proportionality and predictive coding provide a viable, long-term solution to the problems and opportunities of the legal search of Big Data.

The 2013 article was based on Predictive Coding version 1.0. Under this first method you train and rank documents and then review only the higher-ranking documents. Here is a more detailed description from pages 23-24 of the article:

This kind of AI-enhanced legal review is typically described today in legal literature by the term predictive coding. This is because the computer predicts how an entire body of documents should be coded (classified) based on how the lawyer has coded the smaller training sets. The prediction places a probability ranking on each document, typically ranging from 0% to 100% probability. Thus, in a relevancy classification, each and every document in the entire dataset (the corpus) is ranked with a percentage of likely relevance and irrelevance. …

As will be shown, this ranking feature is key to the use of the legal doctrine of proportionality. The ability to rank all documents in a corpus on probable relevance is a new feature that no other legal search software has previously provided.

It was a two-phase procedure: train, then review. Yes, some review would take place in the first training phase, but this would be a relatively small number, say 10-20% of the total documents reviewed. Most of the human review of documents would take place in phase two. The workflow of version 1.0 is shown in the diagram below and is described in detail in the article, starting at page 31.

Predictive Coding and the Proportionality Doctrine argued that attorneys should scale the number of documents for the second phase of document review based on estimated costs constrained to a proportional amount. No more spending $100,000 for document review in a $200,000 case. The number of documents selected for review would be limited to proportional costs. Predictive coding and its ranking features allowed you to select the documents for review that were most likely to be relevant. If you could only afford to spend $20,000 on a document review project, then you would limit the number of documents reviewed, within that scope, to those ranked highest for probable relevance. Here is the article’s description at pages 54-55 of the process and the link between the doctrine of proportionality and predictive coding.

What makes this a marriage truly made in heaven is the document-ranking capabilities of predictive coding. This allows parties to limit the documents considered for final production to those that the computer determines have the highest probative value. This key ranking feature of AI-enhanced document review allows the producing party to provide the requesting party with the most bang for the buck. This not only saves the producing party money, and thus keeps its costs proportional, but it saves time and expenses for the requesting party. It makes the production much more precise, and thus faster and easier to review. It avoids what can be a costly exercise to a requesting party to wade through a document dump, a production that contains a high number of irrelevant or marginally relevant documents. Most importantly, it gives the requesting party what it really wants—the documents that are the most important to the case.
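The mechanics are simple enough to sketch in a few lines of Python. This is my illustration of the idea, not the article’s or any vendor’s actual software: given predicted relevance probabilities and a per-document review cost, review the highest-ranked documents the proportional budget can cover.

```python
def select_for_review(ranked_docs: list[tuple[str, float]],
                      budget: float, cost_per_doc: float) -> list[str]:
    """ranked_docs: (doc_id, probability) pairs from the classifier.
    Returns the IDs of the top-ranked documents the budget can cover."""
    max_docs = int(budget // cost_per_doc)
    by_rank = sorted(ranked_docs, key=lambda d: d[1], reverse=True)
    return [doc_id for doc_id, _ in by_rank[:max_docs]]

# Hypothetical: a $20,000 budget at $1.25 per document = 16,000 documents.
# docs = [("DOC-000001", 0.97), ("DOC-000002", 0.12), ...]
# to_review = select_for_review(docs, budget=20_000, cost_per_doc=1.25)
```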

In the article, at pages 58-60, I called this method Bottom-Line-Driven Proportional Review and described the process in greater detail.

The bottom line in e-discovery production is what it costs. Despite what some lawyers and vendors may say, total cost is not an impossible question to answer. It takes an experienced lawyer’s skill to answer, but, after a while, you can get quite good at such estimation. It is basically a matter of estimating attorney billable hours plus vendor costs. With practice, cost estimation can become a reliable art, a projection that you can count on for budgeting purposes, and, as we will see, for proportionality arguments. …

The new strategy and methodology is based on a bottom line approach where you estimate what review costs will be, make a proportionality analysis as to what should be spent, and then engage in defensible culling to bring the review costs within the proportional budget. The producing party determines the number of documents to be subjected to final review by calculating backwards from the bottom line of what they are willing, or required, to pay for the production. …

Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget. The review costs you estimate must be based on best practices, which in all large review projects today means predictive coding, and the estimates must be accurate (i.e., no puffing or mere guesswork).
Note that this last quote (emphasis added) from Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data shows an important limitation of the article’s budgetary proposal: it was limited to large review projects where predictive coding was used. Without this marriage to predictive coding, the promise of proportionality by cost estimation was lost. My article today fills this gap.
Here I will explain how document review can be structured to provide estimates and review constraints even when predictive coding and its ranking are not used. This is, in effect, the single lawyer’s guide, one where there has not been a marriage with predictive coding. It is a guide for small and medium-sized document review projects, which are, after all, the vast majority of projects faced by the legal profession.

To be honest, back when I first wrote the law review article I did not think it would be necessary to develop such a proportionality method, one that does not use AI document ranking. I assumed predictive coding would take off and by now would be used in almost all projects, no matter what the size. I assumed that since active machine learning and document ranking was such good new technology, that even our conservative profession would embrace it within the next few years. Boy was I wrong about that. The closing lines of Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data have been proven naive.

The key facts needed to try a case and to do justice can be found in any size case, big and small, at an affordable price, but you have to embrace change and adopt new legal and technical methodologies. The Bottom-Line-Driven Proportional Review method is part of that answer, and so too is advanced-review software at affordable prices. When the two are used together, it is a marriage made in heaven.

I blame both lawyers and e-discovery vendors for this failure, as well as myself for misjudging my peers. Law schools and corporate counsel have not helped much either. Only the judiciary seems to have caught on and kept up.

Proportionality as a legal doctrine took off as expected after 2013, but not the marriage with predictive coding. Lawyers have proven to be much more stubborn than anticipated. They will barely even go out with predictive coding, no matter how attractive she is, much less marry her. The profession as a whole remains remarkably slow to adopt new technology. The judges are tempted to use their shotgun to force a wedding, but so far have refrained from ordering a party to use predictive coding. Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016) (J. Peck: “There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet.”)

Changes Since 2013

A lot has happened since 2013 when Predictive Coding and the Proportionality Doctrine was written. In December 2015 Rule 26(b) on relevance was revised to strengthen proportionality and we have made substantial improvements to Predictive Coding methods. In the ensuing years most experts have abandoned this early two-step method of train-then-review in favor of a method where training continues throughout the review process. In other words, today we keep training until the end. See, e.g., the e-Discovery Team’s Predictive Coding, version 4.0, with its Intelligently Spaced Training. (This is similar to a method popularized by Maura Grossman and Gordon Cormack, which they called Continuous Active Learning, or CAL for short, a term they later trademarked.)

The 2015 revision to Rule 26(b) on relevance has spurred case law and clarified that undue burden, the sixth proportionality factor under Rule 26(b)(1), must be argued in detail, with facts proven. The six factors are:

  1. the importance of the issues at stake in this action;
  2. the amount in controversy;
  3. the parties’ relative access to relevant information;
  4. the parties’ resources;
  5. the importance of the discovery in resolving the issues; and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.

Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136, (D.D.C. Sept. 11, 2017). Although all factors are important and should be addressed, the last factor is usually the most important one in a discovery dispute. It is also the factor that can be addressed generally for all cases and is the core of proportionality.

Proportional “Bottom Line Driven” Method of Document Review that Does Not Require Use of Predictive Coding

I have shared how I use predictive coding with continuous training in my TARcourse.com online instruction program. The eight-step workflow is shown below.

I have not previously shared any information on the document review workflow that I follow in small and medium-sized cases where predictive coding is not used. The rest of this article will do so now.

Please note that I have a well-developed and articulated list of steps and procedures for attorneys in my law firm to follow in such small cases. I keep this as a trade secret and will not reveal it here. Although the procedures are widely known in my firm, and slightly revised and updated each year, they are not public. Still, any expert in document review should be able to create their own particular rules and implementation methods. Warning: if you are not such an expert, be careful in relying on these high-level explanations alone. The devil is in the details and you should retain an expert to assist.

Here is a chart summarizing the Six-Step Workflow and the six basic concepts that must be understood for the process to work at maximum efficiency.

The first three steps iterate with searches to cull out the irrelevant documents, and then culminate in Step Four, Disclosure of the plan developed for Steps Five and Six, Final Review and Production. The sixth step, Production, is always done in phases according to proportional planning.

A key skill that must be learned is project cost estimation, including fees and expenses. The attorneys involved must also learn how to communicate among themselves and with the vendors, opposing counsel and the court. Rigid enforcement of work-product confidentiality is counterproductive to the goal of cost-efficient projects. Agree on the small stuff and save your arguments for the cost-saving questions that are worth the effort.

The Proportionality Doctrine

The doctrine of proportionality as a legal initiative was launched by The Sedona Conference in 2010 as a reaction to the exploding costs of e-discovery. The Sedona Conference, The Sedona Conference Commentary on Proportionality in Electronic Discovery, 11 SEDONA CONF. J. 289, 292–94 (2010). See also John L. Carroll, Proportionality in Discovery: A Cautionary Tale, 32 CAMPBELL L. REV. 455, 460 (2010) (“If courts and litigants approach discovery with the mindset of proportionality, there is the potential for real savings in both dollars and time to resolution.”); Maura Grossman & Gordon Cormack, Some Thoughts on Incentives, Rules, and Ethics Concerning the Use of Search Technology in E-Discovery, 12 SEDONA CONF. J. 89, 94–95, 101–02 (2011).

The doctrine received a big boost with the adoption of the 2015 Amendment to Rule 26. The rule was changed to provide that discovery must be both relevant and “proportional to the needs of the case.” Fed. R. Civ. P. 26(b)(1). To determine whether a discovery request is proportional, you are required to weigh the following six factors: “(1) the importance of the issues at stake in this action; (2) the amount in controversy; (3) the parties’ relative access to relevant information; (4) the parties’ resources; (5) the importance of the discovery in resolving the issues; and (6) whether the burden or expense of the proposed discovery outweighs its likely benefit.” Williams v. BASF Catalysts, LLC, Civ. Action No. 11-1754, 2017 WL 3317295, at *4 (D.N.J. Aug. 3, 2017) (citing Fed. R. Civ. P. 26(b)(1)); Arrow Enter. Computing Solutions, Inc. v. BlueAlly, LLC, No. 5:15-CV-37-FL, 2017 WL 876266, at *4 (E.D.N.C. Mar. 3, 2017); FTC v. Staples, Inc., Civ. Action No. 15-2115 (EGS), 2016 WL 4194045, at *2 (D.D.C. Feb. 26, 2016).

“[N]o single factor is designed to outweigh the other factors in determining whether the discovery sought is proportional,” and all proportionality determinations must be made on a case-by-case basis. Williams, 2017 WL 3317295, at *4 (internal citations omitted); see also Bell v. Reading Hosp., Civ. Action No. 13-5927, 2016 WL 162991, at *2 (E.D. Pa. Jan. 14, 2016). To be sure, however, “the amendments to Rule 26(b) do not alter the basic allocation of the burden on the party resisting discovery to—in order to successfully resist a motion to compel—specifically object and show that . . . a discovery request would impose an undue burden or expense or is otherwise objectionable.” Mir v. L-3 Commc’ns Integrated Sys., L.P., 319 F.R.D. 220, 226 (N.D. Tex. 2016), as quoted by Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136, (D.D.C. Sept. 11, 2017).

The Oxbow case is discussed at length in my recent blog Judge Facciola’s Successor, Judge Michael Harvey, Provides Excellent Proportionality Analysis in an Order to Compel (e-Discovery Team, 3/1/18). Judge Harvey carefully examined the costs and burdens claimed by plaintiffs and rejected the argument that the review would be overly burdensome.

Plaintiffs’ counsel explained at the second hearing in this matter that Oxbow has spent $1.391 million to date on reviewing and producing approximately 584,000 documents from its nineteen other custodians and Oxbow’s email archive. See 8/24/17 TR. at 44:22-45:10. And again, Oxbow seeks tens of millions of dollars from Defendants. Through that lens, the estimated cost of reviewing and producing Koch’s responsive documents—even considering the total approximate cost of $142,000 for that effort, which includes the expense of the sampling effort—while certainly high, is not so unreasonably high as to warrant rejecting Defendants’ request out of hand. See Zubulake v. UBS Warburg, LLC, 217 F.R.D. 309, 321 (S.D.N.Y. 2003) (explaining, in the context of a cost-shifting request, that “[a] response to a discovery request costing $100,000 sounds (and is) costly, but in a case potentially worth millions of dollars, the cost of responding may not be unduly burdensome”); Xpedior Creditor Trust v. Credit Suisse First Boston (USA), Inc., 309 F. Supp. 2d 459, 466 (S.D.N.Y. 2003) (finding no “undue burden or expense” to justify cost-shifting where the requested discovery cost approximately $400,000 but the litigation involved at least $68.7 million in damages). …

In light of the above analysis—including the undersigned’s assessment of each of the Rule 26 proportionality factors, all of which weigh in favor of granting Defendants’ motion—the Court is unwilling to find that the burden of reviewing the remaining 65,000 responsive documents for a fraction of the cost of discovery to date should preclude Defendants’ proposed request. See BlueAlly, 2017 WL 876266, at *5 (“This [last Rule 26] factor may combine all the previous factors into a final analysis of burdens versus benefits.” (citing Fed. R. Civ. P. 26 advisory committee’s notes)).

For more analysis and case law on proportionality see Proportionality Φ and Making It Easy To Play “e-Discovery: Small, Medium or Large?” in Your Own Group or Class, (e-Discovery Team, 11/26/17). Also see The Sedona Conference Commentary on Proportionality, May 2017.

Learning How to Estimate Document Review Costs

The best way to determine the total cost of a project is by projection from experience and analysis on a cost per file basis. General experience of review costs can be very helpful, but the gold standard comes from measurement of costs actually incurred in the same project, usually after several hours or days of work, depending on the size of the project. You calculate costs incurred to date and then project forward on a cost per file basis. This is the core idea of the Six-Step document review protocol that this article begins to explain.

The actual project costs are the best possible metrics for estimation. Apparently that was never done in Oxbow, because plaintiff’s counsel’s projected document review cost estimates varied so much. A per file cost analysis of the information in the Oxbow opinion shows that the parties missed a key metric. The costs projected ranged from an actual cost of $2.38 per file for the first 584,000 files, to a $1.17 per file estimate to review 214,000 additional files, to an estimate of $1.73 per file to review 82,000 more files, to an actual cost of $4.74 per file to review 12,074 files, to a final estimate of $1.22 per file to review the remaining 69,926 files. The actual costs were far higher than the estimated costs, meaning the moving party cheated itself by failing to do the math.
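The per file metric is just division, as the short Python sketch below shows. The dollar and document figures are the ones recited above from the Oxbow opinion; the projection at the end is my own arithmetic.

```python
def cost_per_file(total_cost: float, files: int) -> float:
    """Review spend divided by documents reviewed."""
    return total_cost / files

print(round(cost_per_file(1_391_000, 584_000), 2))  # 2.38, the actual rate
print(round(cost_per_file(142_000, 82_000), 2))     # 1.73, one estimate

# Projecting the remaining 69,926 documents at the actual $2.38 rate:
print(round(2.38 * 69_926))   # ~166,424, roughly double the $85,310
                              # implied by the $1.22 per file estimate
```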

Here is how I explained the estimation process in Predictive Coding and the Proportionality Doctrine at pages 60-61:

Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget.
A few examples may help clarify how this method works. Assume a case where you determine a proportional cost of production to be $50,000, and estimate, based on sampling and other hard facts, that it will cost you $1.25 per file for both the automated and manual review before production of the ESI at issue … Then you can review no more than 40,000 documents and stay within budget. It is that simple. No higher math is required.

Estimation for bottom-line-driven review is essentially a method for marshaling evidence to support an undue burden argument under Rule 26(b)(1). Let’s run through it again in greater detail and make a simple formula to illustrate the process.

First, estimate the total number of documents remaining to be reviewed after culling by your tested keywords and other searches (hereinafter “T”). This is the most difficult step, but it is something most attorney experts and vendors are well qualified to help you with. Essentially, “T” represents the number of documents left unreviewed for Step Five, Final Review. These are the documents found in Steps One and Two, ECA Multimodal Search and Testing. These steps, along with the estimate calculation, usually repeat several times to cull in the documents that are most likely relevant to the claims and defenses. The T – Total Documents Left for Review – are the documents in the final revised keyword search folders and the concept and similarity search folders. The goal is to obtain a relevance precision in these folders greater than 50%, preferably at least 75%.

To begin an example hypothetical, assume that the total document count in the folders set up for final review is 5,000 documents. T=5,000. Next count how many relevant and highly relevant files have already been located (hereinafter “R”). Assume for our example that 1,000 relevant and highly relevant documents have been found. R=1,000.

Next, look up the total attorney fees already incurred in the matter to date for the actual document search and review work by attorneys and paralegals (hereinafter collectively “F”). Include the related vendor charges in this total, but exclude forensics and collection fees. To do this more easily, make sure that the time descriptions that your legal team inputs are clear on which fees are for review. Always remember that you may be required to provide an affidavit or testimony someday to support this cost estimate in a motion for protective order. For our example, assume that a total of $1,500 in costs and fees has already been incurred for document search and review work only. F=$1,500. F divided by R gives the cost per file. Here it is $1.50 per file (F/R).

Finally, multiply the cost per file (F/R) by the number of documents still remaining to be reviewed, T. In other words, T * (F/R). Here that is 5,000 (T) times the $1.50 cost per file (F/R), which equals $7,500. You can then disclose this calculation to opposing counsel to help establish the reasonableness (proportionality) of your plan. That is Step Four – Work Product Disclosure. Note you are estimating a total spend here for this review project of $9,000: $1,500 already spent, plus an estimated additional $7,500 to complete the project.
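The whole formula fits in a few lines. Here is a minimal Python sketch of the calculation just walked through, using the same hypothetical numbers:

```python
def review_estimate(T: int, R: int, F: float) -> tuple[float, float]:
    """T: docs left for final review; R: relevant docs found so far;
    F: search and review fees incurred to date.
    Returns (cost per file, estimated cost to complete)."""
    cost_per_file = F / R
    return cost_per_file, T * cost_per_file

cpf, to_complete = review_estimate(T=5_000, R=1_000, F=1_500.0)
print(cpf)                  # 1.5    -> $1.50 per file
print(to_complete)          # 7500.0 -> $7,500 to complete the review
print(1_500 + to_complete)  # 9000.0 -> $9,000 total projected spend
```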

There are many ways to calculate the probable fees to complete a document review project. This simple formula method has the advantage of being based on actual experience and costs incurred. It is also simple and easy to understand compared to most other methods. The method could be criticized for inflating expected costs, by observing that the work initially performed to find relevant documents is usually slower and more expensive than the concluding work of reviewing the tested search folders. This is generally true, but it is countered by the fact that: (1) many of the initial relevant documents found in ECA (Step One) were “low hanging fruit” and easier to locate than what remains; (2) the precision rate of the documents remaining to be reviewed after culling – T – will be much higher than in the document folders previously reviewed (the higher the precision rate, the slower the rate of review, because it takes longer to code a relevant document than an irrelevant document); and (3) additional time is necessarily incurred in the remaining review for redaction, privilege analysis, and quality control efforts not performed in the review to date.

To be concluded … In the conclusion of this article I will review the Six Steps and complete the discussion of the related concepts.

