Responding Party’s Complaints of Financial Burden of Document Review Were Unsupported by the Evidence, Any Evidence

August 5, 2018

One of the largest cases in the U.S. today is a consolidated group of price-fixing cases in District Court in Chicago. In re Broiler Chicken Antitrust Litigation, 290 F. Supp. 3d 772 (N.D. Ill. 2017) (order denying motions to dismiss and discussing the case). The consolidated antitrust cases involve allegations of widespread chicken price-fixing. Big Food Versus Big Chicken: Lawsuits Allege Processors Conspired To Fix Bird Prices (NPR 2/6/18).

The sales and potential damages at stake are high. For instance, in 2014 sales of broiler chickens in the U.S. were $32.7 billion. That’s sales for one year. The classes have not been certified yet, but discovery is underway in the consolidated cases.

The Broiler Chicken case is not only big money, but big e-discovery. A Special Master (Maura Grossman) was appointed months ago and she developed a unique e-discovery validation protocol order for the case. See TAR for Smart Chickens, by John Tredennick and Jeremy Pickens, which analyzes the validation protocol.

Maura was not involved in the latest discovery dispute, in which Agri Stats, one of many defendants, claimed that a request for production was unduly burdensome as applied to it. The latest problem went straight to the presiding Magistrate Judge Jeffrey T. Gilbert, who issued his order on July 26, 2018. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

Agri Stats had moved for a protective order to limit an email production request. Agri Stats claimed that the burden imposed was not proportional because it would be too expensive. Its lawyers told Judge Gilbert that it would cost between $1,200,000 and $1,700,000 to review the email using the negotiated keywords.

Fantasy Hearing

I assume that there were hearings, and attorney conferences before the hearings. But I do not know that for sure. I have not seen a transcript of any hearing with Judge Gilbert. All we know is that defense counsel told the judge that, under the keywords selected, the document review would cost between $1,200,000 and $1,700,000, and that they offered no explanation of how the cost estimate was prepared, nor any specifics as to what it covered. Although I was not there, after four decades of doing this sort of work, I have a pretty good idea of what was or might have been said at the hearing.

This representation of million-dollar costs by defense counsel would have gotten the attention of the judge. He would naturally have wanted to know how the cost range was calculated. I can almost hear the judge say from the bench: “$1.7 Million Dollars to do a doc review. Yeah, ok. That is a lot of money. Why so much counsel? Anyone?” To which the defense attorneys said in response, much like the students in Ferris Bueller’s class:

“. . . . . .”


Yes. That’s right. They had Nothing. Just Voodoo Economics.

Well, Judge Gilbert’s short opinion makes it seem that way. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18).

If a Q&A interchange like this happened, either in a phone hearing or in person, then the lawyers must have said something. You do not just ignore a question by a federal judge. The defense attorneys probably did a little hemming and hawing, conferred among themselves, and then said something to the judge like: “We are not sure how those numbers were derived, $1.2M to $1.7M, and will have to get back to you on that question, Your Honor.” And then, they never did. I have seen this kind of thing a few times before. We all try to avoid it. But it is even worse to make up a false story, or even to present an unverified story to the judge. Better to say nothing and get back to the judge with accurate information.

Discovery Order of July 26, 2018

Here is a quote from Judge Gilbert’s Order so you can read for yourself the many questions the moving party left unanswered (detailed citations to record removed; graphics added):

Agri Stats represents that the estimated cost to run the custodial searches EUCPs propose and to review and produce the ESI is approximately $1.2 to $1.7 million. This estimated cost, however, is not itemized nor broken down for the Court to understand how it was calculated. For example, is it $1.2 to $1.7 million to review all the custodial documents from 2007 through 2016? Or does this estimate isolate only the pre-October 2012 custodial searches that Agri Stats does not want to have to redo, in its words? More importantly, Agri Stats also admits that this estimate is based on EUCPs’ original proposed list of search terms. But EUCPs represent (and Agri Stats does not disagree) that during their apparently ongoing discussions, EUCPs have proposed to relieve Agri Stats of the obligation to produce various categories of documents and data, and to revise the search terms to be applied to data that is subject to search. Agri Stats does not appear to have provided a revised cost estimate since EUCPs agreed to exclude certain categories of documents and information and revised their search terms. Rather, Agri Stats takes the position that custodial searches before October 3, 2012 are not proportional to the needs of the case — full stop — so it apparently has not fully analyzed the cost impact of EUCPs’ revised search terms or narrowed document and data categories.

The Court wonders what the cost estimate is now after EUCPs have proposed to narrow the scope of what they are asking Agri Stats to do. (emphasis added) EUCPs say they already have agreed, or are working towards agreement, that 2.5 million documents might be excluded from Agri Stats’s review. That leaves approximately 520,000 documents that remain to be reviewed. In addition, EUCPs say they have provided to Agri Stats revised search terms, but Agri Stats has not responded. Agri Stats says nothing about this in its reply memorandum.

EUCPs contend that Agri Stats’s claims of burden and cost are vastly overstated. The Court tends to agree with EUCPs on this record. It is not clear what it would cost in either time or money to review and produce the custodial ESI now being sought by EUCPs for the entire discovery period set forth in the ESI Protocol or even for the pre-October 3, 2012 period. It seems that Agri Stats itself also does not know for sure what it would have to do and how much it would cost because the parties have not finished that discussion. Because EUCPs say they are continuing to work with Agri Stats to reduce what it must do to comply with their discovery requests, the incremental burden on what Agri Stats now is being asked to do is not clear.

For all these reasons, Agri Stats falls woefully short of satisfying its obligation to show that the information [*10] EUCPs are seeking is not reasonably accessible because of undue burden or cost.

Estimations for Fun and Profit

In order to obtain a protective order you need to estimate the costs that will likely be involved in the discovery from which you seek protection. Simple. Moreover, it obviously has to be a reasonable estimate, a good faith estimate, supported by the facts. The Broiler Chicken defendant, Agri Stats, came up with an estimate. They got that part right. But then they stopped. You never do that. You do not just throw up a number and hope for the best. You have to explain how it was derived. Blushing at the size of the number is not a reasonable explanation, however honest the reaction may be.

Be ready to explain how you came up with the cost estimate, to break down the total into its component parts and allow the “Court to understand how it was calculated.” Agri Stats did not do that. Instead, they just asserted a cost estimate of $1.2 to $1.7 million. So of course Agri Stats’ motion for protective order was denied. The judge had no choice because no evidence to support the motion was presented, neither factual nor expert evidence. There was no need for Judge Gilbert to go into the secondary questions of whether expert testimony was also needed and whether it should be under Rule 702. He got nothing, remember? No explanation for the $1.7 Million.
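To see what a responsive answer might have looked like, here is a minimal sketch with purely hypothetical numbers. The 2.5 million excluded and 520,000 remaining document counts come from the order quoted above; the review speed and billing rate are my own assumptions, not anything in the Broiler Chicken record:

```python
# Illustrative itemization of a document review cost estimate.
# Document counts are from the order; speed and rate are assumptions.

docs_per_hour = 150     # assumed effective speed after deduplication, threading
reviewer_rate = 65.0    # assumed blended $/hour for the review team

def review_cost(doc_count: int) -> float:
    hours = doc_count / docs_per_hour
    return hours * reviewer_rate

original_scope = 3_020_000   # 2.5M since excluded plus the 520K remaining
narrowed_scope = 520_000     # documents left after EUCPs narrowed their requests

for label, n in [("Original scope", original_scope), ("Narrowed scope", narrowed_scope)]:
    print(f"{label}: {n:,} docs = ${review_cost(n):,.0f}")
```

Under these made-up assumptions the original scope lands at about $1.3 million, inside the range counsel quoted, while the narrowed scope costs less than a fifth of that. An itemization like this, even a rough one, would have answered the Court’s question and shown why a revised estimate mattered.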

The lesson of the latest discovery order in Broiler Chicken is pretty simple. In re Broiler Chicken Antitrust Litig., 2018 WL 3586183 (N.D. Ill. 7/26/18). Get a real cost estimate from an expert. The expert needs to know and understand document review, search, and the costs of review. They need to know how to make reasonable search and retrieval efforts. They also need to know how to make reliable estimates. You may need two experts for this, as not all have expertise in both fields, but they are readily available. Many can even talk pretty well too, but not all! Seriously, everybody knows we are the most fun and interesting lawyer subgroup.

The last thing you should do is skimp on an expert and just pull a number out of your hat (or your vendor’s hat) and hope for the best.

This is federal court, not a political rally. You do not make bald assertions and leave the court wondering. Facts matter. Back-of-the-envelope guesses are not sufficient, especially in a big case like Broiler Chicken. Neither are guesstimates by people who do not know what they are doing. Make disclosure and cooperate with the requesting party to reach agreement. Do not just rush to the courthouse hoping to dazzle with smoke and mirrors. Bring in the experts. They may not dazzle, but they can get you beyond the magic mirrors.

Case Law Background

Judge Paul S. Grewal, who is now Deputy G.C. of Facebook, said in Vasudevan, quoting The Sedona Conference: “There is no magic to the science of search and retrieval: only mathematics, linguistics, and hard work.” Vasudevan Software, Inc. v. Microstrategy Inc., No. 11-cv-06637-RS-PSG, 2012 U.S. Dist. LEXIS 163654 (N.D. Cal. Nov. 15, 2012) (quoting The Sedona Conference, Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery, 8 Sedona Conf. J. 189, 208 (2007)). There is also no magic to the art of estimation, no magic to calculating the likely range of cost to search and retrieve the documents requested. Judge Grewal refused to make any decision in Vasudevan without expert assistance, recognizing that this area is “fraught with traps for the unwary” and should not be decided on mere arguments of counsel.

Judge Grewal did not address the procedural issue of whether Rule 702 should govern. But he did cite to Judge Facciola’s case on the subject, United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008). There Judge Facciola first raised the discovery expert evidence issue. He not only opined that experts should be used, but that the parties should follow the formalities of Evidence Rule 702. That governs things such as whether you should qualify and swear in an expert, and otherwise controls their testimony. I discussed this somewhat in my earlier article this year, Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use.

Judge Facciola in O’Keefe held that document review issues require expert input and that this input should be provided with all of the protections provided by Evidence Rule 702.

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence. Accordingly, if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.

Conclusion

In the Broiler Chicken Antitrust Order of July 26, 2018, a motion for protective order was denied because of inadequate evidence of burden. All the responding party did was quote a price range, a number presumably provided by an expert, but there was no explanation. More evidence was needed, both expert and fact. I agree that generally document review cost estimation requires the opinions of experts. The experts need to be proficient in two fields. They need to know and understand the science of document search and retrieval and the likely costs for these services for a particular set of data.

Although all of the formalities and expense of compliance with Evidence Rule 702 may be needed in some cases, they are probably not necessary in most. Just bring your expert to the attorney conference or hearing. Yes, two experts may well disagree on some things, probably will, but the areas of agreement are usually far more important. That in turn makes compromise and negotiation far easier. Better to leave the technical details to the experts to sort out. That follows the Rule 1 prime directive of “just, speedy and inexpensive.” Keep the trial lawyers out of it. They should instead focus and argue on what the documents mean.



Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use

June 24, 2018

John Facciola was one of the first e-discovery expert judges to consider the adequacy of a producing party’s keyword search efforts, in United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008). He first observed that keyword search and other computer-assisted legal search techniques require special expertise to do properly. Everyone agrees with that. He then reached an interesting, but still somewhat controversial conclusion: because he lacked such special legal search expertise, and knew full well that most of the lawyers appearing before him did too, he could not properly analyze and compel the use of specific keywords without the help of expert testimony. To help make his point he paraphrased Alexander Pope‘s famous line from An Essay on Criticism: “For fools rush in where angels fear to tread.”

Here are the well-known words of Judge Facciola in O’Keefe (emphasis added):

As noted above, defendants protest the search terms the government used.[6] Whether search terms or “keywords” will yield the information sought is a complicated question involving the interplay, at least, of the sciences of computer technology, statistics and linguistics. See George L. Paul & Jason R. Baron, Information Inflation: Can the Legal System Adapt?, 13 Rich. J.L. & Tech. 10 (2007). Indeed, a special project team of the Working Group on Electronic Discovery of the Sedona Conference is studying that subject and their work indicates how difficult this question is. See The Sedona Conference, Best Practices Commentary on the Use of Search and Information Retrieval, 8 The Sedona Conf. J. 189 (2008).

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence. Accordingly, if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.

Many courts have followed O’Keefe, even though it is a criminal case, and declined to step in and order specific searches without expert input. See, e.g., the well-known patent case, Vasudevan Software, Inc. v. Microstrategy Inc., No. 11-cv-06637-RS-PSG, 2012 U.S. Dist. LEXIS 163654 (N.D. Cal. Nov. 15, 2012). The opinion was by U.S. Magistrate Judge Paul S. Grewal, who later became the V.P. and Deputy General Counsel of Facebook. Judge Grewal wrote:

But as this case makes clear, making those determinations often is no easy task. “There is no magic to the science of search and retrieval: only mathematics, linguistics, and hard work.”[9]

Unfortunately, despite being a topic fraught with traps for the unwary, the parties invite the court to enter this morass of search terms and discovery requests with little more than their arguments.

More recently, e-discovery expert Judge James Francis addressed this issue in Greater New York Taxi Association v. City of New York, No. 13 Civ. 3089 (VSB) (JCF) (S.D.N.Y. Sept. 11, 2017) and held:

The defendants have not provided the necessary expert opinions for me to assess their motion to compel search terms. The application is therefore denied. This leaves the defendants with three options: “They can cooperate [with the plaintiffs] (along with their technical consultants) and attempt to agree on an appropriate set of search criteria. They can refile a motion to compel, supported by expert testimony. Or, they can request the appointment of a neutral consultant who will design a search strategy.”[10] Assured Guaranty Municipal Corp. v. UBS Real Estate Securities Inc., No. 12 Civ. 1579, 2012 WL 5927379, at *4 (S.D.N.Y. Nov. 21, 2012).

I am inclined to agree with Judge Francis. I know from daily experience that legal search, even keyword search, can be very tricky and depends on many factors, including the documents searched. I have spent over a decade working hard to develop expertise in this area. I know that the appropriate searches to run depend on experience and on scientific, technical knowledge of information retrieval and statistics. They also depend on tests of proposed keywords; on sampling and document reviews; on getting your hands dirty in the digital mud of the actual ESI. It cannot be done effectively in the blind, no matter what your level of expertise. It is an iterative process of trial and error, false positives and negatives alike.

Enter a Judge Braver Than Angels

Recently appointed U.S. Magistrate Judge Laura Fashing in Albuquerque, New Mexico, heard a case involving a dispute over keywords. United States v. New Mexico State University, No. 1:16-cv-00911-JAP-LF, 2017 WL 4386358 (D.N.M. Sept. 29, 2017). It looks like the attorneys in the case neglected to inform Judge Fashing of United States v. O’Keefe. It is a landmark case in this field, yet it was not cited in Judge Fashing’s order. More importantly, Judge Fashing did not take the advice of O’Keefe, nor of the many cases that follow it. Unlike Judge Facciola and his angels, she told the parties what keywords to use, even without input from experts.

The New Mexico State University opinion did, however, cite to two other landmark cases in legal search, William A. Gross Const. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. 134, 135 (S.D.N.Y. 2009) by Judge Andrew Peck and Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 260, 262 (D. Md. May 29, 2008) by Judge Paul Grimm. Judge Fashing held in New Mexico State University:

This case presents the question of how parties should search and produce electronically stored information (“ESI”) in response to discovery requests. “[T]he best solution in the entire area of electronic discovery is cooperation among counsel.” William A. Gross Const. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. 134, 135 (S.D.N.Y. 2009). Cooperation prevents lawyers designing keyword searches “in the dark, by the seat of the pants,” without adequate discussion with each other to determine which words would yield the most responsive results. Id.

While keyword searches have long been recognized as appropriate and helpful for ESI search and retrieval, there are well-known limitations and risks associated with them, and proper selection and implementation obviously involves technical, if not scientific knowledge.

* * *

Selection of the appropriate search and information retrieval technique requires careful advance planning by persons qualified to design effective search methodology. The implementation of the methodology selected should be tested for quality assurance; and the party selecting the methodology must be prepared to explain the rationale for the method chosen to the court, demonstrate that it is appropriate for the task, and show that it was properly implemented.

Id. (quoting Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 260, 262 (D. Md. May 29, 2008)).

Although NMSU has performed several searches and produced thousands of documents, counsel for NMSU did not adequately confer with the United States before performing the searches, which resulted in searches that were inadequate to reveal all responsive documents. As the government points out, “NMSU alone is responsible for its illogical choices in constructing searches.” Doc. 117-1 at 8. Consequently, which searches will be conducted is left to the Court.

Judges Francis, Peck and Facciola

Judge Laura Fashing had me in the quote above until the final sentence. Up till then she had been wisely following the four great judges in this area: Facciola, Peck, Francis and Grimm. Then in the next several paragraphs she rushes in to specify what search terms should be used for what categories of ESI requested. Why should the Court go ahead and do that without expert advice? Why not wait? Especially since Judge Fashing starts her opinion by recognizing the difficulty of the task, that “there are well-known limitations and risks associated with them [keyword searches], and proper selection and implementation obviously involves technical, if not scientific knowledge.” Knowing that, why was she fearless? Why did she ignore Judge Facciola’s advice? Why did she make multiple detailed, technical decisions on legal search, including specific keywords to be used, without the benefit of expert testimony? Was that foolish, as several judges have suggested, or was she just doing her job by making the decisions that the parties asked her to make?

Judge Fashing recognized that she did not have enough facts to make a decision, much less expert opinions based on technical, scientific knowledge, but she went ahead and ruled anyway.

Although NMSU argues that the search terms proposed by the government will return a greater number of non-responsive documents than responsive documents, this is not a particular and specific demonstration of fact, but is, instead, a conclusory argument by counsel. See Velasquez, 229 F.R.D. at 200. NMSU’s motion for a protective order with regard to RFP No. 8 is DENIED.

NMSU will perform a search of the email addresses of all individuals involved in salary-setting for Ms. Harkins and her comparators, including Kathy Agnew and Dorothy Anderson, to include the search terms “Meaghan,” “Harkins,” “Gregory,” or “Fister” for the time period of 2007-2012. If this search results in voluminous documents that are non-responsive, NMSU may further search the results by including terms such as “cross-country,” “track,” “coach,” “salary,” “pay,” “contract,” or “applicants,” or other appropriate terms such as “compensation,” which may reduce the results to those communications most likely relevant to this case, and which would not encompass every “Meaghan” or “Gregory” in the system. However, the Court will require NMSU to work with the USA to design an appropriate search if it seeks to narrow the search beyond the four search terms requested by the United States.

Judge Fashing goes on to make several specific orders on what to do to make a reasonable effort to find relevant evidence:

NMSU will conduct searches of the OIE databases, OIE employee’s email accounts, and the email accounts of all head coaches, sport administrators, HR liaisons working within the Athletics Department, assistant or associate Athletic Directors, and/or Athletic Directors employed by NMSU between 2007 and the present. The USA suggests that NMSU conduct a search for terms that are functionally equivalent to a search for (pay or compensate! or salary) and (discriminat! or fair! or unfair!). Doc. 117-1 at 13. If NMSU cannot search with “Boolean” connectors as suggested, it must search for the terms “pay” or “compensate” or “salary” and “discriminate” or “fair” or “unfair” and the various derivatives of these terms (for example the search would include “compensate” and “compensation”). The parties are to work together to determine what terms will be used to search these databases and email accounts.
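As a side note for readers unfamiliar with the notation, the exclamation point in the quoted search string is a truncation operator: “compensate!” is meant to pick up derivatives like “compensation,” as the order itself explains. Here is a minimal sketch of that expansion logic using ordinary regular expressions. The stems and test sentences are my own illustrations, not anything from the case record, and real review platforms have their own syntax for this:

```python
import re

# Rough regex translation of the order's proposed Boolean search:
# (pay OR compensate! OR salary) AND (discriminat! OR fair! OR unfair!)
# Stems are shortened (compensat, salar) so that derivatives like
# "compensation" and "salaries" match, per the order's own examples.

group_a = re.compile(r"\b(pay|compensat\w+|salar\w+)\b", re.IGNORECASE)
group_b = re.compile(r"\b(discriminat\w+|fair\w*|unfair\w*)\b", re.IGNORECASE)

def is_hit(text: str) -> bool:
    # A document is a hit only if both groups match (the AND connector).
    return bool(group_a.search(text)) and bool(group_b.search(text))

print(is_hit("Her compensation was unfairly set below her comparators."))  # True
print(is_hit("Salary schedule attached for review."))                      # False
```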

Judge Laura Fashing hangs her hat on cooperation, but not on experts. She concludes her order with the following admonishment:

The parties are reminded that:

Electronic discovery requires cooperation between opposing counsel and transparency in all aspects of preservation and production of ESI. Moreover, where counsel are using keyword searches for retrieval of ESI, they at a minimum must carefully craft the appropriate keywords, with input from the ESI’s custodians as to the words and abbreviations they use, and the proposed methodology must be quality control tested to assure accuracy in retrieval and elimination of “false positives.” It is time that the Bar—even those lawyers who did not come of age in the computer era—understand this.

William A. Gross Const. Assocs., Inc., 256 F.R.D. at 136.

Conclusion

Of course I agree with Judge Fashing’s concluding reminder to the parties. Cooperation is key, but so is expertise. There is a good reason for the fear felt by Facciola’s angels. They wisely knew that they lacked the necessary technical, scientific knowledge for the proper selection and implementation of keyword searches. I only wish that Judge Fashing’s order had reminded the parties of this need for experts too. It would have made her job much easier and also helped the parties. Sometimes the wisest thing to do is nothing, at least not until you have more information.

There is widespread agreement among legal search experts on such simplistic methods as keyword search. Their input would have helped here. The same holds true for advanced search methods, such as active machine learning (predictive coding), at least among the elite. See TARcourse.com. There is still some disagreement on TAR methods, especially when you include the many pseudo experts out there. But even they can usually agree on keyword search methods.

I urge judges and litigants faced with a situation like the one Judge Fashing had to deal with in New Mexico State University to consider the three choices set out by Judge Francis in Greater New York Taxi Association:

  1. Cooperation with the other side and their technical consultants to attempt to agree on an appropriate set of search criteria.
  2. Motions supported by expert testimony and facts regarding the search.
  3. Appointment of a neutral consultant who will design a search strategy.

Going it alone with legal search in a complex case is a fool’s errand. Bring in an expert. Spend a little to save a lot. It is not only the smart thing to do, it is also required by ethics. Rule 1.1: Competence, Model Rules of Professional Conduct. Comment two to ABA Model Rule 1.1 states that “Competent representation can also be provided through the association of a lawyer of established competence in the field in question.” Yet, in my experience, this is seldom done and is not something that clients are clamoring for. That should change, and quickly, if we are ever to stop wasting so much time and money on simplistic e-discovery arguments. I am again reminded of the great Alexander Pope (1688–1744) and another of his famous lines from An Essay on Criticism: “A little learning is a dangerous thing; Drink deep, or taste not the Pierian spring.”

_______________


After I wrote this blog I did a webinar for ACEDS about this topic. Here is a one-hour talk to add to your personal Pierian spring.


_________



Disproportionate Keyword Search Demands Defeated by Metric Evidence of Burden

June 10, 2018

The defendant in a complex commercial dispute demanded that plaintiff search its ESI for all files that had the names of four construction projects. Am. Mun. Power, Inc. v. Voith Hydro, Inc. (S.D. Ohio, 6/4/18) (copy of full opinion below). These were the four projects underlying the lawsuit. Defense counsel, like many attorneys today, thought that they had magical powers when it comes to finding electronic evidence. They thought that all, or most all, of the ESI with these fairly common project names would be relevant or, at the very least, worth examining for relevance. As it turns out, defense counsel was very wrong: most of the documents with keyword hits were not relevant, and the demand was unreasonable.

The Municipal Power opinion was written by Chief Magistrate Judge Elizabeth A. Preston Deavers of the Southern District of Ohio. She reached this conclusion based on evidence of burden, what we like to call the project metrics. We do not know the total evidence presented, but we do know that Judge Deavers was impressed by the estimate that the privilege review alone would cost the plaintiff between $100,000 and $125,000. I assume that estimate was based on a linear review of all relevant documents. That is very expensive to do right, especially in large, diverse data sets with high privilege and relevance prevalence. Triple and quadruple checks are common and are built into standard protocols.

Judge Deavers ruled against the defense on the four project names keywords request, and granted a protective order for the plaintiff because, in her words:

The burden and expense of applying the search terms of each Project’s name without additional qualifiers outweighs the benefits of this discovery for Voith and is disproportionate to the needs of even this extremely complicated case.

The plaintiff made its own excessive demand upon the defendant to search its ESI using a long list of keywords, including Boolean logic. The plaintiff’s keyword list was much more sophisticated than the defendant’s four-name search demand. The plaintiff’s proposal was rejected by the defendant and the judge for the same proportionality reason. It kind of looks like tit for tat, with excessive demands on both sides. But it is hard to say, because the negotiations were apparently focused on mere guessed keywords, instead of keywords evolved and tested through an iterative process of testing and refinement.

Defense counsel responded to the plaintiff’s keyword demands by presenting their own metrics of burden, including the projected costs of redaction of confidential customer information. These confidentiality concerns can be difficult, especially where you are required to redact. Better to agree upon an alternative procedure where you withhold entire documents and log them with a description. This can be a less expensive alternative to redaction.

When reading the opinion below, note how the Plaintiff’s opposition to the demand to review all ESI with the four project names gave specific examples of types of documents (ESI) that would have the names on them and still have nothing whatsoever to do with the parties’ claims or defenses, the so-called “false positives.” This is a very important exercise that should not be overlooked in any argument. I have seen some pretty terrible precision percentages, sometimes as low as two percent.

Get your hands in the digital mud. Go deep into TAR if you need to. It is where the time warps happen and we bend space and time to attain maximum efficiency. Our goal is to attain: (1) the highest possible review speeds (files per hour), both hybrid and human; (2) the highest precision (the percentage of documents retrieved that are relevant); and (3) the countervailing goal of total recall (the percentage of all relevant documents that are found). The recall goal is typically given the greatest weight, with emphasis on highly relevant documents. The question is how much greater weight to give recall, and that depends on the total facts and circumstances of the doc review project.
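For readers who want the arithmetic spelled out, here is a minimal sketch of how the two percentages are computed from review counts. All numbers are hypothetical, chosen only to illustrate the definitions:

```python
# Illustrative precision and recall arithmetic for a document review.
# All counts are hypothetical, not from any case discussed here.

docs_retrieved = 10_000      # documents returned by the searches and reviewed
relevant_retrieved = 8_000   # of those, the number that proved relevant
relevant_total = 9_500       # estimated relevant docs in the whole collection,
                             # usually projected from a random sample

precision = relevant_retrieved / docs_retrieved  # efficiency of the searches
recall = relevant_retrieved / relevant_total     # completeness of the project

print(f"Precision: {precision:.0%}")  # 80%
print(f"Recall:    {recall:.0%}")     # 84%
```

A two-percent precision project of the kind just mentioned means reviewers wade through forty-nine irrelevant documents for every relevant one, which is where the costs explode.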

Keywords are the Model T of legal search, but we all start there. It is still a very important skill for everyone to learn and then move on to other techniques, especially to active machine learning.

In some simple projects keyword search can still be effective, especially if the user is highly skilled and the data is simple. It also helps if the data is well known to the searcher from earlier projects. See TAR Course: 8th Class (Keyword and Linear Review).

________________________

Below is the unedited full opinion (very short). We look forward to more good opinions by Judge Deavers on e-discovery.

__________

UNITED STATES DISTRICT COURT FOR THE SOUTHERN DISTRICT OF OHIO, EASTERN DIVISION. No. 2:17-cv-708

June 4, 2018

AMERICAN MUNICIPAL POWER, INC., Plaintiff, vs. VOITH HYDRO, INC., Defendant.

ELIZABETH A. PRESTON DEAVERS, UNITED STATES MAGISTRATE JUDGE. Judge Algenon L. Marbley.

MEMORANDUM OF DECISION

This matter came before the Court for a discovery conference on May 24, 2018. Counsel for both parties appeared and participated in the conference.

The parties provided extensive letter briefing regarding certain discovery disputes relating to the production of Electronically Stored Information (“ESI”) and other documents. Specifically, the parties’ dispute centers around two ESI-related issues: (1) the propriety of a single-word search by Project name proposed by Defendant Voith Hydro, Inc. (“Voith”) which it seeks to have applied to American Municipal Power, Inc.’s (“AMP”) ESI; 1 and (2) the propriety of AMP’s request that Voith run crafted search terms which AMP has proposed that are not limited to the Project’s name. 2 After careful consideration of the parties’ letter briefing and their arguments during the discovery conference, the Court concluded as follows:

  • Voith’s single-word Project name search terms are over-inclusive. AMP’s position as the owner of the power-plant Projects puts it in a different situation than Voith in terms of how many ESI “hits” searching by Project name would return. As owner, AMP has stored millions of documents for more than a decade that contain the name of the Projects which refer to all kinds of matters unrelated to this case. Searching by Project name, therefore, would yield a significant amount of discovery that has no bearing on the construction of the power plants or Voith’s involvement in it, including but not limited to documents related to real property acquisitions, licensing, employee benefits, facility tours, parking lot signage, etc. While searching by the individual Project’s name would yield extensive information related to the name of the Project, it would not necessarily bear on or be relevant to the construction of the four hydroelectric power plants, which are the subject of this litigation. AMP has demonstrated that using a single-word search by Project name would significantly increase the cost of discovery in this case, including a privilege review that would add $100,000 – $125,000 to its cost of production. The burden and expense of applying the search terms of each Project’s name without additional qualifiers outweighs the benefits of this discovery for Voith and is disproportionate to the needs of even this extremely complicated case.
  • AMP’s request that Voith search its ESI collection without reference to the Project names by using as search terms various employee and contractor names together with a list of common construction terms and the names of hydroelectric parts is overly inclusive and would yield confidential communications about other projects Voith performed for other customers. Voith employees work on and communicate regarding many customers at any one time. AMP’s proposal to limit search terms to certain date ranges does not remedy the issue because those employees still would have sent and received communications about other projects during the times in which they were engaged in work related to AMP’s Projects. Similarly, AMP’s proposal to exclude the names of other customers’ project names with “AND NOT” phrases is unworkable because Voith cannot reasonably identify all the projects from around the world with which its employees were involved during the decade they were engaged in work for AMP on the Projects. Voith has demonstrated that using the terms proposed by AMP without connecting them to the names of the Projects would return thousands of documents that are not related to this litigation. The burden on Voith of running AMP’s proposed search terms connected to the names of individual employees and general construction terms outweighs the possibility that the searches would generate hits that are relevant to this case. Moreover, running the searches AMP proposes would impose on Voith the substantial and expensive burden of manually reviewing the ESI page by page to ensure that it does not disclose confidential and sensitive information of other customers. The request is therefore overly burdensome and not proportional to the needs of the case.

1 Voith seeks to have AMP use the names of the four hydroelectric projects at issue in this case (Cannelton, Smithland, Willow and Meldahl) as standalone search terms without qualifiers across all of AMP’s ESI. AMP proposed and has begun collecting from searches with numerous multiple-word search terms using Boolean connectors. AMP did not include the name of each Project as a standalone term.

2 AMP contends that if Voith connects all its searches together with the Project name, it will not capture relevant internal-Voith ESI relating to the construction claims and defenses in the case. AMP asserts Voith may have some internal documents that relate to the construction projects that do not refer to the Project by name, and included three (3) emails with these criteria it had discovered as exemplars. AMP proposes that Voith search its ESI collection without reference to the Project names by using as search terms various employee and contractor names together with a list of generic construction terms and the names of hydroelectric parts.

IT IS SO ORDERED.

DATED: June 4, 2018

/s/ Elizabeth A. Preston Deavers

ELIZABETH A. PRESTON DEAVERS

UNITED STATES MAGISTRATE JUDGE



Document Review and Proportionality – Part Two

March 28, 2018

This is a continuation of a blog post that I started last week. I suggest you read Part One before this one.

Simplified Six-Step Review Plan for Small and Medium-Sized Cases, or Otherwise Where Predictive Coding is Not Used

Here is the workflow for the simplified six-step plan. The first three steps repeat until you have a viable plan where the cost estimate is proportional under Rule 26(b)(1).

Step One: Multimodal Search

The document review begins with Multimodal Search of the ESI. Multimodal means that all modes of search are used to try to find relevant documents. Multimodal search uses a variety of techniques in an evolving, iterated process. It is never limited to a single search technique, such as keyword. All methods are used as deemed appropriate based upon the data to be reviewed and the software tools available. The basic types of search are shown in the search pyramid.

In Step One we use a multimodal approach, but we typically begin with keyword and concept searches. Also, in most projects we will run similarity searches of all kinds to make the review more complete and broaden the reach of the keyword and concept searches. Sometimes we may even use a linear search, the expert manual review at the base of the search pyramid. For instance, it might be helpful to see all communications that a key witness had on a certain day. The two-word, stand-alone “call me” email, when seen in context, can sometimes be invaluable to proving your case.

I do not want to go into too much detail on the types of searches we do in this first step because each vendor’s document review software has different types of searches built in. Still, the basic types of search shown in the pyramid can be found in most software, although AI, the active machine learning at the top, is still found only in the best.

History of Multimodal Search

Professor Marcia Bates

Multimodal search, wherein a variety of techniques are used in an evolving, iterated process, is new to the legal profession, but not to Information Science. That is the field of scientific study which is, among many other things, concerned with computer search of large volumes of data. Although the e-Discovery Team’s promotion of multimodal search techniques to find evidence only goes back about ten years, multimodal is a well-established search technique in Information Science. The pioneer professor who first popularized this search method was Marcia J. Bates, in her article, The Design of Browsing and Berrypicking Techniques for the Online Search Interface, 13 Online Info. Rev. 407, 409–11, 414, 418, 421–22 (1989). Professor Bates of UCLA did not use the term multimodal (that is my own small innovation); instead she coined the word “berrypicking” to describe the use of all types of search to find relevant texts. I prefer the term “multimodal” to “berrypicking,” but they are basically the same techniques.

In 2011 Marcia Bates explained on Quora her classic 1989 article and work on berrypicking:

An important thing we learned early on is that successful searching requires what I called “berrypicking.” . . .

Berrypicking involves 1) searching many different places/sources, 2) using different search techniques in different places, and 3) changing your search goal as you go along and learn things along the way. . . .

This may seem fairly obvious when stated this way, but, in fact, many searchers erroneously think they will find everything they want in just one place, and second, many information systems have been designed to permit only one kind of searching, and inhibit the searcher from using the more effective berrypicking technique.

Marcia J. Bates, Online Search and Berrypicking, Quora (Dec. 21, 2011). Professor Bates also introduced the related concept of an evolving search. In 1989 this was a radical idea in information science because it departed from the established orthodox assumption that an information need (relevance) remains the same, unchanged, throughout a search, no matter what the user might learn from the documents in the preliminary retrieved set. The Design of Browsing and Berrypicking Techniques for the Online Search Interface. Professor Bates dismissed this assumption and wrote in her 1989 article:

In real-life searches in manual sources, end users may begin with just one feature of a broader topic, or just one relevant reference, and move through a variety of sources.  Each new piece of information they encounter gives them new ideas and directions to follow and, consequently, a new conception of the query.  At each stage they are not just modifying the search terms used in order to get a better match for a single query.  Rather the query itself (as well as the search terms used) is continually shifting, in part or whole.   This type of search is here called an evolving search.

Furthermore, at each stage, with each different conception of the query, the user may identify useful information and references. In other words, the query is satisfied not by a single final retrieved set, but by a series of selections of individual references and bits of information at each stage of the ever-modifying search. A bit-at-a-time retrieval of this sort is here called berrypicking. This term is used by analogy to picking huckleberries or blueberries in the forest. The berries are scattered on the bushes; they do not come in bunches. One must pick them one at a time. One could do berrypicking of information without  the search need itself changing (evolving), but in this article the attention is given to searches that combine both of these features.

I independently noticed evolving search as a routine phenomenon in legal search and only recently found Professor Bates’ prior descriptions. I have written about this often in the field of legal search (although never previously crediting Professor Bates) under the names “concept drift” or “evolving relevance.” See, e.g., Concept Drift and Consistency: Two Keys To Document Review Quality – Part Two (e-Discovery Team, 1/24/16). Also see Voorhees, Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, 36 Info. Processing & Mgmt 697 (2000) at page 714.

SIDE NOTE: The somewhat related term query drift in information science refers to a different phenomenon in machine learning. In query drift, the concept of document relevance unintentionally changes from the use of indiscriminate pseudo-relevance feedback. Büttcher, Clarke & Cormack, Information Retrieval: Implementing and Evaluating Search Engines (MIT Press 2010) at pg. 277. This can lead to severe negative relevance feedback loops where the AI is trained incorrectly. Not good. If that happens a lot of other bad things can and usually do happen. It must be avoided.

Yes. That means that skilled humans must still play a key role in all aspects of the delivery and production of goods and services, lawyers too.

UCLA Professor Bates first wrote about concept shift when using early computer-assisted search in the late 1980s. She found that users might execute a query, skim some of the resulting documents, and then learn things which slightly change their information need. They then refine their query, not only in order to better express their information need, but also because the information need itself has now changed. This was a new concept at the time because under the Classical Model of Information Retrieval an information need is single and unchanging. Professor Bates illustrated the old Classical Model with the following diagram.

The Classical Model was misguided. All search projects, including the legal search for evidence, are an evolving process where the understanding of the information need progresses and improves as the information is reviewed. See the diagram below for the multimodal, berrypicking-type approach. Note the importance of human thinking to this approach.

See Cognitive models of information retrieval (Wikipedia). As this Wikipedia article explains:

Bates argues that searches are evolving and occur bit by bit. That is to say, a person constantly changes his or her search terms in response to the results returned from the information retrieval system. Thus, a simple linear model does not capture the nature of information retrieval because the very act of searching causes feedback which causes the user to modify his or her cognitive model of the information being searched for.

Multimodal search assumes that the information need evolves over the course of a document review. It never consists of just running one search and then reviewing all of the documents found by that search. That linear approach was used in version 1.0 of predictive coding, and is still used by most lawyers today. The dominant model in law today is linear, wherein a negotiated list of keywords is used to run one search. I called this failed method “Go Fish” and a few judges, like Judge Peck, picked up on that name. Losey, R., Adventures in Electronic Discovery (West 2011); Child’s Game of ‘Go Fish’ is a Poor Model for e-Discovery Search; Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 190-91, 2012 WL 607412, at *10 (S.D.N.Y. Feb. 24, 2012) (J. Peck).

The popular, but ineffective, Go Fish approach is like the Classical Information Retrieval Model in that only a single list of keywords is used as the query. The keywords are not refined over time as the documents are reviewed. This is a mono-modal process. It is the opposite of our evolving multimodal process, Step One in our six-step plan. In the first step we run many, many searches, review some of the results of each search, some of the documents, and then change the searches accordingly. A toy sketch of the difference appears below.
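Here is that toy sketch in code. The four pretend “documents,” the seed term, and the crude learning rule (harvesting long words from hits) are all hypothetical; in a real project the learning step is a human reading samples and thinking, not string splitting:

```python
# Toy contrast between the one-shot "Go Fish" model and an evolving,
# berrypicking-style multimodal search over a pretend four-document corpus.

corpus = [
    "call me about the pricing schedule",
    "pricing committee agenda for broilers",
    "broilers contract awarded to a new distributor",
    "company picnic parking information",
]

def run_search(docs, term):
    return [d for d in docs if term in d]

def go_fish(docs, negotiated_term):
    # Classical model: one fixed query, then review everything returned.
    return set(run_search(docs, negotiated_term))

def multimodal(docs, seed_terms):
    # Evolving model: each round of reading suggests new candidate terms.
    found, queue, seen = set(), list(seed_terms), set()
    while queue:
        term = queue.pop()
        seen.add(term)
        for hit in run_search(docs, term):
            found.add(hit)
            for word in hit.split():  # crude stand-in for human learning
                if len(word) > 6 and word not in seen and word not in queue:
                    queue.append(word)
    return found

print(len(go_fish(corpus, "pricing")))       # 2 documents found
print(len(multimodal(corpus, ["pricing"])))  # 3: the term "broilers" was learned
```

The point of the toy is the queue: every document read can add new search terms, so the final production is the product of many evolving searches, not one negotiated list.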

Step Two: Tests, Sample

Each search run is sampled by quick reviews and its effectiveness evaluated, tested. For instance, did a search of what you expected would be an unusual word turn up far more hits than anticipated? Did the keyword show up in all kinds of documents that had nothing to do with the case? For example, a couple of minutes of review might show that what you thought would be a carefully and rarely used word, Privileged, was in fact part of the standard signature line of one custodian. All his emails had the keyword Privileged on them. The keyword in these circumstances may be a surprise failure, at least as to that one custodian. These kinds of unexpected language usages and surprise failures are commonplace, especially with neophyte lawyers.

Sampling here does not mean random sampling, but rather judgmental sampling, just picking a few representative hit documents and reviewing them. Were a fair number of berries found in that new search bush, or not? In our example, assume that your sample review of the documents with “Privileged” showed that the word was only part of one person’s standard signature on every one of their emails. When a new search is run wherein this custodian is excluded, the search results may now test favorably. You may devise other searches that exclude or limit the keyword “Privileged” whenever it is found in a signature.

There are many computer search tools used in a multimodal search method, but the most important tool of all is not algorithmic, but human. The most important search tool is the human ability to think the whole time you are looking for tasty berries. (The all-important “T” in Professor Bates’ diagram above.) This means the ability to improvise, to spontaneously respond and react to unexpected circumstances. This means ad hoc searches that change with time and experience. It is not a linear, set-it-and-forget-it, keyword cull-in and read-all-documents approach. This was true in the early days of automated search with Professor Bates’ berrypicking work in the late 1980s, and is still true today. Indeed, since the complexity of ESI has expanded a million times since then, our thinking, improvisation and teamwork are now more important than ever.

The goal in Step Two is to identify effective searches. Typically, that means searches where most of the results are relevant, greater than 50%. Ideally we would like to see roughly 80% relevancy. Alternatively, search hits that are very few in number, and thus inexpensive to review in full, may be accepted. For instance, you may try a search that only returns ten documents, which you could review in just a minute. You may find just one relevant, but it could be important. The acceptable range of the number of documents to review in a Bottom Line Driven Review will always take cost into consideration. That is where Step Three comes in, Estimation. What will it cost to review the documents found?
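The accept-or-reject logic of this step can be summarized in a few lines. Here is a minimal sketch; the thresholds and the signature-line example numbers are my own hypothetical illustrations of the text above:

```python
# Sketch of the Step Two triage rule: keep a search if a quick judgmental
# sample shows it is mostly relevant, or if its hit count is so small that
# reviewing everything it returns is trivially cheap anyway.

def keep_search(hit_count: int, sample_relevant: int, sample_size: int,
                min_precision: float = 0.5, cheap_hit_count: int = 25) -> bool:
    if hit_count <= cheap_hit_count:   # tiny result set: just read it all
        return True
    if sample_size == 0:               # never keep an untested search
        return False
    return (sample_relevant / sample_size) >= min_precision

# The "Privileged" keyword before excluding the custodian whose signature
# block contained the word: a twenty-document sample is nearly all noise.
print(keep_search(hit_count=40_000, sample_relevant=1, sample_size=20))  # False
# The same keyword after excluding that custodian tests well.
print(keep_search(hit_count=900, sample_relevant=15, sample_size=20))    # True
```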

Step Three: Estimates

It is not enough to come up with effective searches, which is the goal of Steps One and Two; the cost to review all of the documents returned by these searches must also be considered. It may still cost way too much to review the documents when considering the proportionality factors under 26(b)(1) as discussed in Part One of this article. The plan of review must always take the cost of review into consideration.

In Part One we described an estimation method that I like to use to calculate the cost of an ESI review. When the projected cost, the estimate, is proportional in your judgment (and, where appropriate, in the judge’s judgment), then you conclude your iterative process of refining searches. You can then move on to Step Four, preparing your discovery plan and making disclosures about that plan.
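The core projection is simple enough to show in a few lines. A minimal sketch, using made-up front-end figures and the same hypothetical 12,472-document count that appears in Step Four below:

```python
# Sketch of a bottom-line cost projection: use the observed cost of the
# front-end search-and-sample work to project the final review.
# All figures here are hypothetical.

docs_remaining = 12_472         # docs culled in by the refined searches
docs_reviewed_so_far = 1_500    # reviewed while testing searches (Steps 1-2)
dollars_spent_so_far = 2_400.0  # actual spend on that front-end review

cost_per_doc = dollars_spent_so_far / docs_reviewed_so_far
projected_total = docs_remaining * cost_per_doc

print(f"Observed cost per document: ${cost_per_doc:.2f}")      # $1.60
print(f"Projected final review:     ${projected_total:,.0f}")  # $19,955
```

Because the unit cost comes from work already performed on the same data, the projection has the kind of factual support that the naked number in Broiler Chicken lacked.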

Step Four: Plan, Disclosures

Once you have created effective searches that produce an affordable number of documents to review for production, you articulate the plan and make some disclosures about it. The extent of transparency in this step can vary considerably, depending on the circumstances and people involved. Long talkers like me can go on about legal search for many hours, far past the boredom tolerance level of most non-specialists. You might be fascinated by the various searches I ran to come up with the, say, 12,472 documents for final review, but most opposing counsel do not care beyond making sure that certain pet keywords they may like were used and tested. You should be prepared to reveal that kind of work product for purposes of dispute avoidance and to build good will. Typically they want you to review more documents, no matter what you say. They usually save their arguments for the bottom line, the costs. They usually argue for greater expense based on the first five of the six proportionality criteria in Rule 26(b)(1):

  1. the importance of the issues at stake in this action;
  2. the amount in controversy;
  3. the parties’ relative access to relevant information;
  4. the parties’ resources;
  5. the importance of the discovery in resolving the issues; and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.

Still, although early agreement on scope of review is often impossible, as the requesting party always wants you to spend more, you can usually move past this initial disagreement by agreeing to phased discovery. The requesting party can reserve its objections to your plan, but still agree it is adequate for phase one. Usually we find that after the phase one production is completed, the requesting party’s demands for more are either eliminated or considerably tempered. It may well then be possible to reach a reasonable final agreement.

Step Five: Final Review

Here is where you start to carry out your discovery plan. In this stage you finish looking at the documents and coding them as Responsive (relevant), Irrelevant (not responsive), Privileged (relevant but privileged, and so logged and withheld) or Confidential (all levels, from mere notations and legends, to redactions, to withhold and log). A fifth, temporary document code is used for communication purposes throughout a project: Undetermined. Issue tagging is usually a waste of time and should be avoided. Instead, you should rely on search to find documents to support various points. There are typically only a dozen or so documents of importance at trial anyway, no matter what the original corpus size. The five codes map naturally onto a simple schema, as sketched below.
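Here is a minimal sketch of that coding schema in Python. The handling notes in the comments restate the paragraph above; the names and the helper function are my own hypothetical illustration, not any particular review platform’s data model:

```python
from enum import Enum

class Code(Enum):
    # Four final codes plus one temporary code, per Step Five above.
    RESPONSIVE = "produce"                   # relevant, goes out
    IRRELEVANT = "withhold"                  # not responsive
    PRIVILEGED = "withhold and log"          # relevant but privileged
    CONFIDENTIAL = "legend, redact, or log"  # treatment varies by level
    UNDETERMINED = "escalate"                # temporary; must be resolved

def ready_for_production(codes: list[Code]) -> bool:
    # A batch is not final while any document remains undetermined.
    return Code.UNDETERMINED not in codes

print(ready_for_production([Code.RESPONSIVE, Code.PRIVILEGED]))    # True
print(ready_for_production([Code.RESPONSIVE, Code.UNDETERMINED]))  # False
```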


I highly recommend use of professional document review attorneys to assist you in this step. The so-called “contract lawyers” specialize in electronic document review and do so at a very low cost, typically in the neighborhood of $50 per hour. The best of them, who may often command slightly higher rates, are speed readers with high comprehension. They also know what to look for in different kinds of cases. Some have impressive backgrounds. Of course, good management of these resources is required. They should have their own management and team leaders. The outside attorneys signing under Rule 26(g) will also need to supervise them carefully, especially as to relevance intricacies. The day will come when a court will find it unreasonable not to employ these attorneys in a document review. The savings are dramatic, and this in turn increases the persuasiveness of your cost burden argument.

Step Six: Production

The last step is transfer of the appropriate information to the requesting party and designated members of your team. Production is typically followed by later delivery of a log of all documents withheld, even though responsive or relevant. The withheld, logged documents are typically: Attorney-Client Communications, protected from disclosure under the client’s privilege; or Attorney Work-Product documents, protected from disclosure under the attorney’s privilege. Two different privileges. The attorney’s work-product privilege is frequently waived in some part, although often a very small one. The client’s communications with its attorneys are, however, protected by an inviolate privilege that is never waived.

Typically you should produce in stages and not wait until project completion. The only exception might be where the requesting party would rather wait and receive one big production instead of a series of small productions. That is very rare. So plan on multiple productions. We suggest the first production be small and serve as a test of the receiving party’s abilities and otherwise get the bugs out of the system.

Conclusion

In this essay I have shown the method I use in document reviews to control costs by use of estimation and multimodal search. I call this a Bottom Line Driven approach. The six-step process is designed to help uncover the costs of review as part of the review itself. This kind of experience-based estimate is an ideal way to meet the evidentiary burdens of a proportionality objection under revised Rules 26(b)(1) and 26(b)(2). It provides the hard facts needed to be specific as to what you will review, what you will not, and the likely costs involved.

The six-step approach described here uses the costs incurred at the front end of the project to predict the total expense. The costs are controlled by use of best practices, such as contract review lawyers, but primarily by limiting the number of documents reviewed. Although it is somewhat easier to follow this approach using predictive coding and document ranking, it can still be done without that search feature. You can try this approach using any review software. It works well in small or medium-sized projects with fairly simple issues. For large, complex projects we still recommend using the eight-step predictive coding approach as taught at TARcourse.com.

