Cautionary Tale from Brooklyn: Search Terms ‘Designed To Fail’

October 20, 2019

Every lawyer who thinks e-discovery is not important, that you can just delegate it to a vendor, should read Abbott Laboratories, et al. v. Adelphia Supply USA, et al., No. 15 CV 5826 (CBA) (LB) (E.D.N.Y. May 2, 2019). This opinion in a trademark case in Brooklyn District Court emphasizes, once again, that e-discovery can be outcome-determinative. If you mess it up, you can doom your case. A lawyer who wants to litigate today must either spend the substantial time it takes to learn the many intricacies of e-discovery, or associate with a specialist who has. The Abbott Labs case shows how easily a lawsuit can be won or lost on e-discovery alone. Here the numbers did not add up, key custodians were omitted, and guessed-at keywords were used, keywords so bad that opposing counsel called them designed to fail. The defendants reacted by firing their lawyers and blaming everything on them, but the court did not buy it. Instead, discovery fraud was found and judgment was entered for the plaintiff.

Magistrate Judge Lois Bloom begins the Opinion by noting that the plaintiff’s motion for case ending sanctions “… presents a cautionary tale about how not to conduct discovery in federal court.” The issues started when defendant made its first electronic document production. The Electronically Stored Information was all produced on paper, as Judge Bloom explained, “in hard copy, scanning them all together, and producing them as a single, 1941-page PDF file.” Opinion pg. 3. This is not what the plaintiff Abbott Labs wanted. After Abbott sought relief from the court, the defendants on March 24, 2017 were ordered to “produce an electronic copy of the 2014 emails (1,941 pages)” including metadata. Defendant then “electronically produced 4,074 pages of responsive documents on April 5, 2017.” Note how the page count went from 1,941 to 4,074. There was no explanation of this page count discrepancy, the first of many, but the evidence helped Abbott justify a new product counterfeiting action (Abbott II) where the court ordered a seizure of defendant’s email server. That’s where the fun started. As Judge Bloom put it:

Once plaintiffs had seized H&H’s email server, plaintiffs had the proverbial smoking gun and raised its concerns anew that defendants had failed to comply with the Court’s Order to produce responsive documents in the instant action (hereinafter “Abbott I”). On July 12, 2017, the Court ordered the H&H defendants to “re-run the document search outlined in the Court’s January 17 and January 21 Orders,” “produce the documents from the re-run search to Abbott,” and to produce “an affidavit of someone with personal knowledge” regarding alleged technical errors that affected the production. Pursuant to the Court’s July 12, 2017 Order to re-run the search, the H&H defendants produced 3,569 responsive documents.

Opinion pg. 4 (citations to record omitted).

Too Late For Vendor Help and a Search Strategy Designed to Fail

After the seizure order in Abbott II, and after Abbott Labs again raised issues regarding defendants’ original production, Judge Bloom ordered the defendants to re-run the original search. Defendants then retained an outside vendor, TransPerfect, to re-run the original search for them. In supposed compliance with that order, the defendants, aka H&H, then produced 3,569 documents. Id. at 8. Defendants also filed an affidavit by Joseph Pochron, a Director in the Forensic Technology and Consulting Division at TransPerfect (“Pochron Decl.”), to try to help their case. It did not work. According to Judge Bloom the Pochron Decl. states:

… that H&H utilized an email archiving system called Barracuda and that there are two types of Barracuda accounts, Administrator and Auditor. Pochron Decl. ¶ 13. Pochron’s declaration states that the H&H employee who ran the original search, Andrew Sweet, H&H’s general manager, used the Auditor account to run the original search (“Sweet search”). Id. at ¶ 19. When Mr. Pochron replicated the Sweet search using the Auditor account, he obtained 1,540 responsive emails. Id. at ¶ 22. When Mr. Pochron replicated the Sweet search using the Administrator account, he obtained 1,737 responsive emails. Id. Thus, Mr. Pochron attests that 197 messages were not viewable to Mr. Sweet when the original production was made. Id. Plaintiffs state that they have excluded those 197 messages, deemed technical errors, from their instant motion for sanctions. Plaintiffs’ Memorandum of Law at 9; Waters Decl. ¶ 8. However, even when those 197 messages are excluded, defendants’ numbers do not add up. In fact, H&H has repeatedly given plaintiffs and the Court different numbers that do not add up.

Moreover, plaintiffs argue that the H&H defendants purposely used search terms designed to fail, such as “International” and “FreeStyle,” whereas H&H’s internal systems used item numbers and other abbreviations such as “INT” and “INTE” for International and “FRL” and “FSL” for FreeStyle. Plaintiff’s Memorandum of Law at 10–11. Plaintiffs posit that defendants purposely designed and ran the “extremely limited search” which they knew would fail to capture responsive documents …

Opinion pgs. 8-9 (emphasis by bold added). “Search terms designed to fail.” This is the first time I have ever seen such a phrase in a judicial opinion. Is purposefully stupid keyword search yet another bad faith litigation tactic by unscrupulous attorneys and litigants? Or is this just another example of dangerous incompetence? Judge Bloom was not buying the ‘big oops’ theory, especially considering the ever-changing numbers of relevant documents found. It looked to her, and to me too, like this search strategy was intentionally designed to fail, that it was all a shell game.

This is a wake-up call for all litigators, especially those who do not specialize in e-discovery. Your search strategy had better make sense. Search terms must be designed (and tested) to succeed, not fail! Courts may well see anything less as more than mere incompetence.
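
Here is a minimal sketch, in Python, of the kind of quick keyword test that would have exposed the problem in minutes. Everything in it is hypothetical, but the abbreviations echo the ones at issue in the opinion: the spelled-out term returns nothing, while the abbreviations the client actually used return the responsive documents.

```python
import re

# All snippets below are hypothetical; the abbreviations echo the ones at
# issue in the opinion ("FSL"/"FRL" for FreeStyle, "INT"/"INTE" for International).
sample_docs = [
    "PO 4411: 200 boxes FSL test strips, INTE shipment",
    "Invoice FRL-2291, INT resale to distributor",
    "Re: FreeStyle order, domestic line only",
]

term_groups = {
    "FreeStyle": ["freestyle", "fsl", "frl"],
    "International": ["international", "int", "inte"],
}

for concept, variants in term_groups.items():
    for term in variants:
        # Whole-word match, case-insensitive, so "INT" does not match "INTE".
        hits = sum(bool(re.search(rf"\b{term}\b", doc, re.I)) for doc in sample_docs)
        print(f"{concept:>13} / {term:<13} hits: {hits}")
```

A few minutes with a test like this, run against a sample of the client’s own data, beats any amount of guessing in a conference room.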

The Thin Line Between Gross Negligence and Bad Faith

The e-discovery searches you run are important. The “mistakes” made here led to a default judgment. That is the way it is in federal court today. If you think otherwise, that e-discovery is not that important, that you can just hire a vendor and throw stupid keywords at it, then your head is dangerously stuck in the sand. Look around. There are many cases like Abbott Laboratories, et al. v. Adelphia Supply USA.

I put “mistakes” in quotes because it was obvious to Judge Bloom that these were not mistakes at all; this was fraud on the court.

E-Discovery is about evidence. About truth. You cannot play games. Either take it seriously and do it right, do it ethically, do it competently; or go home and get out. Retire already. Discovery gamesmanship and lawyer bumbling are no longer tolerated in federal court. The legal profession has no room for dinosaurs like that.

Abbott Labs responded the way they should, the way you should always expect in a situation like this:

Plaintiffs move for case ending sanctions under Federal Rule of Civil Procedure 37 and invoke the Court’s inherent power to hold defendants in default for perpetrating a fraud upon the Court. Plaintiffs move to strike the H&H defendants’ pleadings, to enter a default judgment against them, and for an order directing defendants to pay plaintiffs’ attorney’s fees and costs, for investigating and litigating defendants’ discovery fraud.

Id.

Rule 37(e) was revised in 2015 to make clear that gross negligence alone does not justify a case-ending sanction; you must prove bad faith. This change should not provide the incompetent with much comfort. As this case shows, the difference between mistake and intent can be a very thin line. Do your numbers add up? Can you explain what you did and why you did it? Did you use good search terms? Did you search all of the key custodians? Or did you just take the ESI the client handed to you and say thank you very much? Did you turn a blind eye? Even if bad faith under Rule 37 is not proven, the court may still find the whole process stinks of fraud and use its inherent powers to sanction misconduct.

As Judge Bloom went on to explain:

Under Rule 37, plaintiffs’ request for sanctions would be limited to my January 17, 2017 and January 27, 2017 Orders which directed defendants to produce documents as set forth therein. While sanctions under Rule 37 would be proper under these circumstances, defendants’ misconduct herein is more egregious and goes well beyond defendants’ failure to comply with the Court’s January 2017 discovery orders. . . .  Rather than viewing the H&H defendants’ failure to comply with the Court’s January 2017 Orders in isolation, plaintiffs’ motion is more properly considered in the context of the Court’s broader inherent power, because such power “extends to a full range of litigation abuses,” most importantly, to fraud upon the court.

Opinion pg. 5.

Judge Bloom went on to explain further the “fraud on the court” finding and defendants’ e-discovery conduct.

A fraud upon the court occurs where it is established by clear and convincing evidence “that a party has set in motion some unconscionable scheme calculated to interfere with the judicial system’s ability impartially to adjudicate a matter by . . . unfairly hampering the presentation of the opposing party’s claim or defense.” New York Credit & Fin. Mgmt. Grp. v. Parson Ctr. Pharmacy, Inc., 432 Fed. Appx. 25 (2d Cir. 2011) (summary order) (quoting Scholastic, Inc. v. Stouffer, 221 F. Supp. 2d 425, 439 (S.D.N.Y. 2002))

Opinion pgs. 5-6 (subsequent string cites omitted).

Kill All The Lawyers

The defendants here tried to defend themselves by firing and blaming their lawyers. That kind of Shakespearean sentiment is what you should expect when you represent people like that. They will turn on you. They will use you for their nefarious ends, then lose you. Kill you if they could.

Judge Bloom, who was herself a lawyer before becoming a judge, explained the blame-game defendants tried to pull in her court.

Regarding plaintiffs’ assertion that defendants designed and used search terms to fail, defendants proffer that their former counsel, Mr. Yert, formulated and directed the use of the search terms. Id. at 15. The H&H defendants state that “any problems with the search terms was the result of H&H’s good faith reliance on counsel who . . . decided to use parameters that were less robust than those later used[.]” Id. at 18. The H&H defendants further state that the Sweet search results were limited because of Mr. Yert’s incompetence. Id.

Opinion pg. 9.

Specifically, defendants alleged:

… the original search parameters were determined by Mr. Yert and that he “relied on Mr. Yert’s expertise as counsel to direct the parameters and methods for a proper search that would fulfill the Court’s Order.” Sweet Decl. ¶ 3–4.  As will be discussed below, the crux of defendants’ arguments throughout their opposition to the instant motion seeks to lay blame on Mr. Yert for their actions; however, defendants cannot absolve themselves of liability here by shifting blame to their former counsel.

Opinion pg. 11.

Here is how Judge Bloom responded to this “blame the lawyers” defense:

Defendants’ attempt to lay blame on former counsel regarding the design and use of search terms is equally unavailing. It is undisputed that numerous responsive documents were not produced by the H&H defendants that should have been produced. Defendants’ prior counsel conceded as much. See generally plaintiffs’ Ex. B, Tr. Of July 11, 2017 telephone conference.

Mr. Yert was asked at his deposition about the terms that H&H used to identify their products and he testified as follows:

Q. Tell me about the general discussions you had with the client in terms of what informed you what search terms you should be using.

A. Those were the terms consistently used by H&H to identify the particular product.

Q. So the client told you that FreeStyle and International are the terms they consistently used to refer to International FreeStyle test strips; is that correct?

A. That’s what I recall.

Q. Did the client tell you that they used the abbreviation FSL to refer to FreeStyle?

A. I don’t recall.

Q. If they had told you that, you would have included that as a search term, correct?

A. I don’t recall if it was or was not included as a search term, sir.

Opinion pgs. 10-11.

The next time you are asked to dream up keywords to find your client’s relevant evidence, remember this case, remember this deposition. Do not simply use keywords that the client suggests, as the attorneys did here. Indeed, do not rely on keywords alone. As I have written here many, many times before, there is a lot more to electronic evidence search and review than keywords. This is the Twenty-First Century. You should be using AI, specifically active machine learning, aka Predictive Coding.

You need an expert to help you and you need them at the start of a case, not after sanctions motions.

Judge Lois Bloom went on to explain that, even if defendants’ story of innocent reliance on their lawyers were true:

It has long been held that a client-principal is “bound by the acts of his lawyer agent.” Id. (quoting Link v. Wabash RR. Co., 370 U.S. 626, 634 (1962)). As the Second Circuit stated, “even innocent clients may not benefit from the fraud of their attorney.” Id. . . .

However, notwithstanding defendants’ assertion that the search terms “FreeStyle” and “International” were used in lieu of more comprehensive search terms at the behest of Mr. Yert, it is undisputed that Mr. Sweet, H&H’s general manager, knew that H&H used abbreviations for these terms. Mr. Sweet admitted this at his deposition. See Sweet Dep. 81:2-81:24, Mar. 13, 2018. . . . The Court need not speculate as to why defendants did not use these search terms to comply with defendants’ obligation to produce pursuant to the Court’s Order. Mr. Sweet, by his own admission, states that “on several occasions he contacted Mr. Yert with specific questions about whether to include certain emails in production.” Sweet Decl. ¶ 7. It is inconceivable that H&H’s General Manager, who worked closely with Mr. Yert to respond to the Court’s Order, never mentioned that spelling out the terms used, “International” and “FreeStyle”, would not capture the documents in H&H’s email system. Mr. Sweet knew that H&H was required to produce documents regarding International FreeStyle test strips, regardless of whether H&H’s documents spelled out or abbreviated the terms. Had plaintiffs not seized H&H’s email server in the counterfeiting action, plaintiffs would have never known that defendants failed to produce a trove of responsive documents. H&H would have gotten away with it.

Opinion pgs. 12-13.

Defendants also failed to produce any documents from three custodians: Holland Trading, Howard Goldman, and Lori Goldman. Again, they tried to blame that omission on their attorney, who they claim directed the search. Oh yeah, for sure. To me he looks like a mere stooge, a tool of unscrupulous litigants. Judge Bloom did not accept that defense either, holding:

While defendants’ effort to shift blame to Mr. Yert is unconvincing at best, even if defendants’ effort could be credited, counsel’s actions, even if they were found to be negligent, would not shield the H&H defendants from responsibility for their bad faith conduct.

Opinion pgs. 19-20. Then Judge Bloom went on to cite the record at length, including the depositions and affidavits of the attorneys involved, to expose this blame game as a sham. The order then concludes on this point holding:

There is no credible explanation for why the Holland Trading, Howard Goldman, and Lori Goldman documents were not produced except that the documents were willfully withheld. Defendants’ explanation that there were no documents withheld, then that any documents that weren’t produced were due to technical glitches, then that the documents didn’t appear in Mr. Sweet’s original search, then that if documents were intentionally removed, they were removed per Mr. Yert’s instructions cannot all be true. The H&H defendants have always had one more excuse up their sleeve in this “series of episodes of nonfeasance,” which amounts to “deliberate tactical intransigence.” Cine, 602 F.2d at 1067. In light of the H&H defendants’ ever-changing explanations as to the withheld documents, Mr. Sweet’s inconsistent testimony, and assertions of former counsel, the Court finds that the H&H defendants have calculatedly attempted to manipulate the judicial process. See Penthouse, 663 F.2d 376–390 (affirming entry of default where plaintiffs disobeyed an “order to produce in full all of [their] financial statements,” engaged in “prolonged and vexatious obstruction of discovery with respect to closely related and highly relevant records,” and gave “false testimony and representations that [financial records] did not exist.”).

Opinion pgs. 22-23.

The plaintiff, Abbott Labs, went on to argue that “the withheld documents freed David Gulas to commit perjury at his deposition. The Court agrees.” Id. at 24. The truth has a way of coming out, especially with competent counsel on the other side and a good judge.

With this evidence the Court concluded the only adequate sanction was a default judgment in plaintiff’s favor. Message to spoliating defendants: game over, you lose.

Based on the full record of the case, there is clear and convincing evidence that defendants have perpetrated a fraud upon the court. Defendants’ initial conduct of formulating search terms designed to fail in deliberate disregard of the lawful orders of the Court allowed H&H to purposely withhold responsive documents, including the Holland Trading, Howard Goldman, and Lori Goldman documents. Defendants proffered inconsistent positions with three successive counsel as to why the documents were withheld. Mr. Sweet’s testimony is clearly inconsistent if not perjured from his deposition to his declaration in opposition to the instant motion. Mr. Goldman’s deposition testimony is evasive and self-serving at best. Finally, Mr. Gulas’ deposition testimony is clearly perjured. Had plaintiffs never seized H&H’s server pursuant to the Court’s Order in the counterfeiting case, H&H would have gotten away with their fraud upon this Court. H&H only complied with the Court’s orders and their discovery obligations when their backs were against the wall. Their email server had been seized. There was no longer an escape from responsibility for their bad faith conduct. This is, again, similar to Cerruti, where the “defendants did not withdraw the [false] documents on their own. Rather, they waited until the falsity of the documents had been detected.” Cerruti, 169 F.R.D. at 583. But for being caught in a web of irrefutable evidence, H&H would have profited from their misconduct. . . .

The Court finds that the H&H defendants have committed a fraud upon the court, and that the harshest sanction is warranted. Therefore, plaintiffs’ motion for sanctions should be granted and a default judgment should be entered against H&H Wholesale Services, Inc., Howard Goldman, and Lori Goldman.

Conclusion

Attorneys of record, not the client, sign responses to requests for production under Rule 26(g). That is because the rules require them to control the discovery efforts of their clients. That means the attorney’s neck is on the line. Rule 26(g) does not allow you to just take a client’s word for it. Verify. Supervise. The numbers should add up. The search terms, if used, should be designed and tested to succeed, not fail. This is your response, not the client’s. You determine the search method, in consultation with the client for sure, but not by “just following orders.” You must see everything, not nothing. If you see no email from key custodians, dig deeper and ask why. Do this at the beginning of the case. Get vendor help before you start discovery, not after you fail. Apparently the original defense attorneys here did just what they were asked; they went along with the client. Look where it got them: fired and deposed. Default judgment entered. Cautionary tale indeed.




Document Review and Proportionality – Part Two

March 28, 2018

This is a continuation of a blog post that I started last week. I suggest you read Part One before this one.

Simplified Six-Step Review Plan for Small and Medium-Sized Cases, or Otherwise Where Predictive Coding Is Not Used

Here is the workflow for the simplified six-step plan. The first three steps repeat until you have a viable plan whose cost estimate is proportional under Rule 26(b)(1).

Step One: Multimodal Search

The document review begins with Multimodal Search of the ESI. Multimodal means that all modes of search are used to try to find relevant documents. Multimodal search uses a variety of techniques in an evolving, iterated process. It is never limited to a single search technique, such as keyword. All methods are used as deemed appropriate based upon the data to be reviewed and the software tools available. The basic types of search are shown in the search pyramid.

In Step One we use a multimodal approach, but we typically begin with keyword and concept searches. Also, in most projects we will run similarity searches of all kinds to make the review more complete and broaden the reach of the keyword and concept searches. Sometimes we may even use linear search, the expert manual review at the base of the search pyramid. For instance, it might be helpful to see all communications that a key witness had on a certain day. The two-word, stand-alone “call me” email, when seen in context, can sometimes be invaluable to proving your case.

I do not want to go into too much detail about the types of searches we do in this first step because each vendor’s document review software has different types of searches built in. Still, the basic types of search shown in the pyramid can be found in most software, although AI, the active machine learning at the top, is still only found in the best.
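
To illustrate the idea, here is a toy multimodal sketch in Python that combines two of the cheaper modes, literal keyword search plus spelling-variant expansion. It is a sketch of the concept only, not any vendor’s API; all documents and queries are hypothetical.

```python
import difflib
import re

docs = {
    1: "Board discussed the merger timetable and diligence items.",
    2: "Merger diligence: retention schedules and email archives.",
    3: "Lunch schedule for the retreat.",
}

def keyword_search(term, corpus):
    """Mode 1: literal keyword hits, whole words only, case-insensitive."""
    return {i for i, text in corpus.items() if re.search(rf"\b{term}\b", text, re.I)}

def similarity_expand(term, corpus, cutoff=0.8):
    """Mode 2: expand the query with close spelling variants found in the corpus."""
    vocab = {w.lower() for text in corpus.values() for w in re.findall(r"\w+", text)}
    return difflib.get_close_matches(term.lower(), vocab, n=5, cutoff=cutoff)

hits = keyword_search("merger", docs)
for variant in similarity_expand("diligance", docs):  # note the typo in the query
    hits |= keyword_search(variant, docs)
print(sorted(hits))  # documents found by combining the two modes
```

The point is structural: each mode catches documents the other misses, and the modes feed each other.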

History of Multimodal Search

Professor Marcia Bates

Multimodal search, wherein a variety of techniques are used in an evolving, iterated process, is new to the legal profession, but not to Information Science. That is the field of scientific study which is, among many other things, concerned with computer search of large volumes of data. Although the e-Discovery Team’s promotion of multimodal search techniques to find evidence only goes back about ten years, multimodal is a well-established search technique in Information Science. The pioneering professor who first popularized this search method was Marcia J. Bates, in her article The Design of Browsing and Berrypicking Techniques for the Online Search Interface, 13 Online Info. Rev. 407, 409–11, 414, 418, 421–22 (1989). Professor Bates of UCLA did not use the term multimodal; that is my own small innovation. Instead she coined the word “berrypicking” to describe the use of all types of search to find relevant texts. I prefer the term “multimodal” to “berrypicking,” but they are basically the same techniques.

In 2011 Marcia Bates explained in Quora her classic 1989 article and work on berrypicking:

An important thing we learned early on is that successful searching requires what I called “berrypicking.” . . .

Berrypicking involves 1) searching many different places/sources, 2) using different search techniques in different places, and 3) changing your search goal as you go along and learn things along the way. . . .

This may seem fairly obvious when stated this way, but, in fact, many searchers erroneously think they will find everything they want in just one place, and second, many information systems have been designed to permit only one kind of searching, and inhibit the searcher from using the more effective berrypicking technique.

Marcia J. Bates, Online Search and Berrypicking, Quora (Dec. 21, 2011). Professor Bates also introduced the related concept of an evolving search. In 1989 this was a radical idea in information science because it departed from the established orthodox assumption that an information need (relevance) remains the same, unchanged, throughout a search, no matter what the user might learn from the documents in the preliminary retrieved set. See The Design of Browsing and Berrypicking Techniques for the Online Search Interface. Professor Bates dismissed this assumption and wrote in her 1989 article:

In real-life searches in manual sources, end users may begin with just one feature of a broader topic, or just one relevant reference, and move through a variety of sources.  Each new piece of information they encounter gives them new ideas and directions to follow and, consequently, a new conception of the query.  At each stage they are not just modifying the search terms used in order to get a better match for a single query.  Rather the query itself (as well as the search terms used) is continually shifting, in part or whole.   This type of search is here called an evolving search.

Furthermore, at each stage, with each different conception of the query, the user may identify useful information and references. In other words, the query is satisfied not by a single final retrieved set, but by a series of selections of individual references and bits of information at each stage of the ever-modifying search. A bit-at-a-time retrieval of this sort is here called berrypicking. This term is used by analogy to picking huckleberries or blueberries in the forest. The berries are scattered on the bushes; they do not come in bunches. One must pick them one at a time. One could do berrypicking of information without  the search need itself changing (evolving), but in this article the attention is given to searches that combine both of these features.

I independently noticed evolving search as a routine phenomenon in legal search and only recently found Professor Bates’ prior descriptions. I have written about this often in the field of legal search (although never previously crediting Professor Bates) under the names “concept drift” and “evolving relevance.” See, e.g., Concept Drift and Consistency: Two Keys To Document Review Quality – Part Two (e-Discovery Team, 1/24/16). Also see Voorhees, Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, 36 Info. Processing & Mgmt 697 (2000) at page 714.

SIDE NOTE: The somewhat related term query drift in information science refers to a different phenomenon in machine learning. In query drift the concept of document relevance unintentionally changes through the use of indiscriminate pseudo-relevance feedback. Büttcher, Clarke & Cormack, Information Retrieval: Implementing and Evaluating Search Engines (MIT Press 2010) at pg. 277. This can lead to severe negative relevance feedback loops where the AI is trained incorrectly. Not good. If that happens, a lot of other bad things can and usually do happen. It must be avoided.

That means that skilled humans must still play a key role in all aspects of the delivery and production of goods and services, lawyers included.

UCLA Professor Bates first wrote about concept shift when using early computer-assisted search in the late 1980s. She found that users might execute a query, skim some of the resulting documents, and then learn things that slightly change their information need. They then refine their query, not only to better express their information need, but also because the information need itself has now changed. This was a new concept at the time because under the Classical Model of Information Retrieval an information need is single and unchanging. Professor Bates illustrated the old Classical Model with the following diagram.

The Classical Model was misguided. Every search project, including the legal search for evidence, is an evolving process in which the understanding of the information need progresses and improves as the information is reviewed. See the diagram below for the multimodal, berrypicking-type approach. Note the importance of human thinking to this approach.

See Cognitive models of information retrieval (Wikipedia). As this Wikipedia article explains:

Bates argues that searches are evolving and occur bit by bit. That is to say, a person constantly changes his or her search terms in response to the results returned from the information retrieval system. Thus, a simple linear model does not capture the nature of information retrieval because the very act of searching causes feedback which causes the user to modify his or her cognitive model of the information being searched for.

Multimodal search assumes that the information need evolves over the course of a document review. It never means just running one search and then reviewing all of the documents found by that search. That linear approach was used in version 1.0 of predictive coding, and is still used by most lawyers today. The dominant model in law today is linear, wherein a negotiated list of keywords is used to run a single search. I called this failed method “Go Fish” and a few judges, like Judge Peck, picked up on that name. Losey, R., Adventures in Electronic Discovery (West 2011); Child’s Game of ‘Go Fish’ is a Poor Model for e-Discovery Search; Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182, 190-91, 2012 WL 607412, at *10 (S.D.N.Y. Feb. 24, 2012) (J. Peck).

The popular but ineffective Go Fish approach is like the Classical Information Retrieval Model in that only a single list of keywords is used as the query. The keywords are not refined over time as the documents are reviewed. This is a mono-modal process. It stands in sharp contrast to our evolving multimodal process, Step One in our six-step plan. In the first step we run many, many searches, review some of the results of each search, some of the documents, and then change the searches accordingly.
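
The following toy sketch, with all data hypothetical, shows the structural difference: the query set itself grows as review of earlier hits teaches new terms, which single-pass Go Fish can never do.

```python
# Toy sketch of an evolving, berrypicking-style loop (all data hypothetical):
# run a query, sample the hits, and let what you learn add new queries.
corpus = {
    1: "wire transfer to holding co, see attached schedule",
    2: "call me about the transfer",
    3: "holiday schedule",
    4: "second wire went out friday, per the side letter",
}

queries = ["wire transfer"]          # the starting conception of the need
seen, responsive = set(), set()
learned = {"wire transfer": ["side letter"],  # reviewing hits teaches new terms
           "side letter": []}

while queries:
    q = queries.pop()
    hits = {i for i, text in corpus.items() if q in text} - seen
    seen |= hits
    responsive |= hits               # in real life: human sampling decides
    queries.extend(learned.get(q, []))

print(sorted(responsive))  # finds doc 4, which the starting query alone missed
```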

Step Two: Tests, Sample

Each search run is sampled by quick reviews and its effectiveness evaluated and tested. For instance, did a search on what you expected would be an unusual word turn up far more hits than anticipated? Did the keyword show up in all kinds of documents that had nothing to do with the case? For example, a couple of minutes of review might show that what you thought would be a carefully and rarely used word, Privileged, was in fact part of the standard signature line of one custodian. All of his emails had the keyword Privileged on them. The keyword in these circumstances may be a surprise failure, at least as to that one custodian. These kinds of unexpected language usages and surprise failures are commonplace, especially for neophyte lawyers.

Sampling here does not mean random sampling, but rather judgmental sampling, just picking a few representative hit documents and reviewing them. Were a fair number of berries found in that new search bush, or not? In our example, assume that your sample review of the documents with “Privileged” showed that the word was only part of one person’s standard signature on every one of their emails. When a new search is run wherein this custodian is excluded, the search results may now test favorably. You may devise other searches that exclude or limit the keyword “Privileged” whenever it is found in a signature.

There are many computer search tools used in a multimodal search method, but the most important tool of all is not algorithmic, but human. The most important search tool is the human ability to think the whole time you are looking for tasty berries. (The all-important “T” in Professor Bates’ diagram above.) This means the ability to improvise, to spontaneously respond and react to unexpected circumstances. This means ad hoc searches that change with time and experience. It is not a linear, set-it-and-forget-it, keyword cull-in and read-all-documents approach. This was true in the early days of automated search with Professor Bates’ berrypicking work in the late 1980s, and it is still true today. Indeed, since the complexity of ESI has expanded a million times since then, our thinking, improvisation and teamwork are now more important than ever.

The goal in Step Two is to identify effective searches. Typically, that means searches where most of the results are relevant, greater than 50%. Ideally we would like to see roughly 80% relevancy. Alternatively, searches that return very few hits, and are thus inexpensive to review in full, may be accepted. For instance, you may try a search that returns only ten documents, which you could review in just a minute. You may find only one relevant, but it could be important. The acceptable number of documents to review in a Bottom Line Driven Review will always take cost into consideration. That is where Step Three comes in: Estimation. What will it cost to review the documents found?
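
Here is a minimal sketch of that accept-or-reject test, using the thresholds suggested above; all the numbers are hypothetical:

```python
# Hypothetical numbers: judge a search by sampling a handful of its hits.
def search_looks_effective(sampled, relevant_in_sample, total_hits,
                           target=0.50, small_enough=10):
    """Accept a search if its sampled precision clears the target,
    or if the hit count is so small that reviewing everything is cheap."""
    if total_hits <= small_enough:
        return True
    precision = relevant_in_sample / sampled
    return precision >= target

# A search returning 2,400 hits, where 8 of 10 sampled documents were relevant:
print(search_looks_effective(sampled=10, relevant_in_sample=8, total_hits=2400))  # True (80%)
# A search returning 5,000 hits with only 2 of 10 sampled relevant:
print(search_looks_effective(sampled=10, relevant_in_sample=2, total_hits=5000))  # False (20%)
```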

Step Three: Estimates

It is not enough to come up with effective searches, which is the goal of Steps One and Two; the cost to review all of the documents returned by those searches must also be considered. It may still cost far too much to review the documents when considering the proportionality factors under Rule 26(b)(1), as discussed in Part One of this article. The plan of review must always take the cost of review into consideration.

In Part One we described an estimation method that I like to use to calculate the cost of an ESI review. When the projected cost, the estimate, is proportional in your judgment (and, where appropriate, in the judge’s judgment), then you conclude your iterative process of refining searches. You can then move on to Step Four: preparing your discovery plan and making disclosures about that plan.
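
The arithmetic behind such an estimate is simple, as this sketch shows. The review speed and hourly rate are placeholders to be replaced with the figures you actually observe, such as the roughly $50 per hour contract reviewer rate discussed below in Step Five:

```python
# Back-of-the-envelope review cost estimate (all rates hypothetical;
# use the speeds and rates you actually observe in the first phase).
def estimated_review_cost(docs_to_review, docs_per_hour=50.0, rate_per_hour=50.0):
    hours = docs_to_review / docs_per_hour
    return hours * rate_per_hour

for n in (5_000, 12_472, 100_000):
    print(f"{n:>7,} docs -> ${estimated_review_cost(n):>10,.2f}")
```

The real work is in keeping the inputs honest: measured review speed from the project itself, not vendor brochure numbers.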

Step Four: Plan, Disclosures

Once you have created effective searches that produce an affordable number of documents to review for production, you articulate the plan and make some disclosures about it. The extent of transparency in this step can vary considerably, depending on the circumstances and people involved. Long talkers like me can go on about legal search for many hours, far past the boredom tolerance of most non-specialists. You might be fascinated by the various searches I ran to come up with, say, 12,472 documents for final review, but most opposing counsel do not care beyond making sure that certain pet keywords they like were used and tested. You should be prepared to reveal that kind of work-product for purposes of dispute avoidance and to build good will. Typically they want you to review more documents, no matter what you say. They usually save their arguments for the bottom line, the costs. They usually argue for greater expense based on the first five factors of Rule 26(b)(1):

  1. the importance of the issues at stake in this action;
  2. the amount in controversy;
  3. the parties’ relative access to relevant information;
  4. the parties’ resources;
  5. the importance of the discovery in resolving the issues; and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.

Still, although early agreement on the scope of review is often impossible, as the requesting party always wants you to spend more, you can usually move past this initial disagreement by agreeing to phased discovery. The requesting party can reserve its objections to your plan, but still agree it is adequate for phase one. Usually we find that after the phase one production is completed, the requesting party’s demands for more are either eliminated or considerably tempered. It may well then be possible to reach a reasonable final agreement.

Step Five: Final Review

Here is where you start to carry out your discovery plan. In this stage you finish looking at the documents and coding them for Responsive (relevant), Irrelevant (not responsive), Privileged (relevant but privileged, and so logged and withheld) and Confidential (all levels, from mere notations and legends, to redactions, to withhold and log). A fifth, temporary document code is used for communication purposes throughout a project: Undetermined. Issue tagging is usually a waste of time and should be avoided. Instead, you should rely on search to find documents to support various points. There are typically only a dozen or so documents of importance at trial anyway, no matter what the original corpus size.
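
One way to represent that five-code scheme, sketched in Python; this is a sketch only, not any review platform’s actual schema:

```python
# A minimal representation of the coding scheme described above.
from enum import Enum

class Code(Enum):
    RESPONSIVE = "relevant, produce"
    IRRELEVANT = "not responsive, do not produce"
    PRIVILEGED = "relevant but privileged: withhold and log"
    CONFIDENTIAL = "produce with legend or redactions, or withhold and log"
    UNDETERMINED = "temporary code, flag for a second look"

# Hypothetical review queue; the Undetermined code drives team communication.
review_queue = {"DOC-0001": Code.UNDETERMINED, "DOC-0002": Code.RESPONSIVE}
print([d for d, c in review_queue.items() if c is Code.UNDETERMINED])
```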


I highly recommend use of professional document review attorneys to assist you in this step. The so-called “contract lawyers” specialize in electronic document review and do so at a very low cost, typically in the neighborhood of $50 per hour. The best of them, who may command slightly higher rates, are speed readers with high comprehension. They also know what to look for in different kinds of cases. Some have impressive backgrounds. Of course, good management of these resources is required. They should have their own management and team leaders. The outside attorneys signing under Rule 26(g) will also need to supervise them carefully, especially as to relevance intricacies. The day will come when a court will find it unreasonable not to employ these attorneys in a document review. The savings are dramatic, and this in turn increases the persuasiveness of your cost burden argument.

Step Six: Production

The last step is the transfer of the appropriate information to the requesting party and designated members of your team. Production is typically followed by later delivery of a log of all documents withheld, even though responsive or relevant. The withheld, logged documents are typically: Attorney-Client Communications, protected from disclosure under the client’s privilege; or Attorney Work-Product documents, protected from disclosure under the attorney’s privilege. These are two different privileges. The attorney’s work-product privilege is frequently waived in some part, although often a very small one. The client’s communications with its attorneys are, however, protected by an inviolate privilege that is never waived.

Typically you should produce in stages and not wait until project completion. The only exception might be where the requesting party would rather wait and receive one big production instead of a series of small productions. That is very rare. So plan on multiple productions. We suggest the first production be small and serve as a test of the receiving party’s abilities and otherwise get the bugs out of the system.
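
Closing Step Six with a minimal sketch of the withheld-documents log mentioned above; the fields and entries are hypothetical, but the idea is that every withheld-though-responsive document gets logged with its basis:

```python
import csv
import io

# Minimal privilege-log sketch (fields and entries hypothetical): withheld
# documents are logged even though they are responsive.
withheld = [
    {"doc_id": "DOC-0107", "date": "2017-03-02", "author": "GC Office",
     "basis": "Attorney-Client Communication"},
    {"doc_id": "DOC-0212", "date": "2017-04-11", "author": "Outside Counsel",
     "basis": "Attorney Work-Product"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["doc_id", "date", "author", "basis"])
writer.writeheader()
writer.writerows(withheld)
print(buf.getvalue())  # in practice, written to a file delivered with the log
```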

Conclusion

In this essay I have shown the method I use in document reviews to control costs by use of estimation and multimodal search. I call this the Bottom Line Driven approach. The six-step process is designed to uncover the costs of review as part of the review itself. This kind of experience-based estimate is an ideal way to meet the evidentiary burdens of a proportionality objection under revised Rules 26(b)(1) and 26(b)(2). It provides the hard facts needed to be specific about what you will review, what you will not, and the likely costs involved.

The six-step approach described here uses the costs incurred at the front end of the project to predict the total expense. The costs are controlled by use of best practices, such as contract review lawyers, but primarily by limiting the number of documents reviewed. Although it is somewhat easier to follow this approach using predictive coding and document ranking, it can still be done without that search feature. You can try this approach using any review software. It works well in small or medium-sized projects with fairly simple issues. For large, complex projects we still recommend the eight-step predictive coding approach taught at TarCourse.com.

