Editor's Note: This is a complete rewrite of earlier descriptions of Ralph Losey's bottom line driven review method.
I have been focusing on the problem of high e-discovery costs since 2006. At that time I phased out my general trial practice and devoted all of my professional time to electronic discovery law. I focused on the expenses associated with those legal services because it was obvious in 2006 that unpredictable, high e-discovery costs were a core problem in civil litigation. They still are. Uncertain costs keep many attorneys away from e-discovery, even though nearly all original records and documents are electronic, and have been for many years.
The most expensive part of e-discovery is the document search and review process. In most projects it constitutes between 60% and 80% of the total cost. See Where The Money Goes: Understanding Litigant Expenditures for Producing Electronic Discovery (corporate survey found an average of 73%). For that reason, although I practice in all areas of electronic discovery law, my efforts to control costs have focused on document review.
Two Different Review Tasks
The review costs in turn arise primarily from two legal activities:
- Search for and identification of the likely responsive or relevant documents. (For simplicity, this paper will refer only to relevance henceforth, not responsiveness, although the author recognizes there can be differences.) This is a binary decision: yes or no as to relevance. This type of review is often called first pass review.
- Study of the documents identified as likely relevant to determine which must be withheld, logged, redacted, and/or labeled to protect a client’s confidential information, such as privileged communications. The subsequent reviews can also include specific issue tagging work unrelated to confidentiality concerns. Documents not identified as relevant are not included in these further reviews, but may be subject to sampling for quality assurance purposes.
The second reviews for client confidentiality purposes can be very problematic, expensive, and risk-filled. For instance, in Tampa Bay Water v. HDR Engineering, Inc., a large construction case involving millions of documents reviewed for possible production, both sides inadvertently produced thousands of privileged documents to each other. They did so despite expenditures of tens of millions of dollars for traditional attorney review of each document before production.
My analysis and experiments with cost controls since 2006 have focused on both the cost of the initial relevancy review and the cost of the final protection reviews. Great progress has been made in the last several years in lowering the cost of first-pass relevancy reviews, especially when using artificial intelligence ("AI") enhanced software. Improvements have also been made in second-review costs, but they have not been nearly as dramatic. Although some advocate for the elimination of second pass review to save expenses, relying only on confidentiality and clawback agreements for protection of confidential client documents, most litigants today are unwilling to take the risks involved.
At the present time, in most cases, virtually no corporate clients are willing to dispense with final manual review of documents selected for production and rely solely on automated software for protections. The likelihood of error is simply still too high for this to be an acceptable risk in most cases for most clients. The damage caused by disclosure of some privileged communications cannot be fully repaired by clawback agreements.
As I explained in my series Secrets of Search, Parts One, Two and Three, the latest AI enhanced software is far better than keyword search, but not yet good enough to allow for a fully automated approach to protection review. Based on my informal surveys and discussions with attorneys around the country, I have found that confidentiality reviews are only omitted in certain non-litigation circumstances, such as when making corporate merger related productions to the government, or in cases where the data under review is very unlikely to contain confidential information, such as old data of an acquired company. Also, I have seen it done in bankruptcy cases, or by litigants in any type of case who otherwise simply could not afford to respond properly to discovery requests.
Although AI enhanced review is not yet good enough to rely on exclusively for protection review, it is certainly up to the task of first pass relevancy review. Attorneys should also use AI enhanced search to speed up the second pass manual review for protections.
Of course, it is the litigant's data and confidential information. If a litigant (or a third party responding to a subpoena) really does not care about disclosure of confidential information in a particular data set, is not required by law to protect that information, and wants to rely on clawback orders and confidentiality agreements alone, then that is their right. They may instruct their attorneys to skip this step to save costs. If you are an attorney receiving such an instruction, I recommend that you confirm the instruction in writing. You should also provide a full, detailed, written disclosure of the risks involved. Make sure the instruction is based on the client's full understanding that once a bell has been rung, it cannot be un-rung, and that, for instance, waiver of once-privileged attorney-client communications may open the door to waiver of others.
The Idea of Bottom Line Driven Review
After two years of analysis of the review cost problem I came up with an idea in 2008 that looked promising. It is simple in concept, which is one of its strengths (although its implementation can sometimes be complex). I have since tested and refined this method in two law firms with multiple types of cases and investigations. I have also spoken about this approach with many other attorneys, judges, and law professors, and taught this method at multiple CLE events around the country. I call it Bottom Line Driven Proportional Review and Production. A more technical description for it, the one I used in a legal methods patent application, is: System and Method for Establishing, Managing, and Controlling the Time, Cost, and Quality of Information Retrieval and Production in Electronic Discovery. But I usually just call it Bottom Line Driven Review.
The Bottom Line of Productions
The bottom line in e-discovery production is what it costs. Believe me, fellow lawyers, clients care about that … a lot! In Bottom Line Driven Proportional Review everything starts with the bottom line: what is the production going to cost? Despite what some lawyers and vendors may tell you, it is not an impossible question to answer. It takes an experienced lawyer's skill, but after a while you can get quite good at such estimation. It is basically a matter of estimating man-hours. With my method it becomes a reliable art that you can count on. It may never be exact, but the ranges can usually be predicted, subject of course to the target changing after the estimate is given. If the complaint is amended, or different evidence becomes relevant, then a change order may be required for the new specifications.
Price estimation is second nature to me, and an obvious thing to do before you begin work on any big project. I think that is primarily because I worked as a construction estimator out of college to save up money for law school back in the seventies. Estimating legal review costs is basically the same thing, projecting materials and labor costs. In construction you come up with prices per square foot. In e-discovery you estimate prices per file, as I will explain in detail in this essay.
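The per-file estimate works like a construction estimator's price per square foot: divide the labor rate by the review throughput. A minimal sketch, with all rates and speeds being hypothetical illustrations rather than actual billing figures:

```python
# Hypothetical per-file review cost estimate, analogous to pricing
# construction work per square foot. The function name and all figures
# are illustrative assumptions, not actual rates.

def cost_per_file(docs_per_hour: float, hourly_rate: float) -> float:
    """Estimated review cost per document: labor rate divided by throughput."""
    return hourly_rate / docs_per_hour

# Example: a reviewer coding 50 documents per hour at a blended
# rate of $60 per hour works out to $1.20 per file.
print(f"${cost_per_file(50, 60):.2f} per file")
```

The same arithmetic scales to teams: a blended rate across contract reviewers and supervising attorneys, divided by average team throughput, gives the per-file figure used in the project estimate.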
My strategy and methodology starts from the bottom line: projected review costs, defensible culling, and best practices of AI enhanced review. Under this method the producing party determines the number of documents to be subjected to costly review by calculating backwards from the bottom line of what it is willing, or required, to pay for the production.
Setting a Budget Proportional to the Case
The process begins with the producing party calculating the maximum amount of money appropriate to spend on the production: a budget. This requires not only an understanding of the production requests, but also a careful evaluation of the value and merits of the case. This is where the all-important proportionality element comes in.
The amount selected for the budget should be proportional to the monies and issues in the case. Any more than that is unduly burdensome and prohibited under Rule 26(b)(2)(C), Federal Rules of Civil Procedure and other rules that underlie what is now generally known as the Proportionality Principle.
The budget becomes the bottom line that drives the review and keeps the costs proportional. The producing party seeks to keep the total costs within that budget. The budget should be set by agreement of the parties, or at least without objection, or by court order.
The failure to estimate and project future costs, and to plan and limit reviews so that they stay within budget, accounts for much of today’s out of control e-discovery costs. Once you spend the money, it is very hard to have costs shifted to the requesting party. But if you raise objections and argue proportionality before the spend, then you will have a fair chance to constrain your expenses within a reasonable budget.
Under the Bottom Line Driven proportional approach, after analysis of the case merits, and determination of the maximum expense for production proportional to a case, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that can be reviewed for final decisions of relevance, confidentiality, privilege and other issues, and still remain within budget. The review costs you estimate must be based on best practices and be accurate (no puffing).
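The backwards calculation just described can be sketched in a few lines. All dollar figures here are hypothetical illustrations, not the author's actual rates:

```python
# Sketch of the backwards calculation: from a proportional budget and an
# estimated all-in cost per reviewed document, derive the maximum number
# of documents that can be reviewed while staying within budget.
# Figures are hypothetical.

def max_reviewable_docs(budget: float, cost_per_doc: float) -> int:
    """Documents reviewable without exceeding the budget (rounded down)."""
    return int(budget // cost_per_doc)

# Example: a $50,000 budget at an estimated $2.50 per document
# (review labor plus processing and hosting) allows 20,000 documents.
print(max_reviewable_docs(50_000, 2.50))
```

That document count then becomes the target for culling and ranked first pass review: the search effort aims to put the most likely relevant documents inside that ceiling.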
Following best practices, the producing party then uses advanced search techniques and quality controls in a first pass review to find the documents most likely to be relevant while staying within the number of documents allowed by the budget. Since predictive coding type AI enhanced software ranks all documents according to probable probative value, it is the perfect tool to facilitate bottom line driven review. The ranking feature makes it far easier to focus on the most important documents and stay within budget.
Good predictive coding software today evaluates the strength of the relevance and irrelevance of every document in the data set analyzed. That is one reason I was especially pleased when AI type software with reliable ranking abilities first came onto the market in 2011; I have since specialized in the best methods for using this software. Bottom line driven review was, and still can be, done without predictive coding ranking (I did so for years), but it is harder to be accurate without computerized ranking.
Using best methods and AI search with relevancy ranking allows you to get the most bang for your buck: the documents most likely to reveal the core truth. This in turn helps persuade the requesting party (or the court, should agreement not be reached) to go along with your proposed budget constraints.
Unfortunately, the use of AI software comes with its own transactional costs, which means it cannot be economically used in cases that are too “small.” Typically this means cases involving less than a $25,000 budget for document review. For these smaller cases the same bottom line approach should still be used, and it can still work fairly well, even if you do not have the benefit of expensive AI software and its ranking properties.
To Be Continued …
 Report can be found online at http://www.rand.org/pubs/monographs/MG1208.html
 There are essentially ten tasks that lawyers perform in e-discovery, three of which pertain to document review. They are described in the Electronic Discovery Best Practices found at EDBP.com. The three types of services pertaining to document review can be found at: http://www.edbp.com/search-review/. The functions performed by electronic discovery vendors, which includes non-legal services such as data processing, are outlined in the Electronic Discovery Reference Model found at EDRM.net.
 For an example of this industry standard two-step practice see Gabriel Techs., Corp. v. Qualcomm, Inc., No. 08CV1992 AJB (MDD), 2013 WL 410103 at *10 (S.D. Cal. Feb. 1, 2013) (cost award allowed to prevailing party of $2,829,349.10 for first-pass review by SMEs of one million documents and another $391,928.91 for a second review by contract lawyers.)
 It is included in the EDBP as step seven and described as Computer Assisted Review (CAR) found at: http://www.edbp.com/search-review/computer-assisted-review/.
 It is included in the EDBP as step eight and described as Protections found at: http://www.edbp.com/search-review/.
 Mt. Hawley Ins. Co. v. Felman Production, Inc., 2010 WL 1990555 (S.D. W. Va. May 18, 2010) (privileged documents accidentally produced resulting in waiver of privilege in spite of sophisticated counsel employing elaborate safeguards). Also see Diabetes Centers of America, Inc. v. Healthpia America, Inc., 2008 U.S. Dist. LEXIS 8362, 2008 WL 336382 (S.D. Tex. Feb. 5, 2008) (production errors made by both sides); Danis v. USN Communications, Inc., 2000 WL 1694325 (N.D. Ill. 2000) ($10,000 fine imposed against CEO personally when the young general counsel he hired to supervise e-discovery was grossly negligent).
 Tampa Bay Water v. HDR Engineering, Inc., Case No. 8:08-CV-2446-T-27TBM (M.D. Fla. Nov. 2, 2012) (also found at 2012 U.S. Dist. LEXIS 157631 and 2012 WL 5387830). The plaintiff alone inadvertently produced 23,000 privileged documents. The prevailing defendant in this case was awarded over twenty million dollars in fees and costs. Of this sum, $3,100,000 was awarded as a cost for e-discovery vendor processing and hosting of 2.7 million documents for review. Another $4,590,000 ($1.70 per file) is estimated to have been spent by one defendant in attorney fees to review the documents. See Losey, R., $3.1 Million e-Discovery Vendor Fee Was Reasonable in a $30 Million Case (e-Discovery Team, Aug. 4, 2013) found at https://e-discoveryteam.com/2013/08/04/3-1-million-e-discovery-vendor-fee-was-reasonable-in-a-30-million-case/#comment-60139.
 See: FN 4. Also see: Brookfield Asset Management, Inc. v. AIG Financial Products Corp., 2013 WL 142503 (S.D.N.Y. Jan. 7, 2013); Losey, R., Another Clawback Enforcement Order Shows the Importance of the Selection of Quality Vendors found at https://e-discoveryteam.com/2013/03/12/another-clawback-enforcement-order-shows-the-importance-of-the-selection-of-quality-vendors/
 The three part essay on Search is available online in one document found at: https://ralphlosey.files.wordpress.com/2013/03/secrets_of_search_2012_consolidated.pdf
 For a description of what I mean by AI enhanced search, which is also commonly called Predictive Coding, Technology Assisted Review (TAR), or Computer Assisted Review (CAR) see my page describing CAR found at https://e-discoveryteam.com/car/. Also see the descriptions for step seven in the EDBP at http://www.edbp.com/search-review/.
 I am going to explain the idea here with enough detail for attorneys experienced in e-discovery to be able to try it out. I urge you to do so. I have been training the attorneys in my current law firm, Jackson Lewis, in this method for years. We know that it works in all types of employment law cases, federal and state, large and small, especially when coupled with the doctrine of proportionality (also explained in this essay). In my prior law firm, Akerman Senterfitt, the method was tested and refined in multiple types of commercial litigation. Time and conflict checks permitting, I can consult with and assist attorneys outside of my firm on Bottom Line Driven Review, especially when it involves the use of artificial intelligence enhanced software (often called predictive coding) to search for evidence in big data. That is my current area of special interest, and Bottom Line Driven Review is particularly effective in large cases.
 See Rule 1, Rule 26(b)(2)(C), Rule 26(b)(2)(B), and Rule 26(g), Federal Rules of Civil Procedure; Commentary on Proportionality in Electronic Discovery, 11 SEDONA CONF. J. 289 (2010); Carroll, Proportionality in Discovery: A Cautionary Tale, 32 Campbell L. Rev. 455, 460 (2010); Losey, R., Good, Better, Best: a Tale of Three Proportionality Cases – Part Two found at https://e-discoveryteam.com/2012/04/15/good-better-best-a-tale-of-three-proportionality-cases-part-two/. Also see: Tamburo v. Dworkin, 2010 WL 4867346 (N.D. Ill. Nov. 17, 2010); Rimkus Consulting Group v. Cammarata, 688 F. Supp. 2d 598, 613 (S.D. Tex. 2010); Apple v. Samsung, 2013 WL 442365412 (N.D. Cal. Aug. 14, 2013); Wood v. Capital One Services, LLC, No. 5:09-CV-1445 (NPM/DEP), 2011 WL 2154279, at *1-3, *7 (N.D.N.Y. 2011); Daugherty v. Murphy, No. 1:06-cv-0878-SEB-DML, 2010 WL 4877720, at *5 (S.D. Ind. 2010); Willnerd v. Sybase, 2010 U.S. Dist. LEXIS 121658 (D. Idaho 2010); I-Med Pharma Inc. v. Biomatrix, 2011 WL 6140658 (D.N.J. Dec. 9, 2011); U.S. ex rel. McBride v. Halliburton Co., 272 F.R.D. 235 (D.D.C. 2011); DCG Sys., Inc. v. Checkpoint Techs, LLC, 2011 WL 5244356 (N.D. Cal. Nov. 2, 2011).
 Losey, R., Relevancy Ranking is the Key Feature of Predictive Coding Software found at https://e-discoveryteam.com/2013/08/25/relevancy-ranking-is-the-key-feature-of-predictive-coding-software/.