Predictive Coding 4.0 – Nine Key Points of Legal Document Review and an Updated Statement of Our Workflow – Part Six

October 16, 2016

This is the sixth installment of the article explaining the e-Discovery Team’s latest enhancements to electronic document review using Predictive Coding. Here are Parts One, Two, Three, Four and Five. This series explains the nine insights behind the latest upgrade to version 4.0 and the slight revisions these insights triggered to the eight-step workflow. We have already covered the nine insights. Now we will begin to review the revised eight-step workflow.

The eight-step chart provides a model of the Predictive Coding 4.0 methods. (You may download and freely distribute this chart without further permission, so long as you do not change it.) The circular flows depict the iterative steps specific to the predictive coding features. Steps four, five and six iterate until the active machine training reaches satisfactory levels, and thereafter final quality control and productions are done.

Although presented as sequential steps for pedagogical purposes, Predictive Coding 4.0 is highly adaptive to circumstances and does not necessarily follow a rigid linear order. For instance, some of the quality control procedures are used throughout the search and review, and rolling productions can begin at any time.

To fully understand the 4.0 method, it helps to see how it fits into an overall Dual-Filter Culling process. See License to Cull The Two-Filter Document Culling Method (2015). Still more information on predictive coding and electronic document review can be found in the over sixty articles published here on the topic since 2011. Reading helps, but we have found that the most effective way to teach this method, like any other legal method, is by hands-on guidance. Our eight-step workflow can be taught to any legal professional who already has experience with document review by the traditional second-chair type of apprenticeship training.

This final segment of our explanation of Predictive Coding 4.0 will include some of the videos that I made earlier this year describing our document review methods. See Document Review and Predictive Coding: an introductory course with 7 videos and 2,982 words. The first video below introduces the eight-step method. Once you get past my attempt at Star Wars humor in the opening credits of the video you will hear my seven-minute talk. It begins with why I think predictive coding and other advanced technologies are important to the legal profession and how we are now at a critical turning point of civilization.



Step One – ESI Communications

Good review projects begin with ESI Communications; they begin with talking. You need to understand and articulate the disputed issues of fact. If you do not know what you are looking for, you will never find it. That does not mean you know of specific documents. If you knew that, it would not be much of a search. It means you understand what needs to be proven at trial and what documents will have impact on the judge and jury. It also means you know the legal bounds of relevance, especially Rule 26(b)(1).


ESI Communications begin and end with the scope of the discovery, relevance and related review procedures. The communications are not only with opposing counsel or other requesting parties, but also with the client and the e-discovery team assigned to the case. These Talks should be facilitated by the lead e-Discovery specialist attorney assigned to the case. But they should include the active participation of the whole team, including all trial lawyers not otherwise very involved in the ESI review.

The purpose of all of this Talk is to give everyone an idea as to the documents sought and the confidentiality protections and other special issues involved. Good lines of communication are critical to that effort. This first step can sometimes be difficult, especially if there are many new members to the group. Still, a common understanding of relevance, the target searched, is critical to the successful outcome of the search. This includes the shared wisdom that the understanding of relevance will evolve and grow as the project progresses.

We need to Talk to understand what we are looking for. What is the target? What is the information need? What documents are relevant? What would a hot document look like? A common understanding of relevance by a review team, of what you are looking for, requires a lot of communication. Silent review projects are doomed to failure. They tend to stagnate and do not enjoy the benefits of Concept Drift, where a team’s understanding of relevance is refined and evolves as the review progresses. Yes, the target may move, and that is a good thing. See: Concept Drift and Consistency: Two Keys To Document Review Quality – Parts One, Two and Three.

Review projects are also doomed where the communications are one-way, lecture-down projects where only the SME talks. The reviewers must talk back, must ask questions. The input of reviewers is key. Their questions and comments are very important. Dialogue and active listening are required for all review projects, including ones with predictive coding.

You begin with analysis and discussions with your client, your internal team, and then with opposing counsel, as to what it is you are looking for and what the requesting party is looking for. The point is to clarify the information sought, the target. You cannot just stumble around and hope you will know it when you find it (and yet this happens all too often). You must first know what you are looking for. The target of most searches is the information relevant to disputed issues of fact in a case or investigation. But what exactly does that mean? If you encounter unresolvable disputes with opposing counsel on the scope of relevance, which can happen during any stage of the review despite your best efforts up-front, you may have to include the Judge in these discussions and seek a ruling.

Here is my video explaining the first step of ESI Communications.



“ESI Discovery Communications” is about talking to your review team, including your client and key witnesses; it is about talking to opposing counsel; and, eventually, if need be, talking to the judge at hearings. Friendly, informal talk is a good method to avoid the tendency to polarize and demonize “the other side,” to build walls and be distrustful and silent.

The amount of distrust today between attorneys is at an all-time high. This trend must be reversed. Mutually respectful talk is part of the solution. Slowing things down helps too. Do not respond to a provocative text or email until you calm down. Take your time to ponder any question, even if you are not upset. Take your time to research and consult with others first. This point is critical. The demand for instant answers is never justified, nor required under the rules of civil procedure. Think first and never respond out of anger. We are all entitled to mutual respect. You have a right to demand that. So do they.

This point about not actually speaking with people in real time, in person, or by phone or video is, to some extent, generational. Many younger attorneys seem to have an inherent loathing of the phone and speaking out loud. They let their thumbs do the talking. (This is especially true in e-discovery, where the professionals involved tend to be very computer oriented, not people oriented. I know because I am like that.) Meeting in person in real time is distasteful to many, not just Gen X. Many of us prefer to put everything in emails and texts and tweets and posts, etc. That may make it easier to pause and reflect, especially if you are loath to say in person that you do not know and will need to get back to them on that. But real-time talking is important to full communication. You may need to force yourself into real-time interpersonal interactions. Some people are better at real-time talk than others, just as some are better at fast comprehension of documents than others. It is often a good idea for a team to have a designated talker, especially when it comes to speaking with opposing counsel or the client.

In e-discovery, where the knowledge levels are often extremely different, with one side knowing more about the subject than the other, the first step of ESI Communications or Talk usually requires patient explanations. ESI Communications often require some educational effort by the attorneys with greater expertise. The trick is to do that without being condescending or too pedantic, and, in my case at least, without losing your patience.


Some object to the whole idea of helping opposing counsel by educating them, but the truth is, this helps your clients too. You are going to have to explain everything when you take a dispute to the judge, so you might as well start upfront. It helps save money and moves the case along. Trust building is a process best facilitated by honest, open talk.

I use the term Talk to invoke the term listen as well. That is one reason we also refer to the first step as “Relevance Dialogues,” because that is exactly what it should be, a back-and-forth exchange. Top-down lecturing is not intended here. Even when a judge talks, where the relationship is truly top down, the judge always listens before rendering his or her decision. You are given the right to be heard at a hearing, to talk and be listened to. Judges listen a lot and usually ask many questions. Attorneys should do the same. Never just talk to hear the sound of your own voice. As Judge David Waxse likes to say, talk to opposing counsel as if the judge were listening.

The same rules apply when communicating about discovery with the judge. I personally prefer in-person hearings, or at least telephonic, as opposed to just throwing memos back and forth. This is especially true when the memorandums have very short page limits. Dear Judges: e-discovery issues are important and can quickly spiral out of control without your prompt attention. Please give us the hearings and time needed. Issuing easy orders that just split the baby will do nothing but pour gas on a fire.

In my many years of lawyering I have found that hearings and meetings are much more effective than exchanging papers. Dear brothers and sisters in the Bar: stop hating, stop distrusting and vilifying, and start talking to each other. That means listening too. Understand the other side. Be professional. Try to cooperate. And stop taking extreme positions that assume the judge will just split the baby.

It bears emphasis that by Talk in this first step we intend dialogue. A true back and forth. We do not intend argument, nor winners and losers. We do intend mutual respect. That includes respectful disagreement, but only after we have heard each other out and understood our respective positions. Then, if our talks with the other side have reached an impasse, at least on some issues, we request a hearing from the judge and set out the issues for the judge to decide. That is how our system of justice and discovery are designed to work. If you fail to talk, you not only doom the document review project, you doom the whole case to unnecessary expense and frustration.

This dialogue method is based on a Cooperative approach to discovery that was promoted by the late, great Richard Braman of The Sedona Conference. Cooperation is not only a best practice, but is, to a certain extent, a minimum standard required by rules of professional ethics and civil procedure. The primary goal of these dialogues for document review purposes is to obtain a common understanding of the e-discovery requests and reach agreement on the scope of relevancy and production.

ESI Communications in this first step may, in some cases, require disclosure of the actual search techniques used, which is traditionally protected by work product. The disclosures may also sometimes include limited disclosure of some of the training documents used, typically just the relevant documents. See Judge Andrew Peck’s 2015 ruling on predictive coding, Rio Tinto v. Vale, 2015 WL 872294 (S.D.N.Y. Mar. 2, 2015). In Rio Tinto Judge Peck wisely modified somewhat his original views on disclosure stated in Da Silva Moore v. Publicis Groupe, 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012) (approved and adopted in Da Silva Moore v. Publicis Groupe, 2012 WL 1446534, at *2 (S.D.N.Y. Apr. 26, 2012)). Judge Peck no longer thinks that parties should necessarily disclose any training documents, and may instead:

… insure that training and review was done appropriately by other means, such as statistical estimation of recall at the conclusion of the review as well as by whether there are gaps in the production, and quality control review of samples from the documents categorized as non-responsive. See generally Grossman & Cormack, Comments, supra, 7 Fed. Cts. L.Rev. at 301-12.

The Court, however, need not rule on the need for seed set transparency in this case, because the parties agreed to a protocol that discloses all non-privileged documents in the control sets. (Attached Protocol, ¶¶ 4(b)-(c).) One point must be stressed — it is inappropriate to hold TAR to a higher standard than keywords or manual review. Doing so discourages parties from using TAR for fear of spending more in motion practice than the savings from using TAR for review.

Id. at *3. Also see Rio Tinto v. Vale, Stipulation and Order Re: Revised Validation and Audit Protocols for the use of Predictive Coding in Discovery, 14 Civ. 3042 (RMB) (AJP), (order dated 9/2/15 by Maura Grossman, Special Master, and adopted and ordered by Judge Peck on 9/8/15).

Judge Peck here follows the current prevailing view on disclosure, which I also endorse. Disclose the relevant documents used in active machine learning, but not the irrelevant documents used in training. If there are borderline, grey area documents classified as irrelevant, you may need to disclose these types of documents by description, not actual production. Again, talk to the requesting party about where you are drawing the line. Talk about the grey area documents that you encounter. If they disagree, ask for a ruling before your training is complete.


The goals of Rule 1 of the Federal Rules of Civil Procedure (just, speedy and inexpensive) are impossible in all phases of litigation, not just discovery, unless attorneys communicate with each other. The parties may hate each other and refuse to talk. That sometimes happens. But the attorneys must be above the fray. That is a key purpose and function of an attorney in a dispute. It is sad that so many attorneys do not seem to understand that. If you are faced with such an attorney, my best advice is to lead by example, document the belligerence and seek the help of your presiding judge.

Although Talk to opposing counsel is important, even more important is talking within the team. It is an important method of quality control and efficient project management. Everyone needs to be on the same page of relevance and discoverability. Work needs to be coordinated. Internal team Talk needs to be very close. Although a Vulcan mind meld might be ideal, it is not really necessary. Still, during a project a steady flow of talk, usually in the form of emails or chats, is normal and efficient. Clients should never complain about time spent communicating to manage a document review project. It can save a tremendous amount of money in the long run, so long as it is focused on the task at hand.

Step Two – Multimodal ECA

Multimodal Early Case Assessment – ECA – summarizes the second step in our eight-step workflow. We used to call the second step “Multimodal Search Review.” It is still the same activity, but we tweaked the name to emphasize the ECA significance of this step. After we have an idea of what we are looking for from ESI Communications in step one, we start to use every tool at our disposal to try to find the relevant documents. Every tool, that is, except for active machine learning. Our first look at the documents is our look, not the machine’s. That is not because we do not trust the AI’s input. We do. It is because there is no AI yet. The predictive coding only begins after you feed training documents into the machine. That happens in step four.


Our Multimodal ECA step two does not take that long, so the delay in bringing in our AI is usually short. In our experiments at TREC in 2015 and 2016 under the auspices of NIST, where we skipped steps three and seven to save time, and necessarily had little ESI Communications in step one, we would often complete simple document reviews of several hundred thousand documents in just a few hours. We cannot match these results in real-life legal document review projects because the issues in lawsuits are usually much more complicated than the issues presented by most topics at TREC. Also, we cannot take the risks of making mistakes in a real legal project that we took in an academic event like TREC.

Again, the terminology revision to say Multimodal ECA is more a change of style than substance. We have always worked in this manner. The name change is just to better convey the idea that we are looking for the low hanging fruit, the easy to find documents. We are getting an initial assessment of the data by using all of the tools of the search pyramid except for the top tier active machine learning. The AI comes into play soon enough in steps four and five, sometimes as early as the same day.


I have seen projects where key documents are found during the first ten minutes of looking around. Usually the secrets are not revealed so easily, but it does happen. Step two is the time to get to know the data, run some obvious searches, including any keyword requests from opposing counsel. You use the relevant and irrelevant documents you find in step two as the documents you select in step four to train the AI.

In the process of this initial document review you start to get a better understanding of the custodians, their data and relevance. This is what early case assessment is all about. You will find the rest of the still hidden relevant documents in the iterated rounds of machine training and other searches that follow. Here is my video description of step two.



Although we speak of searching for relevant documents in step two, it is important to understand that many irrelevant documents are also incidentally found and coded in that process. Active machine learning does not work by training on relevant documents alone. It must also include examples of irrelevant documents. For that reason we sometimes actively search for certain kinds of irrelevant documents to use in training. One of our current research experiments with Kroll Ontrack is to determine the best ratios between relevant and irrelevant documents for effective document ranking. See TREC reports at Mr. EDR as updated from time to time. At this point we have that issue nailed.

The multimodal ECA review in step two is carried out under the supervision of the Subject Matter Experts on the case. They make final decisions where there is doubt concerning the relevance of a document or document type. The SME role is typically performed by a team, including the partner in charge of the case – the senior SME – and senior associates, and e-Discovery specialist attorney(s) assigned to the case. It is, or should be, a team effort, at least in most large projects. As previously described, the final decision on scope is made by the senior SME, who in turn is acting as the predictor of the court’s views. The final, final authority is always the Judge. The chart below summarizes the analysis of the SME and judge on the discoverability of any document. See Predictive Coding 4.0, Part Five.


When I do a project, acting as the e-Discovery specialist attorney for the case, I listen carefully to the trial lawyer SME as he or she explains the case. By extensive Q&A the members of the team understand what is relevant. We learn from the SME. It is not exactly a Vulcan mind-meld, but it can work pretty well with a cohesive team.  Most trial lawyers love to teach and opine on relevance and their theory of the case.

General Moltke

Although a good SME team communicates and plans well, they also understand, typically from years of experience, that the intended relevance scope is like a battle plan before the battle. As the famous German military strategist, General Moltke the Elder said: No battle plan ever survives contact with the enemy. So too no relevance scope plan ever survives contact with the corpus of data. The understanding of relevance will evolve as the documents are studied, the evidence is assessed, and understanding of what really happened matures. If not, someone is not paying attention. In litigation that is usually a recipe for defeat. See Concept Drift and Consistency: Two Keys To Document Review Quality – Parts One, Two and Three.

The SME team trains and supervises the document review specialists, aka contract review attorneys, who usually then do a large part of the manual reviews (step six), and few if any searches. Working with review attorneys is a constant iterative process where communication is critical. Although I sometimes use an army-of-one approach where I do everything myself (that is how I did the EDI Oracle competition and most of the TREC topics), my preference now is to use two or three reviewers to help with the document review. See Army of One: Multimodal Single-SME Approach To Machine Learning. With good methods, including culling methods, and good software, it is rarely necessary to use more reviewers than that. With the help of strong AI, say that included in Mr. EDR, we can easily classify a million or so documents for relevance with that size team. More reviewers than that may well be needed for complex redaction projects and other production issues, but not for a well-designed first-pass relevance search.

One word of warning when using document reviewers: it is very important for all members of the SME team, not just the reviewers, to have direct and substantial contact with the actual documents. For instance, everyone involved in the project should see all hot documents found in any step of the process. It is especially important for the SME trial lawyer at the top of the expert pyramid to see them, but that is rarely more than a few hundred documents, often just a few dozen. Beyond that, the top SME need only see the novel and grey area documents that are encountered, where it is unclear on which side of the relevance line they should fall in accord with the last instructions. Again, the burden on the senior, and often technologically challenged, SME attorneys is fairly light under these Version 4.0 procedures.

The SME team relies on a primary SME, who is typically the trial lawyer in charge of the whole case, including all communications on relevance to the judge and opposing counsel. Thereafter, the head SME is sometimes only consulted on an as-needed basis to answer questions and make specific decisions on the grey area documents. There are always a few uncertain documents that need elevation to confirm relevance, but as the review progresses, their number usually decreases, and so the time and attention of the senior SME decreases accordingly.

Step Three – Random Prevalence

There has been no change in this step from Version 3.0 to Version 4.0. The third step, which is not necessarily chronological, is essentially a computer function with statistical analysis. Here you create a random sample and analyze the results of expert review of the sample. Some review is thus involved in this step and you have to be very careful that it is correctly done. This sample is taken for statistical purposes to establish a baseline for quality control in step seven. Typically prevalence calculations are made at this point. Some software also uses this random sampling selection to create a control set. As explained at length in Predictive Coding 3.0, we do not use a control set because it is so unreliable. It is a complete waste of time and money and does not produce reliable recall estimates. Instead, we take a random sample near the beginning of a project solely to get an idea on Prevalence, meaning the approximate number of relevant documents in the collection.


Unless we are in a very rushed situation, such as in the TREC projects, where we would do a complete review in a day or two, or sometimes just a few hours, we like to take the time for the sample and prevalence estimate.

It is all about getting a statistical idea as to the range of relevant documents that likely exist in the data collected. This is very helpful for a number of reasons, including proportionality analysis (importance of the ESI to the litigation and cost estimates) and knowing when to stop your search, which is part of step seven. Knowing the number of relevant documents in your dataset can be very helpful, even if that number is a range, not exact. For example, you can know from a random sample that there are between four thousand and six thousand relevant documents. You cannot know there are exactly five thousand relevant documents. See: In Legal Search Exact Recall Can Never Be Known. Still, knowledge of the range of relevant documents (red in the diagram below) is helpful, albeit not critical to a successful search.


In step three an SME is only needed to verify the classifications of any grey area documents found in the random sample. The random sample review should be done by one reviewer, typically your best contract reviewer. They should be instructed to code as Uncertain any documents that are not obviously relevant or irrelevant based on their instructions and step one. All relevance codings should be double checked, as well as Uncertain documents. The senior SME is only consulted on an as-needed basis.

Document review in step three is limited to the sample documents. Aside from that, this step is a computer function and mathematical analysis. Pretty simple after you do it a few times. If you do not know anything about statistics, and your vendor is also clueless on this (rare), then you might need a consulting statistician. Most of the time this is not necessary and any competent Version 4.0 vendor expert should be able to help you through it.

It is not important to understand all of the math, just that random sampling produces a range, not an exact number. If your sample size is small, then the range will be very wide. If you want to cut your range (known in statistics as the confidence interval) in half, you have to quadruple your sample size. This is a general rule of thumb that I explained in tedious mathematical detail several years ago in Random Sample Calculations And My Prediction That 300,000 Lawyers Will Be Using Random Sampling By 2022. Our Team likes to use a fairly large sample size of about 1,533 documents, which creates a confidence interval of plus or minus 2.5%, subject to a confidence level of 95% (meaning the true value will lie within that range 95 times out of 100). More information on sample size is summarized in the graph below. Id.
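The rule of thumb can be sketched in a few lines of Python. This is the simplified normal-approximation formula only, assuming the most conservative 50% prevalence; exact sample-size calculators, including whatever produced the 1,533 figure quoted above (likely a finite population correction), may differ slightly.

```python
import math

def sample_size(interval, z=1.96, prevalence=0.5):
    """Sample size for a given +/- confidence interval.

    Normal approximation, using the most conservative assumption
    of 50% prevalence (which maximizes the variance term).
    z = 1.96 corresponds to a 95% confidence level.
    """
    return math.ceil(z ** 2 * prevalence * (1 - prevalence) / interval ** 2)

# +/- 2.5% at 95% confidence: about 1,537 documents,
# close to the 1,533 figure quoted in the text
n = sample_size(0.025)

# Halving the interval to +/- 1.25% roughly quadruples the sample size
n_half = sample_size(0.0125)
```

Note how the interval appears squared in the denominator: that is the whole reason halving the range quadruples the work.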


The picture below this paragraph illustrates a data cloud where the yellow dots are the sampled documents from the grey dot total, and the hard-to-see red dots are the relevant documents found in that sample. Although this illustration is from a real project we had, it shows a dataset that is unusual in legal search because the prevalence here was high, between 22.5% and 27.5%. In most data collections searched in the law today, where the custodian data has not been filtered by keywords, the prevalence is far less than that, typically less than 5%, maybe even less than 0.5%. The low prevalence increases the range size, the uncertainties, and requires a binomial calculation adjustment to determine the statistically valid confidence interval, and thus the true document range.
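For readers who want to see what such a binomial-style adjustment looks like, here is a minimal sketch using the Wilson score interval, one common adjustment for low-prevalence proportions (an exact Clopper-Pearson calculation would give slightly different numbers). The sample counts below are hypothetical, chosen only to illustrate a low-prevalence review.

```python
import math

def wilson_interval(hits, n, z=1.96):
    """Wilson score confidence interval for a sample proportion.

    Behaves much better than the simple normal approximation
    when prevalence is low, as it usually is in legal search.
    """
    p = hits / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - margin, center + margin

# Hypothetical sample: 15 relevant documents found in a 1,533-document sample
lo, hi = wilson_interval(15, 1533)

# Scale the prevalence range to a 1,000,000-document collection
# to get the estimated range of relevant documents
doc_range = (round(lo * 1_000_000), round(hi * 1_000_000))
```

Notice that the resulting interval is not symmetric around the raw sample proportion; that asymmetry is exactly what the simple plus-or-minus normal approximation misses at low prevalence.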


For example, in a typical legal project with a prevalence of a few percent, it would be common to see a range of between 20,000 and 60,000 relevant documents in a 1,000,000 document collection. Still, even with this very large range, we find it useful to at least have some idea of the number of relevant documents that we are looking for. That is what the Baseline step can provide to you, nothing more and nothing less.

As mentioned, your vendor can probably help you with these statistical estimates. Just do not let them tell you that it is one exact number. It is always a range. The one-number approach is just a shorthand for the range. It is simply a point projection near the middle of the range. The one-number point projection is the top of the typical probability bell curve shown right, which illustrates a 95% confidence level distribution. The top is just one possibility, albeit slightly more likely than the end points. The true value could be anywhere in the blue range.

To repeat, the step three prevalence baseline number is always a range, never just one number. Going back to the relatively high prevalence example, the bell curve below shows a point projection of 25% prevalence, with a range of from 22.5% to 27.5%, creating a range of between 225,000 and 275,000 relevant documents in a 1,000,000 document collection. This is shown below.


The important point that many vendors and other “experts” often forget to mention is that you can never know exactly where within that range the true value may lie. Plus, there is always a small possibility, 5% when using a sample size based on a 95% confidence level, that the true value may fall outside of that range. The collection may, for example, contain only 200,000 relevant documents. This means that even with a high prevalence project with datasets that approach the Normal Distribution of 50% (here meaning half of the documents are relevant), you can never know that there are exactly 250,000 relevant documents just because that is the mid-point or point projection. You can only know that there are between 225,000 and 275,000 relevant documents, and even that range may be wrong 5% of the time. Those uncertainties are inherent limitations of random sampling.
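The arithmetic behind those figures is simple enough to sketch, assuming a 1,000,000 document collection and the plus-or-minus 2.5% interval discussed above:

```python
collection_size = 1_000_000
point_projection = 0.25   # 25% prevalence, the mid-point of the range
interval = 0.025          # plus or minus 2.5% confidence interval

# The point projection is shorthand for this range, never an exact count
low = round((point_projection - interval) * collection_size)   # 225,000
high = round((point_projection + interval) * collection_size)  # 275,000
```

The honest answer is the pair (low, high), not the single 250,000 mid-point, and even that pair will be wrong about 5% of the time at a 95% confidence level.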

Shame on the vendors who still perpetuate that myth of certainty. Lawyers can handle the truth. We are used to dealing with uncertainties. All trial lawyers talk in terms of probable results at trial, and risks of loss, and often calculate a case’s settlement value based on such risk estimates. Do not insult our intelligence by a simplification of statistics that is plain wrong. Reliance on such erroneous point projections alone can lead to incorrect estimates as to the level of recall that we have attained in a project. We do not need to know the math, but we do need to know the truth.

The short video that follows will briefly explain the Random Baseline step, but does not go into the technical details of the math or statistics, such as the use of the binomial calculator for low prevalence. I have previously written extensively on this subject.

If you prefer to learn stuff like this by watching cute animated robots, then you might like: Robots From The Not-Too-Distant Future Explain How They Use Random Sampling For Artificial Intelligence Based Evidence Search. But be careful, their view is version 1.0 as to control sets.

Thanks again to William Webber and other scientists in this field who helped me out over the years to understand the Bayesian nature of statistics (and reality).



To be continued …

The Law’s “Reasonable Man,” Judge Haight, Love, Truth, Justice, “Go Fish” and Why the Legal Profession Is Not Doomed to be Replaced by Robots

June 29, 2016

Reasonability is a core concept of the law and foundation of our system of justice. Reason, according to accepted legal doctrine, is how we judge the actions of others and determine right from wrong. We do not look to Truth and Love for Justice, we look to Truth and Reason. If a person’s actions are reasonable, then, as a general matter, they are good and should not be punished, no matter what the emotional motives behind the actions. It is an objective standard. Actions judged as unreasonable are not good, no matter the emotional motive (think mercy killing).

Irrational actions are discouraged by law, and, if they cause damages, they are punished. The degree of punishment slides according to how unreasonable the behavior was and the extent of damages caused. Bad behavior ranges from the barely negligent – a close question – to intentionally bad, scienter. Analysis of reasonability in turn always depends on the facts and circumstances surrounding the actions being judged.

Reasonability Depends on the Circumstances

Whenever a lawyer is asked a legal question they love to start the answer by pointing out that it all depends. We are trained to see both sides, to weigh the evidence. We dissect, assess and evaluate degrees of reasonability according to the surrounding circumstances. We deal with reason, logic and cold hard facts. Our recipe for justice is simple: add reason to facts and stir well.

The core concept of reasonability not only permeates negligence and criminal law, it underlies discovery law as well. We are constantly called upon to evaluate the reasonability of efforts to save, find and produce electronically stored information. This evaluation of reasonability always depends on the facts. It requires more than information. It requires knowledge of what the information means.

Perfect efforts are not required in the law, but reasonable efforts are. Failure to make such efforts can be punished by the court, with the severity of the punishment contingent on the degree of unreasonability and extent of damages. Again, this requires knowledge of the true facts of the efforts, the circumstances.

In discovery, litigants and their lawyers are not permitted to make anything less than reasonable efforts to find the information requested. They are not permitted to make sub-standard, negligent efforts, and certainly not grossly negligent efforts. Let us not even talk about intentionally obstructive or defiant efforts. The difference between good enough practice – meaning reasonable efforts – and malpractice is where the red line of negligence is drawn.

Bagley v. Yale

Yale Law Professor Constance Bagley

One of my favorite district court judges – 86-year-old Charles S. Haight – pointed out the need to evaluate the reasonability of e-discovery efforts in a well-known, at this time still ongoing, employment discrimination case, Bagley v. Yale, Civil Action No. 3:13-CV-1890 (CSH). See, e.g., Bagley v. Yale University, 42 F. Supp. 3d 332 (D. Conn. 2014). On April 27, 2015, Judge Haight considered Defendant’s Motion for Protective Order.

The plaintiff, Constance Bagley, wanted her former employer, Yale University, to look through the emails of more witnesses to respond to her request for production. The defendant, Yale University, said it had already done enough, that it had reviewed the emails of several custodians and should not be required to do more. Judge Haight correctly analyzed this dispute as requiring his judgment on the reasonability of Yale’s efforts. He focused on Rule 26(b)(2)(B), involving the “reasonable accessibility” of certain ESI, and the reasonable efforts requirements under then Rule 26(b)(2)(C) (now Rule 26(b)(1) – proportionality factors under the 2015 Rules Amendments). In the judge’s words:

Yale can — indeed, it has — shown that the custodians’ responsive ESI is not readily accessible. That is not the test. The question is whether this information is not reasonably accessible: a condition that necessarily implies some degree of effort in accessing the information. So long as that creature of the common law, the reasonable man,[6] paces the corridors of our jurisprudence, surrounding circumstances matter.

[6] The phrase is not gender neutral because that is not the way Lord Coke spoke.

Bagley v. Yale, Ruling on Defendant’s Motion for Protective Order (Doc. 108) (April 27, 2015) (emphasis added).

The Pertinent e-Discovery Facts of Bagley v. Yale

Judge Haight went on to deny the motion for protective order by defendant Yale University, his alma mater, by evaluation of the facts and circumstances. Here the plaintiff originally wanted defendant to review for relevant documents the ESI of 24 custodians that contained certain search terms. The parties later narrowed the list of terms and reduced the custodian count from 24 to 10. The defendant began a linear review of each and every document. (Yes, their plan was to have a paralegal or attorney look at each and every document with a hit, instead of using more sophisticated approaches, e.g., concept search or predictive coding.) Here is Judge Haight’s description:

Defendants’ responsive process began when University staff or attorneys commandeered — a more appropriate word than seized — the computer of each of the named custodians. The process of ESI identification and production then “required the application of keyword searches to the computers of these custodians, extracting the documents containing any of those keywords, and then reading every single document extracted to determine whether it is responsive to any of the plaintiff’s production requests and further to determine whether the document is privileged.” Defendants’ Reply Brief [Doc. 124], at 2-3. This labor was performed by Yale in-house paralegals and lawyers, and a third-party vendor the University retained for the project.

It appears from the opinion that Yale was the victim of a poorly played game of Go Fish, where each side tries to find relevant documents by guessing keywords without studying the data, much less using other search methods. Losey, R., Adventures in Electronic Discovery (West 2011); Child’s Game of ‘Go Fish’ is a Poor Model for e-Discovery Search. This is a very poor practice, as I have often argued, and frequently results in surprise burdens on the producing party.

This is what happened here. As Judge Haight explained, Yale did not complain of these keywords and custodian count (ten instead of five), until months later when the review was well underway:

[I]t was not until the parties had some experience with the designated custodians and search terms that the futility of the exercise and the burdens of compliance became sufficiently apparent to Defendants to complain of them.

Too bad. If they had tested the keywords first before agreeing to review all hits, instead of following the Go Fish approach, none of this would have happened. National Day Laborer Organizing Network v. US Immigration and Customs Enforcement Agency, 877 F.Supp.2d 87 (SDNY, 2012) (J. Scheindlin) (“As Judge Andrew Peck — one of this Court’s experts in e-discovery — recently put it: “In too many cases, however, the way lawyers choose keywords is the equivalent of the child’s game of `Go Fish’ … keyword searches usually are not very effective.” FN 113“); Losey, R., Poor Plaintiff’s Counsel, Can’t Even Find a CAR, Much Less Drive One (9/1/13).

After reviewing the documents of only three custodians, following the old-fashioned, buggy-whip method of looking at one document after another (linear review), the defendant complained as to the futility of their effort to the judge. They alleged that the effort:

… required paralegals and lawyers to review approximately 13,393 files, totaling 4.5 gigabytes, or the equivalent of about 450,000 pages of emails. Only 6% of this data was responsive to Plaintiff’s discovery request: about 300 megabytes, or about 29,300 pages of emails. In excess of 95% of this information, while responsive to the ESI request, has absolutely nothing to do with any of the issues in this case. Thus, defendants’ lawyers and paralegals reviewed approximately 450,000 pages of material in order to produce less than 1,500 pages of information which have any relationship whatsoever to this dispute; and the majority of the 1,500 pages are only marginally relevant.

I do not doubt that at all. It is typical in cases like this. What do you expect from blind negotiated keyword search and linear review? For less effort, try driving a CAR instead of walking. As Judge Scheindlin said in National Day Laborer back in 2012:

There are emerging best practices for dealing with these shortcomings and they are explained in detail elsewhere.[114] There is a “need for careful thought, quality control, testing, and cooperation with opposing counsel in designing search terms or `keywords’ to be used to produce emails or other electronically stored information.”[115] And beyond the use of keyword search, parties can (and frequently should) rely on latent semantic indexing, statistical probability models, and machine learning tools to find responsive documents.[116] Through iterative learning, these methods (known as “computer-assisted” or “predictive” coding) allow humans to teach computers what documents are and are not responsive to a particular FOIA or discovery request and they can significantly increase the effectiveness and efficiency of searches. In short, a review of the literature makes it abundantly clear that a court cannot simply trust the defendant agencies’ unsupported assertions that their lay custodians have designed and conducted a reasonable search.

National Day Laborer Organizing Network, supra 877 F.Supp.2d at pgs. 109-110.
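The burden of Yale’s blind keyword-plus-linear-review approach is easy to quantify. A few lines of arithmetic, using only the figures quoted above from Defendants’ own brief (the page-count equivalences are the brief’s), make the point:

```python
# Figures quoted from Defendants' brief in Bagley v. Yale
# (review of the first three custodians).
pages_reviewed = 450_000     # pages of email reviewed by lawyers/paralegals
pages_responsive = 29_300    # pages responsive to the ESI request
pages_relevant = 1_500       # pages with any relationship to the dispute

responsive_rate = pages_responsive / pages_reviewed   # ~6.5% by page count
relevant_rate = pages_relevant / pages_reviewed       # ~0.33%

# Pages reviewed for every arguably relevant page produced.
review_burden = pages_reviewed / pages_relevant       # 300 to 1
```

In other words, a reviewer looked at roughly 300 pages for every page that had any bearing on the dispute, which is exactly the kind of yield that negotiated keywords with no testing tend to produce.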

Putting aside the reasonability of search and review methods selected, an issue never raised by the parties and not before the court, Judge Haight addressed whether the defendant should be required to review all ten custodians in these circumstances. Here is Judge Haight’s analysis:

Prior to making this motion, Yale had reviewed the ESI of a number of custodians and produced the fruits of those labors to counsel for Bagley. Now, seeking protection from — which in practical terms means cessation of — any further ESI discovery, the University describes in vivid, near-accusatory prose the considerable amount of time and treasure it has already expended responding to Bagley’s ESI discovery requests: an exercise which, in Yale’s non-objective and non-binding evaluation, has unearthed no or very little information relevant to the lawsuit. Yale’s position is that given those circumstances, it should not be required to review any additional ESI with a view toward producing any additional information in discovery. The contention is reminiscent of a beleaguered prizefighter’s memorable utterance some years ago: “No mas!” Is the University entitled to that relief? Whether the cost of additional ESI discovery warrants condemnation of the total as undue, thereby rendering the requested information not reasonably accessible to Yale, presents a legitimate issue and, in my view, a close question.

Judge Charles Haight (“Terry” to his friends) analyzed the facts and circumstances to decide whether Yale should continue its search and review of four more custodians. (It was five more, but Yale reviewed one while the motion was pending.) Here is his summary:

Defendants sum up the result of the ESI discovery they have produced to Plaintiff to date in these terms: “In other words, of the 11.88 gigabytes of information[3](which is the equivalent of more than 1 million pages of email files) that has so far been reviewed by the defendant, only about 8% of that information has been responsive and non-privileged. Furthermore, only a small percentage of those documents that are responsive and non-privileged actually have any relevance to the issues in this lawsuit.” Id., at 4-5.  . . .

[3] 11.88 gigabytes is the total of 4.5 gigabytes (produced by review of the computers of Defendant custodians Snyder, Metrick and Rae) and 7.38 gigabytes (produced by review of the computers of the additional five custodians named in text).

Defendants assert on this motion that on the basis of the present record, “the review of these remaining documents will amount to nothing more than a waste of time and money. This Court should therefore enter a protective order relieving the defendant[s] from performing the requested ESI review.” Id.  . . .

Ruling in Bagley v. Yale

Judge Haight, a wise senior judge who has seen and heard it all before, found that under these facts Yale had not yet made a reasonable effort to satisfy their discovery obligations in this case. He ordered Yale to review the email of four more custodians. That, he decided, would be a reasonable effort. Here is Judge Haight’s explanation of his analysis of reasonability, which, in my view, is unaffected by the 2015 Rule Amendments, specifically the change to Rule 26(b)(1).

In the case at bar, the custodians’ electronically stored information in its raw form was immediately accessible to Yale: all the University had to do was tell a professor or a dean to hand over his or her computer. But Bagley’s objective is to discover, and Defendants’ obligation is to produce, non-privileged information relevant to the issues: Yale must review the custodians’ ESI and winnow it down. That process takes time and effort; time and effort can be expensive; and the Rule measures the phrase “not reasonably accessible” by whether it exposes the responding party to “undue cost.” Not some cost: undue cost, an adjective Black’s Law Dictionary (10th ed. 2014 at 1759) defines as “excessive or unwarranted.” . . .

In the totality of circumstances displayed by the case at bar, I think it would be an abuse of discretion to cut off Plaintiff’s discovery of Defendants’ electronically stored information at this stage of the litigation. Plaintiff’s reduction of custodians, from the original 24 targeted by Defendants’ furiously worded Main Brief to the present ten, can be interpreted as a good-faith effort by Plaintiff to keep the ESI discovery within permissible bounds. Plaintiff’s counsel say in their Opposing Brief [Doc. 113] at 2: “Ironically, this last production includes some of the most relevant documents produced to date.” While relevance, like beauty, often lies in the eyes of the beholder, and Defendants’ counsel may not share the impressions of their adversaries, I take the quoted remark to be a representation by an officer of the Court with respect to the value and timing of certain evidence which has come to light during this discovery process. The sense of irritated resignation conveyed by the familiar aphorism — “it’s like looking for a needle in a haystack” — does not exclude the possibility that there may actually be a needle (or two or three) somewhere in the haystack, and sharp needles at that. Plaintiff is presumptively entitled to search for them.

As Judge Haight understood when he said that the “Plaintiff is presumptively entitled to search for them,” the search effort actually falls upon the defendant, not the plaintiff. The law requires the defendant to expend reasonable efforts to search for the needles in the haystack that the plaintiff would like found. Of course, if those needles are not there, no amount of effort can find them. Still, no one knows in advance (although probabilities can be calculated) whether there are hot documents left to be found, so reasonable efforts are often required to show that they are not there. This can be difficult, as any e-discovery lawyer well knows.
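One standard way those probabilities can be calculated is an elusion sample: draw a random sample from the unreviewed documents and see how many relevant ones turn up. When the sample comes back empty, the statistical “rule of three” gives a quick 95% upper bound on how much could still be hiding there. This sketch is an illustration of that standard shortcut, not a method taken from the opinion; the sample size is hypothetical.

```python
def rule_of_three_upper_bound(sample_size):
    """95% upper bound on prevalence when a simple random sample of
    `sample_size` documents contains ZERO relevant documents.
    The classic 'rule of three' approximation, reasonable for
    sample sizes of roughly 30 or more."""
    return 3 / sample_size

# Hypothetical: a 1,500-document elusion sample of the null set
# finds no relevant documents. We can then be 95% confident that
# fewer than 0.2% of the null-set documents are relevant.
bound = rule_of_three_upper_bound(1500)
```

A clean elusion sample does not prove the haystack is needle-free, but it lets the producing party put a defensible ceiling on what could remain, which is often what “reasonable efforts” turns on.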

Faced with this situation most e-discovery specialists will tell you the best solution is to cooperate, or at least try. If your cooperative efforts fail and you seek relief from the court, it needs to be clear to the judge that you did try. If the judge thinks you are just another unreasonable, over-assertive lawyer, your efforts are doomed. This is apparently part of what was driving Judge Haight’s analysis of “reasonable” as the following colorful, one might say “tasty,” quote from the opinion shows:

A recipe for a massive and contentious adventure in ESI discovery would read: “Select a large and complex institution which generates vast quantities of documents; blend as many custodians as come to mind with a full page of search terms; flavor with animosity, resentment, suspicion and ill will; add a sauce of skillful advocacy; stir, cover, set over high heat, and bring to boil. Serves a district court 2-6 motions to compel discovery or for protection from it.”


You have got to love a judge with wit and wisdom like that. My only comment is that truly skillful advocacy here would include cooperation, and lots of it. The sauce added in that case would be sweet and sour, not just hot and spicy. It should not give a judge any indigestion at all, much less six motions. That is one reason why Electronic Discovery Best Practices (EDBP) puts such an emphasis on skillful cooperation. You are free to use this chart in any manner so long as you do not change it.

What is Reasonable?

Bagley shows that the dividing line between what is reasonable and thus acceptable efforts, and what is not, can often be difficult to determine. It depends on a careful evaluation of the facts, to be sure, but this evaluation in turn depends on many subjective factors, including whether one side or another was trying to cooperate. These factors include all kinds of prevailing social norms, not just cooperativeness. It also includes personal values, prejudices, education, intelligence, and even how the mind itself works, the hidden psychological influences. They all influence a judge’s evaluation in any particular case as to which side of the acceptable behavior line a particular course of conduct falls.

In close questions the subjectivity inherent in determinations of reasonability is obvious. This is especially true for the attorneys involved, the ones paid to be independent analysts and objective advisors. People can, and often do, disagree on what is reasonable and what is not. They disagree on what is negligent and what is not. On what is acceptable and what is not.

All trial lawyers know that certain tricks of argument and appeals to emotion can have a profound effect on a judge’s resolution of these supposedly reason-based disagreements. They can have an even more profound effect on a jury’s decision. (That is the primary reason that there are so many rules on what can and cannot be said to a jury.)

Study of Legal Psychology

Every good student of the law knows this, but how many attempt to study the psychological dynamics of persuasion? How many attempt to study perceptions of reasonability? Of cognitive bias? Not many, and there are good reasons for this.

First and foremost, few law professors exist who have this kind of knowledge. The only attorneys I know of with this knowledge are experienced trial lawyers and experienced judges. They know quite a lot about this, but not from any formal or systematic study. They pick up information, and eventually knowledge, of the psychological underpinnings of justice over many long years of practice. They learn about the psychology of reasonability through thousands of test cases. They learn what is reasonable by involvement in thousands of disputes. Whatever I know of the subject was learned that way, although I have also read numerous books and articles on the psychology of legal persuasion written by still more senior trial lawyers.

That is not to say that experience, trial and error, is the quickest or best way to learn these insights. Perhaps there is an even quicker and more effective way? Perhaps we could turn to psychologists and see what they have to say about the psychological foundations of perception of reasonability. After all, this is, or should be, a part of their field.

Up until now, not very much has been said from psychologists on law and reasonability, at least not to my knowledge. There are a few books on the psychology of persuasion. I made a point in my early years as a litigator to study them to try to become a better trial lawyer. But in fact, the field is surprisingly thin. There is not much there. It turns out that the fields of Law and Psychology have not overlapped much, at least not in that way.

Perhaps this is because so few psychologists have been involved with legal arguments on reasonability. When psychologists are in the legal system, they are usually focused on legal issues of sanity, not negligence, or on cases involving issues of medical diagnoses.

The blame for the wide gulf between the two fields falls on both sides. Most psychologists, especially research psychologists, have not been interested in the law and legal process. Or when they have, it has involved criminal law, not civil. See, e.g., Tunnel Vision in the Criminal Justice System (May 2010, Psychology Today). This disinterest has been reciprocal. Most lawyers and judges are not really interested in hearing what psychologists have to say about reasonability. They consider their work to be above such subjective vagaries.

Myth of Objectivity

Lawyers and judges consider reasonability of conduct to be an exclusively legal issue. Most lawyers and judges like to pretend that reasonability exists in some sort of objective, platonic plane of ideas, above all subjective influences. The just decision can be reached by deep, impartial reasoning. This is the myth of objectivity. It is an article of faith in the legal profession.

The myth continues to this day in legal culture, even though all experienced trial lawyers and judges know it is total nonsense, or nearly so. They know full well the importance of psychology and social norms. They know the impact of cognitive biases of all kinds, even transitory ones. As trial lawyers like to quip – What did the judge have for breakfast?

Experienced lawyers take advantage of these biases to win cases for their clients. They know how to push the buttons of judge and jury. See Cory S. Clements, Perception and Persuasion in Legal Argumentation: Using Informal Fallacies and Cognitive Biases to Win the War of Words, 2013 BYU L. Rev. 319 (2013). Justice is sometimes denied as a result. But this does not mean judges should be replaced by robots. No indeed. There is far more to justice than reason. Still, a little help from robots is surely part of the future we are making together.

More often than not the operation of cognitive biases happens unconsciously, without any puppet masters intentionally pulling the strings. There is more to this than just rhetoric and sophistry. Justice is hard. So is objective ratiocination.

Even assuming that the lawyers and judges in the know could articulate their knowledge of decisional bias, they have little incentive to do so. (The very few law professors with such knowledge do have an incentive, as we see in Professor Clements’ article cited above, but these articles are rare and too academic.) Moreover, most judges and lawyers are incapable of explaining these insights in a systematic manner. They lack the vocabulary of psychology to do so, and, since they learned by long, haphazard experience, that is their style of teaching as well.

Shattering the Myth

One psychologist I know has studied these issues and shared his insights. They are myth shattering to be sure, and thus will be unwelcome to some idealists. But for me this is a much-needed analysis. The psychologist who has dared to expose the myth, to lift the curtain, has worked with lawyers for over a decade on discovery issues. He has even co-authored a law review article on reasonability with two distinguished lawyers. Oot, Kershaw, Roitblat, Mandating Reasonableness in a Reasonable Inquiry, Denver University Law Review, 87:2, 522-559 (2010).

I am talking about Herbert L. Roitblat, who has a PhD in psychology. Herb did research and taught psychology for many years at the University of Hawaii. Only after a distinguished career as a research psychologist and professor did Herb turn his attention to computer search in general, and then ultimately to law and legal search. He is also a great admirer of dolphins.

Schlemiel and Schlimazel

Herb has written a small gem of a paper on law and reasonability that is a must read for everyone, especially those who do discovery. The Schlemiel and the Schlimazel and the Psychology of Reasonableness (Jan. 10, 2014, LTN) (link is to republication by a vendor without attribution). I will not spoil the article by telling you Herb’s explanation of the Yiddish terms, Schlemiel and Schlimazel, nor what they have to do with reasonability and the law, especially the law of spoliation and sanctions. Only a schmuck would do that. It is a short article; be a mensch and go read it yourself. I will, however, tell you the Huffington Post definition:

A Schlemiel is an inept clumsy person and a Schlimazel is a very unlucky person. There’s a Yiddish saying that translates to a funny way of explaining them both. A schlemiel is somebody who often spills his soup and a schlimazel is the person it lands on.

This is folk wisdom for what social psychologists today call attribution error. It is the tendency to blame your own misfortune on outside circumstances beyond your control (the schlimazel) and blame the misfortune of others on their own negligence (the schlemiel). Thus, for example, when I make a mistake, it is in spite of my reasonable efforts, but when you make a mistake it is because of your unreasonably lame efforts. It is a common bias that we all have. The other guy is often unreasonable, whereas you are not.

Herb Roitblat’s article should be required reading for all judges and lawyers, especially new ones. Understanding the many inherent vagaries of reasonability could, for instance, lead to a much more civil discourse on the subject of sanctions. Who knows, it could even lead to cooperation, instead of the theatre and politics we now see everywhere.

Hindsight Bias

Roitblat’s article contains a two-paragraph introduction to another important psychological factor at work in many evaluations of reasonability: hindsight bias. This has to do with the fact that most legal issues concern past decisions and actions that have gone bad. The law almost never considers good decisions, much less great decisions with terrific outcomes. Instead it focuses on situations gone bad, where it turns out that wrong decisions were made. But were they necessarily negligent decisions?

The mere fact that a decision led to an unexpected, poor outcome does not mean that the decision was negligent. But when we examine the decision with the benefit of 20/20 hindsight, we are naturally inclined towards a finding of negligence. In the same way, if the results prove to be terrific, the hindsight bias is inclined to perceive most any crazy decision as reasonable.

Due to hindsight bias, we all have, in Roitblat’s words:

[A] tendency to see events that have already occurred as being more predictable than they were before they actually took place. We over-estimate the predictability of the events that actually happened and under-estimate the predictability of events that did not happen.  A related phenomenon is “blame the victim,” where we often argue that the events that occurred should have been predicted, and therefore, reasonably avoided.

Hindsight bias is well known among experienced lawyers and you will often see it argued, especially in negligence and sanctions cases. Every good lawyer defending such a charge will try to cloak all of the mistakes as seemingly reasonable at the time, and any counter-evaluation as merely the result of hindsight bias. They will argue, for instance, that while it may now seem obvious that wiping the hard drives would delete relevant evidence, that is only because of the benefit of hindsight, and that it was not at all obvious at the time.

Good judges will also sometimes mention the impact of 20/20 hindsight, either on their own initiative, or in response to defense argument. See for instance the following analysis by Judge Lee H. Rosenthal in Rimkus v Cammarata, 688 F. Supp. 2d 598 (S.D. Tex. 2010):

These general rules [of spoliation] are not controversial. But applying them to determine when a duty to preserve arises in a particular case and the extent of that duty requires careful analysis of the specific facts and circumstances. It can be difficult to draw bright-line distinctions between acceptable and unacceptable conduct in preserving information and in conducting discovery, either prospectively or with the benefit (and distortion) of hindsight. Whether preservation or discovery conduct is acceptable in a case depends on what is reasonable, and that in turn depends on whether what was done–or not done–was proportional to that case and consistent with clearly established applicable standards.  [FN8] (emphasis added)

Judge Shira A. Scheindlin also recognized the impact of hindsight in Pension Committee of the University of Montreal Pension Plan, et al. v. Banc of America Securities, LLC, et al., 685 F. Supp. 2d 456 (S.D.N.Y. Jan. 15, 2010, as amended May 28, 2010) at pgs. 463-464:

While many treatises and cases routinely define negligence, gross negligence, and willfulness in the context of tortious conduct, I have found no clear definition of these terms in the context of discovery misconduct. It is apparent to me that these terms simply describe a continuum. FN9 Conduct is either acceptable or unacceptable. Once it is unacceptable the only question is how bad is the conduct. That is a judgment call that must be made by a court reviewing the conduct through the backward lens known as hindsight. It is also a call that cannot be measured with exactitude and might be called differently by a different judge. That said, it is well established that negligence involves unreasonable conduct in that it creates a risk of harm to others, but willfulness involves intentional or reckless conduct that is so unreasonable that harm is highly likely to occur. (emphasis added)

The relatively well-known backward lens known as hindsight can impact anyone’s evaluation of reasonability. But there are many other, less obvious psychological factors that can alter a judge or jury’s perception. Herb Roitblat mentions a few more, such as the overconfidence effect, where people tend to inflate their own knowledge and abilities, and framing, an example of cognitive bias where the outcome of questions is impacted by the way they are asked. The latter is one reason that trial lawyers fight so hard over jury instructions and jury interrogatories.


Many lawyers are interested in this law-psych intersection and the benefits that might be gained by cross-pollination of knowledge. I have a life-long interest in psychology, and so do many others, some with advanced degrees. That includes my fellow predictive coding expert, Maura R. Grossman, an attorney who also has a Ph.D. in Clinical/School Psychology. A good discovery team can use all of the psychological insights it can get.

The myth of objectivity and the “Reasonable Man” in the law should be exposed. Many naive people still put all of their faith in legal rules and the operation of objective, unemotional logic. The system does not really work that way. Outsiders trying to automate the law are misguided. The Law is far more than logic and reason. It is more than the facts, the surrounding circumstances. It is more than evidence. It is about people and by people. It is about emotion and empathy too. It is about fairness and equity. Its prime directive is justice, not reason.

That is the key reason why AI cannot automate law or legal decision-making. Judge Charles (“Terry”) Haight could be augmented and enhanced by smart machines, by AI, but never replaced. The role of AI in the Law is to improve our reasoning and minimize our schlemiel biases. But the robots will never replace lawyers and judges. In spite of the myth of the Reasonable Man, there is far more to law than reason and facts. I for one am glad about that. If it were otherwise the legal profession would be doomed.

e-Discovery Team’s Best Practices Education Program

May 8, 2016


EDBP | Mr. EDR | Predictive Coding 3.0 | 59 TAR Articles | Doc Review Videos



e-Discovery Team Training

Information → Knowledge → Wisdom

Education is the clearest path from Information to Knowledge in all fields of contemporary culture, including electronic discovery. The above links take you to the key components of the best-practices teaching program I have been working on since 2006. It is my hope that these education programs will help move the Law out of the dangerous information flood, where it is now drowning, to a safer refuge of knowledge. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers; and How The 12 Predictions Are Doing That We Made In “Information → Knowledge → Wisdom.” For more of my thoughts on e-discovery education, see the e-Discovery Team School Page.

The best practices and general educational curriculum that I have developed over the years focuses on the legal services provided by attorneys. The non-legal, engineering and project management practices of e-discovery vendors are only collaterally mentioned. They are important too, but students have the EDRM and other commercial organizations and certifications for that. Vendors are part of any e-Discovery Team, but the programs I have developed are intended for law firms and corporate law departments.

The e-Discovery Team program, both general educational and legal best-practices, is online and available 24/7. It uses lots of imagination, creative mixes, symbols, photos, hyperlinks, interactive comments, polls, tweets, posts, news, charts, drawings, videos, video lectures, slide lectures, video skits, video slide shows, music, animations, cartoons, humor, stories, cultural themes and analogies, inside baseball references, rants, opinions, bad jokes, questions, homework assignments, word-clouds, links for further research, a touch of math, and every lawyer’s favorite tools: words (lots of them), logic, arguments, case law and precedent.

All of this is to try to take the e-Discovery Team approach from mere information to knowledge. In spite of these efforts, most of the legal community still does not know e-discovery very well. What they do know is often misinformation. Scenes like the following in a law firm lit-support department are all too common.

The e-Discovery Team’s education program has an emphasis on document review. That is because the fees for lawyers reviewing documents are by far the most expensive part of e-discovery, even when contract lawyers are used. The lawyer review fees, and review supervision fees, including SME fees, have always been much more costly than all vendor costs and expenses put together. Still, the latest AI technologies, especially active machine learning using our Predictive Coding 3.0 methods, are now making it possible to significantly reduce review fees. We believe this is a critical application of best practices. The three steps we identify for this area in the EDBP chart are shown in green, to signify money. The reference to C.A. Review is to Computer Assisted Review or CAR, using our Hybrid Multimodal methods.



Predictive Coding 3.0 Hybrid Multimodal Document Search and Review

Our new version 3.0 techniques for predictive coding make it far easier than ever before to include AI in a document review project. The secret control set has been eliminated; so too have the seed set and the practice of SMEs wasting their time reviewing random samples of mostly irrelevant junk. It is a much simpler technique now, although we still call it Hybrid Multimodal.

Hybrid is a reference to the Man/Machine interactive nature of our methods. A skilled attorney uses a type of continuous active learning to train an AI to help them to find the documents they are looking for. This Hybrid method greatly augments the speed and accuracy of the human attorneys in charge. This leads to cost savings and improved recall. A lawyer with an AI helper at their side is far more effective than lawyers working on their own. This means that every e-discovery team today could use a robot like Kroll Ontrack’s Mr. EDR to help them to do document review.
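To make the train-rank-review loop concrete, here is a deliberately tiny, hypothetical sketch in Python. None of it comes from Mr. EDR or any actual review platform; the documents, the bag-of-words scoring, and the similarity measure are all invented for illustration of the continuous-active-learning idea:

```python
# Hypothetical sketch of continuous active learning in document review:
# attorneys code a seed of documents, the machine ranks the uncoded pile by
# similarity to the relevant examples, the top-ranked documents go back to
# the attorneys for review, and the cycle repeats with the new coding.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector for one document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(doc, relevant, irrelevant):
    """Net similarity to the attorney's relevant vs. irrelevant coding."""
    return (sum(cosine(bow(doc), bow(d)) for d in relevant)
            - sum(cosine(bow(doc), bow(d)) for d in irrelevant))

# Documents the attorney has already coded (the human side of the Hybrid):
relevant = ["merger negotiation pricing terms", "quarterly pricing strategy memo"]
irrelevant = ["office holiday party planning", "fantasy football league standings"]

# The machine ranks the uncoded pile; top-ranked documents are reviewed next.
uncoded = ["draft agreement on pricing terms", "friday lunch menu"]
for doc in sorted(uncoded, key=lambda d: score(d, relevant, irrelevant), reverse=True):
    print(f"{score(doc, relevant, irrelevant):+.3f}  {doc}")
```

In production systems the ranking comes from a trained classifier over the entire collection, and keyword, concept, and similarity searches feed additional documents into the training rounds, but the feedback loop has the same shape.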

Multimodal is a reference to the use of a variety of search methods to find target documents, including, but not limited to, predictive coding type ranked searches. We encourage humans in the loop running a variety of searches of their own invention, especially at the beginning of a project. This always makes for a quick start in finding relevant and hot documents. Why the ‘Google Car’ Has No Place in Legal Search. The multimodal approach also makes for precise, efficient reviews with broad scope. The latest active machine learning software, when fully integrated with a full suite of other search tools, is attaining higher levels of recall than ever before. That is one reason Why I Love Predictive Coding.

I have found that Kroll Ontrack’s EDR software is ideally suited for these Hybrid, Multimodal techniques. Try using it on your next large project and see for yourself. The Kroll Ontrack consultant specialists in predictive coding, Jim and Tony, have been trained in this method (and many others). They are well qualified to assist you every step of the way and their rates are reasonable. With you calling the shots on relevancy, they can do most of the search work for you and still save your client money. If the matter is big and important enough, then, if I have a time opening, and it clears my firm’s conflicts, I can also be brought in for a full turn-key operation. Whether you want to include extra time for training your best experts is your option, though it is our preference.



Embrace e-Discovery Team Education to Escape Information Overload


2015 e-Discovery Rule Amendments: Dawning of the “Goldilocks Era”

November 11, 2015

This blog presents my summary, analysis, and personal editorial comments on the 2015 Amendments to the Federal Rules of Civil Procedure that pertain to electronic discovery. The new rules go into effect on December 1st, 2015.

Overall Impressions

Overall the new Rules will be helpful, especially to newbies, but hardly the godsend that many hope for. The amendments will have very little impact on my legal practice. But that is only because the doctrines of proportionality and cooperation that the Amendments incorporate are already well-established in my firm. Many experienced attorneys say the same thing. The rule changes may make it a little easier to explain our positions to opposing counsel, and the court, but that is all.

Still, these rule amendments were not designed for experts. They were written for the vast majority of U.S. lawyers who are still struggling to do discovery in the modern age. Our profession remains embarrassingly computer challenged, so these rules are a necessary and good thing.

My only regret is that new Rule 37(e) may make it a little more difficult to catch and punish the few bad guys out there who try to cheat by destroying evidence. Still, we will get them. Fraudsters are never as smart as they think they are. When judges get the drift of what is happening, they will work around vagaries in new Rule 37(e) and send them packing. I am not overly concerned about that. Experienced federal judges can sniff out fraud a mile away and they do not hesitate to sanction bad faith attempts to game the system. We have defeated plenty of spoliating plaintiffs under the old rules and I am confident we will continue to do so under the new. The protests of some commentators on this issue seem a bit over-stated to me, although I do agree that the wording of new Rule 37(e) leaves much to be desired.

An Overly-Hard-Fought Victory for Proportionality

The 2015 FRCP Rules Amendments were the most politicized and hard-fought in history. E-Discovery was the focus of all the battles. (The other changes are not controversial, and not really that important, and will not be addressed here.) Large corporate and plaintiffs’ attorney groups lobbied the supposedly independent Rules Committee for years. The well-funded defense Bar largely won, but the plaintiffs’ Bar still retained some bite and won several small victories. It was classic lobbying at its worst by both sides.

The Rules Committee should never have let itself get sucked into that kind of politics. They meant well, I’m sure, but they ended up with way too many conferences and bickering everywhere you looked. I personally got sick of it and cut way down on my schedule. I even quit one well-known group that allowed this infection to spoil its true purpose. Sad, but life is short. It is full of choices on what to do and who to do it with. I decided not to waste my time with silly games, nor watch the fall of a once great dynasty. I am consoled by the words of Churchill: “History will be kind to me for I intend to write it.”

Bottom line, partisan politics for court rule making must end with these 2015 Amendments. The judiciary and Rules Committee should be above politics. Make all of your meetings closed-door if you have to, but stop the circus. Hang your heads and learn from what happened.

As a result of the contentiousness of the proceedings, the final wording of most of the rules represents sausage-making at its worst. The language is just filled with compromises. Years of interpretation litigation are assured. Maybe that can never be avoided, but certainly a better job could have been done by scholars working above the fray.

Proportionality Doctrine and the Beginning of the Goldilocks Era

In spite of the sordid background, two high-minded themes emerged, much like flowers growing out of manure. The primary theme of all of the Amendments is clearly Proportionality. The secondary theme is an attempt to further attorney Cooperation through communication. Both doctrines were promoted by the late, great founder of The Sedona Conference, Richard Braman.

The victory of proportionality proponents, myself included, may well usher in a new Goldilocks era for the Bar. Everyone who bothers to read the rules will know that they must look for discovery that is not too big, and not too small, but is just right. The just right Goldilocks zone of permitted discovery will balance out well-worn considerations, including costs, which are outlined by the rules.

This is not really new, of course. Old Rule 26(b)(2)(C) had the same intent for decades: to avoid undue burden by balancing it against benefits. But at least now the proportionality concerns to avoid undue expenses for discovery are front and center in all discovery disputes. This forces judges to be more cost-conscious, and not to allow liberal discovery regardless of costs, delays and other burdens.

I have been promoting the proportionality doctrine at the heart of these amendments since at least 2010. So too did Richard Braman and his Sedona Conference. I recommend to you their latest version of the Commentary on Proportionality in Electronic Discovery (2013) that can be downloaded without cost at

For some of my articles on or featuring proportionality, please see:

Some contend that the changes in the rules embodying proportionality will make a big difference. Many long-term observers say that there are no real changes at all; it is just window dressing, so nothing will change. I think a “just right” Goldilocks-type analysis suggests that the truth is somewhere in the middle, but inclined towards the “little change” side. See: Losey, R., One Man’s Trash is Another Man’s Evidence: Why We Don’t Need New Rules, We Need Understanding, Diligence, and Enforcement of Existing Rules (e-Discovery Team, 9/6/11) (criticizing the drive to solve the problems of e-discovery by just adding more rules, and suggesting instead that education and enforcement of existing rules is a better response).

Still, although a small change, and a sausage-like one at that, it is an important change. It should help all fair-minded attorneys to better serve their clients by protecting them from undue burdens in discovery, and also from undue burdens in preservation.

We will all be arguing about the Goldilocks Zone now, where the burden is just right, is proportional, considering the criteria stated in the rules and the facts of the case. One size fits all is a thing of the past, especially when the one size is save everything and produce everything. Papa Bear’s big chair is way too large for most cases. And, small chair or not, every litigant is entitled to a seat at the discovery table, even a trespasser like Goldilocks.

New Rule 26(b)(1) – Discovery Scope and Limits

Here is the new language of 26(b)(1), which serves as the key provision in the 2015 Amendments implementing proportionality. Note that I have added the bullet points and the bold emphasis here for clarity of reference. The original rules are, as usual, just one long run-on sentence.

Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering

  • the importance of the issues at stake in the action,
  • the amount in controversy,
  • the parties’ relative access to relevant information,
  • the parties’ resources,
  • the importance of the discovery in resolving the issues, and
  • whether the burden or expense of the proposed discovery outweighs its likely benefit.

Information within this scope of discovery need not be admissible in evidence to be discoverable.

The first big change to 26(b)(1) is not seen here because it is an omission. The scope is now limited to “any party’s claim or defense.” Previously a court could expand the scope of relevance to the “subject matter” of the case, not just the specific claims made. This expansion was supposed to require a showing of good cause but, in practice, the condition was given little weight by judges and poorly understood by the Bar. Full subject-matter discovery was commonly allowed with little or no real showing of cause. Often responding parties would simply capitulate and not demand a good cause showing. This could, in my experience, often lead to greatly expanded discovery. Now relevance cannot be expanded beyond the actual claims made. This is a big improvement.

I am proud to say that this is a revision that I suggested to the Committee for adoption. I accomplished this without lobbying. My one direct conversation with the big-name Committee chair at a Bar event was about two minutes long. I outlined the idea and suggested the Committee at least consider it. The elevator-speech proposal was instantly rebuffed by her. She smiled and said that it had been considered many times before over the years and simply was not “politically doable.” Silly me, to resurrect such an old, stale idea.

Still, I had a beginner’s mind on rule changes. I was convinced we needed to tighten the spigot of relevance to help counteract the information deluge. I updated my prior blog on the proposal, added some more legal citations and analysis to make it more scholarly, and put forth my best argument. Rethinking Relevancy: A Call to Change the Rules to Narrow the Scope of ESI Relevance (e-Discovery Team, 1/24/2011). That’s it. I wrote a 3,700 word article. Nothing more. I knew the Committee would at least know about the article, and maybe some would read it, as I knew that some of them were regular readers.

Since the proposal had merit, as far as I was concerned, that was all that was required. No politics. No lobbying, just one chat where the chair said no way, and then submission of an article making my case for elimination of “subject matter” discovery. That was all that was necessary. It worked. That is how it should work. I was actually completely surprised to see the elimination of the old subject matter provisions when an early draft was published by the Rules Committee. All the Committee Note says about this change is as follows:

The amendment deletes the former provision authorizing the court, for good cause, to order discovery of any matter relevant to the subject matter involved in the action. Proportional discovery relevant to any party’s claim or defense suffices. Such discovery may support amendment of the pleadings to add a new claim or defense that affects the scope of discovery.

The attention and politics of the Committee were focused on the new wording added to Rule 26(b)(1), which outlined the six criteria to consider to determine proportionality:

  1. the importance of the issues at stake in the action,
  2. the amount in controversy,
  3. the parties’ relative access to relevant information,
  4. the parties’ resources,
  5. the importance of the discovery in resolving the issues, and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.

Not sure why this was such a big deal for them, because in fact none of this language is new at all. All the Committee ended up doing was to use the exact same language that appeared in Rule 26(b)(2)(C) and then add “the parties’ relative access to relevant information.” That factor was a last-minute addition, and all it does is import the accessibility provision already in Rule 26(b)(2)(B) that was added in the 2006 Amendments. You would think the Committee would have improved upon the language to give the Bench and Bar more guidance. Still, the importance of the proportionality requirement is intended to be elevated by this move to the defined scope of relevance section, the section where discoverability is limited to information that is proportional to the needs of the case.

Here is the Committee Note explaining their revision, one that many now seize upon as some bold godsend:

The considerations that bear on proportionality are moved from present Rule 26(b)(2)(C)(iii). Although the considerations are familiar, and have measured the court’s duty to limit the frequency or extent of discovery, the change incorporates them into the scope of discovery that must be observed by the parties without court order.

Dear Committee, they are more than just familiar as your Note says, they are exactly the same! Please. Many had hoped for more, myself included. Oh well, what do you expect from political sausage?

In any in-person presentation of these rules I would now go through how these factors play into discovery in various types of cases. In my firm I discourse at length on how this plays out in employment cases.

  1. the importance of the issues at stake in the action,
  2. the amount in controversy,
  3. the parties’ relative access to relevant information,
  4. the parties’ resources,
  5. the importance of the discovery in resolving the issues.

There is really nothing new here except the third point about relative access, and, in my opinion, that last-minute addition by the Committee adds nothing. It was already in 26(b)(2)(B). I have been applying these factors and this analysis for over thirty-five years. I have been calling it proportionality for over five years.

In most cases, but certainly not all, the main factor that comes into play is expense. Does the burden or expense of the proposed discovery outweigh its likely benefit? And what is the real, non-inflated amount in controversy? The main change the proportionality-labeled rules now force is a shift in thinking: to get the Bench and Bar to look at discovery as the tradeoff it has always been, and to get everyone thinking proportionally.

The final relevant change to 26(b)(1) already seems to be widely misunderstood by the Bar, namely the rewording of provisions in the rule pertaining to discovery and admissibility. The old rule, which many lawyers disliked for good reason, said: “Relevant information need not be admissible at the trial if the discovery appears reasonably calculated to lead to the discovery of admissible evidence.” It is true that this sentence was deleted, but it is not true that discovery is limited to admissible evidence. I have already seen at least one CLE, sponsored by the ABA no less, that incorrectly states that the old standard is dead. It is not. Weakened perhaps, but not gone. Remember, we are dealing with politics again and compromise language. The Plaintiffs’ Bar managed to keep the idea alive, but the sentence was modified and its placement shuffled. Rule 26(b)(1) still says:

Information within this scope of discovery need not be admissible in evidence to be discoverable.

Here is how the Committee Note explains this revision:

The former provision for discovery of relevant but inadmissible information that appears reasonably calculated to lead to the discovery of admissible evidence is also amended. Discovery of nonprivileged information not admissible in evidence remains available so long as it is otherwise within the scope of discovery. Hearsay is a common illustration. The qualifying phrase — “if the discovery appears reasonably calculated to lead to the discovery of admissible evidence” — is omitted. Discovery of inadmissible information is limited to matter that is otherwise within the scope of discovery, namely that which is relevant to a party’s claim or defense and proportional to the needs of the case. The discovery of inadmissible evidence should not extend beyond the permissible scope of discovery simply because it is “reasonably calculated” to lead to the discovery of admissible evidence.

I predict that we will be litigating that oh-so-subtle distinction for years. It remains to be seen what the Magistrates who usually rule on such issues will make of this change. It also remains to be seen what the practical impact of this change will be. I think that the “claims made” versus “subject matter of the litigation” change will have a far greater impact.

What is a Proportional Spend on e-Discovery?

Assuming that monetary factors are the primary considerations in a case, how much should be spent on electronic discovery? Do not just brush the question aside by saying every case is different. There are many similarities too. The longer you practice, the more aware you become of the recurring patterns. What I want the Bench and Bar to do is start thinking proportionally. To start thinking from an overall budgetary perspective.

Consider this hypothetical, where, all other factors being equal, money is the primary criterion used to evaluate proportionality.

  • Assume a case with a real-world true value of $1,000,000.
  • What would be a proportional total cost of defense?
    • Assume $381,966 (@38%)
  • What would be a proportional total cost of all discovery in such a case?
    • Assume $145,898 (@14.6%)
  • Now for the punchline, what would be a proportional cost of e-discovery in a case like that?
    • Assume $55,728 (@5.6%)

Where am I getting these numbers?

In part I am getting these dollar amounts from my 35 years of experience as an attorney in private practice handling a wide variety of commercial litigation cases, mainly large ones, but also many smaller ones, and lately including many, many employment law cases.

These numbers may not hold true in single plaintiff discrimination cases, or other small matters. But there may be some general truth here. You can see that from the fact that most Bar associations allow 40% recovery for fees in contingency cases. That compares to the 38% proportional expense assumed here: $381,966 in total fees and costs for a million dollar case (remember, assume true settlement value here, after weighing and discounting risks, not b.s. positions or demands). What do you think? Is approximately 38% of the true case value a proportional total expense in complex litigation? Does 38% seem appropriate? Too high, too low? What do you think is a proportional percentage? Please leave your comments below or send me a private email.

What about my assumption of a total cost for all discovery of $145,898 in a case where total fees are $381,966? Is it reasonable, proportional, to assume a spend of $145,898 for discovery of all types, including e-discovery? That represents around 14.6% of the total amount in controversy. Is that number too low? Too high?

Is it proportional to assume a spend of around 5.6% of the total amount in controversy for all e-discovery related activities in a case? Under this million-dollar scenario that would be $55,728. Again, what do you think? And a different but related question: what has your experience been?

Now consider the bigger question, does a general metric for Proportionality of expenditures to true case value make sense in the law? Be it 38% or whatever?

Assuming it makes sense to talk of a general ratio for proportionality: is roughly 6% of the total value of a case a reasonable amount for a party to pay for ESI discovery alone? If not, higher or lower? What ranges are you seeing in practice? I am seeing a wide variety, but I think that is because we are still in an early stage of maturity, and that it will eventually settle down into a pattern.

Did you know that Intel has gone on record many times as reporting that its e-Discovery spend in large cases averages 80% of its total spend for cost of defense? Since Intel says an average of 80% of its total litigation costs go to e-Discovery, if it spent $400,000 to defend a $1 Million case, that would be a spend of $320,000 on e-discovery, which is 32% of total case value, not 6%. Does this seem fair? Appropriate? Proportional?

I can see patterns and cost ranges, but at the same time I see outliers in cost, especially on the high end. In my experience these are usually due to external factors such as extreme belligerence by one side or the other, attorney personality disorders, or intense spoliation issues. Sometimes it may just have to do with information management issues. But if you discount the low and high outliers, a pattern starts to emerge. Hopefully someday an objective, systematic study will be done. My observations so far are just ad hoc.

Might the Golden Ratio Help Us to Navigate the Goldilocks Zone?

Aside from my long experience with what lawsuits cost, and the experience of others, the primary source of my hypothetical numbers here is from a famous ratio in math and art known as the Golden Ratio or Golden Mean: 1.61803399 to 1.0.


I came up with the numbers in the hypothetical by use of this ratio:

$1,000,000 – $381,966 (38.2%) – $145,898 (14.6%) – $55,728 (5.6%)

Each number is about 38.2% (0.3819661) of the one before it, and in this manner the sequence follows the proportion of the Golden Ratio.
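For those who like to check the arithmetic, the cascade can be reproduced in a few lines of Python. This is just a sketch of the hypothetical; the dollar figures are the exact Golden Ratio tiers, rounded to the dollar:

```python
# Golden Ratio cascade: each tier is 1/phi**2 (about 38.2%) of the one before.
phi = (1 + 5 ** 0.5) / 2          # the Golden Ratio, 1.61803398...
ratio = 1 / phi ** 2              # 0.38196601...

case_value = 1_000_000
total_defense = case_value * ratio         # total cost of defense
all_discovery = total_defense * ratio      # total cost of all discovery
e_discovery = all_discovery * ratio        # cost of e-discovery alone

for label, amount in [("Total defense", total_defense),
                      ("All discovery", all_discovery),
                      ("e-Discovery", e_discovery)]:
    print(f"{label}: ${amount:,.0f} ({amount / case_value:.1%})")
```

Running this prints the three tiers, about $381,966 (38.2%), $145,898 (14.6%), and $55,728 (5.6%) of the million-dollar case value.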

The Golden Ratio is mathematically defined as the division where a+b is to a as a is to b, as shown below. In other words, “two quantities are in the golden ratio if the ratio of the sum of the quantities to the larger quantity is equal to the ratio of the larger quantity to the smaller one.”


In Math this is known as PHI – Φ.
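For readers who prefer to see it computed rather than derived, here is a quick numerical check of the defining property (a sketch; the variable names are mine):

```python
import math

# phi is the positive root of x**2 = x + 1
phi = (1 + math.sqrt(5)) / 2
print(phi)  # 1.618033988749895

# Defining property: (a + b) / a == a / b when a / b == phi
a, b = phi, 1.0
assert math.isclose((a + b) / a, a / b)

# The complementary split: about 61.8% to 38.2%
print(f"{1 / phi:.1%} / {1 / phi ** 2:.1%}")  # 61.8% / 38.2%
```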

It does not matter if the math confuses you, just know that this proportion has been considered the perfect ratio to evoke feelings of beauty and contentment for thousands of years. It is well-known in all of the Arts, including especially Painting, Music and Architecture. It is the division that feels right, seems beautiful, and creates a deep intuitive sense of perfect division. It still does. Just look at the designs of most Apple products. This ratio is also found everywhere in nature, including the human body and structure of galaxies. It seems to be hardwired into our brains and all of nature.

I put together the two videos below to illustrate what I mean. There is far more to this than a math trick. The Golden Ratio seems to embody a basic truth. Every educated person should at least be familiar with this interesting phenomenon.


Perhaps the idea of perfect proportionality in art, math and science may also apply to law? Maybe it is a basic component of human reasonability and fairness? What do you think?

After giving a presentation much like this at a CLE, I asked whether the Golden Ratio in art, math and science might also apply to the law. I wanted to know what everyone thought and to get some interaction going. It was a daylong conference sponsored by Capital One and dedicated solely to the topic of Proportionality. Maura Grossman and I co-chaired the event in the Fall of 2010, when the doctrine was still new. Everyone had a clicker and answered yes or no to the question.

There was an eerie silence in the large auditorium after the results were quickly tabulated and shown on screen. Do you know what the proportion of yes and no answers was? 38% said No, and 62% said Yes. The Golden Ratio came through in the opinion of the 200 or so attorneys and judges in attendance. You cannot make this stuff up. At first I thought maybe the boys in the tech booth were messing with me, but no. It was automatic and they were not paying attention. It was real. It was beautiful.

Ask Judge Facciola about it sometime. He was there, and right after my opening he spoke of the Golden Ratio as used in music. I remember he played an example from Bach. Judge Peck, Judge Mass and Judge Hedges were also there. So too was Judge Grimm, but by video appearance. Jason Baron, Conor Crowley, and Patrick Oot were also in attendance. So, I have a lot of witnesses to confirm what happened. It was a landmark event in many ways. One I will never forget for a whole host of reasons.


A natural ratio clearly exists for proportionality in nature, art and math. I am not saying that this 38/62 ratio, 1.61803399, should apply in the law too as a general guide for proportionality. But it might. It is at least something to think about. What do you think? Again, let me know.

FYI, I have written about this before with several examples, but never before described the Capital One conference and spooky Golden Ratio vote. My Basic Plan for Document Reviews: The “Bottom Line Driven” Approach (see especially footnote 15 and supporting text where I compare this drill-down proportionality analysis to Russian nesting dolls).

Change to Rule 37(e)

Rule 37(e) was completely rewritten and was the focus of most of the politics. That explains why the wording is such a mess. The Sedona Conference recommendations on how to revise the rules were largely ignored.

A large part of the politics here, from what I could see, was to counteract Judge Shira Scheindlin (and a few other powerful judges, mostly in the SDNY) who interpreted Second Circuit law to assert the right to impose case-dispositive sanctions on the basis of gross negligence alone. See: Pension Committee of the University of Montreal Pension Plan v. Banc of America Securities, LLC, 685 F.Supp.2d 456, 465 (S.D.N.Y. 2010). Many in the defense Bar argued that there was a dangerous conflict in the Circuits, but since any large company can get sued in New York City, the SDNY views took practical priority over all conflicting views. They complained that the SDNY outlier views forced large corporations to over-preserve in a disproportionate manner.

Naturally Judge Scheindlin opposed these revisions and vigorously articulated the need to protect the judicial system from fraudsters. She proposed alternative language. The plaintiffs’ Bar stood behind her, but they lost. Sedona tried to moderate and failed, for reasons I would rather not go into.

Other Circuits outside of New York make clear that case dispositive sanctions should only be imposed if INTENTIONAL or BAD FAITH destruction of evidence was proven. Many defense Bar types thought that this distinction with GROSS NEGLIGENCE was a big deal. So they fought hard and now pat themselves on the back. I think their celebration is overblown.

I personally do not think the difference between Bad Faith and Gross Negligence is all that meaningful in practice. For that reason I do not think that this rule change will have a big impact. Still, it is likely to make it somewhat easier for parties accused of spoliation to defend themselves and avoid sanctions, especially strong sanctions. If you think this is a good thing, then celebrate away. I don’t. The reality is this revision may well harm parties on both sides of the v., defendants and plaintiffs alike. I know we now see many Plaintiffs destroying evidence, especially cloud emails and Facebook posts. I expect they will rely upon this rule change to try to get away with it.

We will be litigating these issues for years, but as mentioned, I have faith in our federal judiciary. No matter what the rules say, if they sniff out fraud, they will take appropriate action. The exact wording of the rules will not matter much. What was once labeled gross negligence will now be called bad faith. These concepts are flexible, and the entire pursuit of fraud like this is intensely fact specific.

I think the best thing to do at this point is to set out Rule 37(e) in full, as it bears repeated reading:

If electronically stored information that should have been preserved in the anticipation or conduct of litigation is lost because a party failed to take reasonable steps to preserve it, and it cannot be restored or replaced through additional discovery, the court:

(1) upon finding prejudice to another party from loss of the information, may order measures no greater than necessary to cure the prejudice; or

(2) only upon finding that the party acted with the intent to deprive another party of the information’s use in the litigation may:

(A) presume that the lost information was unfavorable to the party;

(B) instruct the jury that it may or must presume the information was unfavorable to the party; or

(C) dismiss the action or enter a default judgment.

I could talk at length about each paragraph, but this article is already overly long. I will only pause to note how the rule now makes proportionality expressly relevant to preservation for the first time. Before this change our primary authority was the order of the former Rules Committee chair, Judge Lee Rosenthal, in Rimkus v. Cammarata, 688 F. Supp. 2d 598 (S.D. Tex. 2010):

Whether preservation or discovery conduct is acceptable in a case depends on what is reasonable, and that in turn depends on whether what was done – or not done – was proportional to that case and consistent with clearly established applicable standards.

I strongly recommend that you read the extensive Committee Note that tries to explain this rule. The Notes can be cited and are often found to be persuasive, although, of course, never technically binding authority. Still, until we have case law on Rule 37(e), the Notes will be very important.

Minor Changes to Rules 26 & 34

Under amended Rule 26(d)(2) a Request to Produce can be delivered any time more than 21 days after service of process. You do not have to wait for the 26(f) conference. Under amended Rule 34(b)(2)(A) the response to such an early RFP is not due until 30 days after the parties’ first Rule 26(f) conference. This early-delivery change was designed to encourage meaningful ESI discussion at 26(f) meet and greets.

Rule 34(b)(2)(B) was modified to require specific objections to request categories and to “state whether any responsive materials are being withheld on the basis of that objection.” No duh, right? Yet I have seen this time and again: an objection is stated where no documents exist to begin with. Why?

Rule 26(f) was modified to include discussion of preservation, but also to include discussion of Evidence Rule 502 – Clawback Orders.

Change to Rule 16(b)

New language was added to Rule 16(b) as follows:

Scheduling Order may …

(iv) “include any agreements reached under Federal Rule of Evidence 502.”

(v) “direct that before moving for an order relating to discovery, the movant must request a conference with the court.”

Everyone is encouraged to enter into clawback agreements and 502(d) orders.

Change to Rule 1: An Already Great, But Underused Rule, Is Now Even Better

I saved the best rule change for last, the change to Rule 1.0. Judge Waxse, the great promoter of Rule 1.0, should be happy (he often is anyway). Rule 1.0 FRCP – the JUST, SPEEDY AND INEXPENSIVE rule – is one of the most important rules in the book and yet, at the same time, one of the most overlooked and under-cited. All of the e-discovery knowledgeable judges, not just David Waxse, can and do wax on and on about this rule.

The 2015 Amendments are designed to further strengthen this important rule.  Rule 1 has long required judges to “construe and administer” all of the other rules in such a way as to not only secure “justice,” as you would expect, but also to secure “speedy and inexpensive” determinations. Surprised?

This dictate has long been an important policy for rule construction. It has been helpful to those who used it to oppose expensive, burdensome e-discovery. Nothing drives up expense more than “discovery about discovery” or detours into whether efforts to preserve, search or produce ESI have been extensive enough. Courts may allow this kind of expensive discovery if justice requires it, but only after balancing the other two equally important considerations of speed and expense. Here we have another proportionality analysis, one that applies indirectly to every other rule in the book.

The 2015 Amendments enlarged and strengthened the “just, speedy and inexpensive” dictates by making it clear that this duty not only applies to the court, the judges, but also to the parties and their attorneys. Moreover, the revised rule not only requires judges and parties to construe and administer the rules to attain just, speedy and inexpensive determinations, it also requires them to employ all of the other rules in this manner. The revised rule 1.0 reads as follows (new language in bold):

They (all of the rules) should be construed, administered, and employed by the court and the parties to secure the just, speedy, and inexpensive determination of every action and proceeding.

These revisions squarely place the burden of efficient litigation upon counsel, as well as the court. It is now a clear rule violation for an attorney to demand justice, no matter what it costs or how long it takes. All three criteria must be considered and implemented.

The Rule 1.0 change perfectly frames all of the other 2015 Amendments on proportionality and cooperation in ESI discovery and preservation.  As the Rules Committee Commentary to the Rule 1.0 amendments explains:

Most lawyers and parties cooperate to achieve these ends. But discussions of ways to improve the administration of civil justice regularly include pleas to discourage overuse, misuse, and abuse of procedural tools that increase cost and result in delay. Effective advocacy is consistent with — and indeed depends upon — cooperative and proportional use of procedure.

Rule 1.0 now stands as a powerful shield against any counsel’s improper use of discovery as a weapon. Cost and delay must always be taken into consideration. Every motion for protective order should begin with a recitation of Rule One.


Update all of your discovery briefs to incorporate the new rules. Think proportional and act proportional. Sherlock Holmes was famous for his 7% solution; try mixing up your own 5.6% solution. That would be beautiful wouldn’t it, to only spend $5,600 total on e-Discovery in a $100,000 case? Try to come up with an overall budget. Figure out what you think is proportional to the case. Do not wait to respond to excessive demands. Be proactive. How many custodians are proportional? What is an appropriate date range? What ESI is really important and necessary to the case? How many files need be reviewed under a realistic cost-benefit analysis? What are the benefits? The burdens?
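One way to operationalize that advice is a back-of-the-envelope budget check before discovery begins. The `ediscovery_budget` function, the 5.6% figure, and the $1-per-document review cost below are my own hypothetical illustration, not numbers drawn from the rules or case law:

```python
# Hypothetical proportionality budget sketch: cap e-discovery spend at a
# chosen fraction of realistic case value, then see roughly how many
# documents that budget supports at an assumed per-document review cost.


def ediscovery_budget(case_value, budget_pct=0.056, cost_per_doc=1.00):
    """Return (budget, max_docs_reviewable) for a proportional spend.

    budget_pct and cost_per_doc are illustrative assumptions only;
    real numbers come from vendor quotes and your case assessment.
    """
    budget = round(case_value * budget_pct, 2)
    max_docs = int(budget // cost_per_doc)
    return budget, max_docs


if __name__ == "__main__":
    budget, docs = ediscovery_budget(100_000)
    print(f"${budget:,.0f} budget, about {docs:,} documents at $1/doc")
```

The point is not the particular percentages; it is having a defensible number in hand before the 26(f) conference, so the benefit/burden argument rests on arithmetic rather than adjectives.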

Talk about true case value with opposing counsel. Never bad mouth your client, but be honest. Get beyond the b.s. and posturing that does nothing but cause delay and expense. That is the only way your proportionality discussions will get real. The only way the judge will ever see things your way.

Other side won’t cooperate? Dealing with inept phonies? Have these discussions with the judge. Ask for a 16(b) conference to work out disagreements that surface in the 26(f) conference. Most judges have a pretty good feel for what certain kinds of cases are usually worth. Have the wake-up call early and try to save your client money. Analyze and argue benefit/burden. Also, be real and do not exaggerate what your e-discovery expenses will be. Back up your estimates with realistic numbers. Get vendors or specialists to help you.

All of this means that you must front-end your e-discovery work. It should come right after the retainer clears. The new Rule 37(e) is not a free pass to let up on preservation efforts or data collection. Find out what your problems are, if any, and talk about them asap. Bring them to the attention of the judge. Show that you are in good faith. The law never demands perfection, but does demand honest, reasonable efforts.

Make your discovery plan early. What do you want the other side to produce? Be specific. Have concrete discussions at the 26(f). The judges are getting fed up with drive-by meet and greets. It is dangerous to put off these discussions. Arrive at a fair balance between risk mitigation and cost control and move things along, counsel. Speed counts, right up there with expense and justice. Your clients will appreciate that too.

Use honesty and diligence to navigate your way to the Goldilocks zone. Steer with realistic analysis. Be driven not only by the desire for justice, but also for quickness and sparse expense. Learn the new analytics, the new language and concepts of proportion. Master these new rules, as well as the e-discovery rules that remain from the 2006 Amendments, and you will prosper in the new Goldilocks Era.
