Document Review and Proportionality – Part One

March 18, 2018

In 2013 I wrote a law review article on how the costs of document review could be controlled using predictive coding and cost estimation. Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data, 26 Regent U. Law Review 1 (2013-2014). Today I write on how those costs can be controlled in document review, even without predictive coding. Here is the opening paragraph of my earlier article:

The search of electronic data to try to find evidence for use at trial has always been difficult and expensive. Over the past few years, the advent of Big Data, where both individuals and organizations retain vast amounts of complex electronic information, has significantly compounded these problems. The legal doctrine of proportionality responds to these problems by attempting to constrain the costs and burdens of discovery to what are reasonable. A balance is sought between the projected burdens and likely benefits of proposed discovery, considering the issues and value of the case. Several software programs on the market today have responded to the challenges of Big Data by implementing a form of artificial intelligence (“AI”) known as active machine learning to help lawyers review electronic documents. This Article discusses these issues and shows that AI-enhanced document review directly supports the doctrine of proportionality. When used together, proportionality and predictive coding provide a viable, long-term solution to the problems and opportunities of the legal search of Big Data.

The 2013 article was based on Predictive Coding version 1.0. Under this first method you train and rank documents, and then review only the higher-ranking documents. Here is a more detailed description from pages 23-24 of the article:

This kind of AI-enhanced legal review is typically described today in legal literature by the term predictive coding. This is because the computer predicts how an entire body of documents should be coded (classified) based on how the lawyer has coded the smaller training sets. The prediction places a probability ranking on each document, typically ranging from 0% to 100% probability. Thus, in a relevancy classification, each and every document in the entire dataset (the corpus) is ranked with a percentage of likely relevance and irrelevance. …

As will be shown, this ranking feature is key to the use of the legal doctrine of proportionality. The ability to rank all documents in a corpus on probable relevance is a new feature that no other legal search software has previously provided.

It was a two-phase procedure: train, then review. Yes, some review would take place in the first training phase, but this would be a relatively small number of documents, say 10-20% of the total documents reviewed. Most of the human review of documents would take place in phase two. The workflow of version 1.0 is shown in the diagram below and is described in detail in the article, starting at page 31.

Predictive Coding and the Proportionality Doctrine argued that attorneys should scale the number of documents for the second phase of document review based on estimated costs constrained to a proportional amount. No more spending $100,000 for document review in a $200,000 case. The number of documents selected for review would be limited to what proportional costs allow. Predictive coding and its ranking features allowed you to select for review the documents most likely to be relevant. If you could only afford to spend $20,000 on a document review project, then you would limit the number of documents reviewed to the highest-ranked probable relevant documents within that budget. Here is the article’s description, at pages 54-55, of the process and the link between the doctrine of proportionality and predictive coding.

What makes this a marriage truly made in heaven is the document-ranking capabilities of predictive coding. This allows parties to limit the documents considered for final production to those that the computer determines have the highest probative value. This key ranking feature of AI-enhanced document review allows the producing party to provide the requesting party with the most bang for the buck. This not only saves the producing party money, and thus keeps its costs proportional, but it saves time and expenses for the requesting party. It makes the production much more precise, and thus faster and easier to review. It avoids what can be a costly exercise to a requesting party to wade through a document dump, a production that contains a high number of irrelevant or marginally relevant documents. Most importantly, it gives the requesting party what it really wants—the documents that are the most important to the case.

In the article, at pages 58-60, I called this method Bottom-Line-Driven Proportional Review and described the process in greater detail.

The bottom line in e-discovery production is what it costs. Despite what some lawyers and vendors may say, total cost is not an impossible question to answer. It takes an experienced lawyer’s skill to answer, but, after a while, you can get quite good at such estimation. It is basically a matter of estimating attorney billable hours plus vendor costs. With practice, cost estimation can become a reliable art, a projection that you can count on for budgeting purposes, and, as we will see, for proportionality arguments. …

The new strategy and methodology is based on a bottom line approach where you estimate what review costs will be, make a proportionality analysis as to what should be spent, and then engage in defensible culling to bring the review costs within the proportional budget. The producing party determines the number of documents to be subjected to final review by calculating backwards from the bottom line of what they are willing, or required, to pay for the production. …

Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget. The review costs you estimate must be based on best practices, which in all large review projects today means predictive coding, and the estimates must be accurate (i.e., no puffing or mere guesswork).

Note that this last quote (emphasis added) from Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data shows an important limitation of the article’s budgetary proposal: it was limited to large review projects where predictive coding was used. Without this marriage to predictive coding, the promise of proportionality by cost estimation was lost. My article today fills this gap.

Here I will explain how document review can be structured to provide estimates and review constraints, even when predictive coding and its ranking are not used. This is, in effect, the single lawyer’s guide, one where there has not been a marriage with predictive coding. It is a guide for small- and medium-sized document review projects, which are, after all, the vast majority of projects faced by the legal profession.

To be honest, back when I first wrote the law review article I did not think it would be necessary to develop such a proportionality method, one that does not use AI document ranking. I assumed predictive coding would take off and by now would be used in almost all projects, no matter what the size. I assumed that, since active machine learning and document ranking were such good new technology, even our conservative profession would embrace them within the next few years. Boy, was I wrong about that. The closing lines of Predictive Coding and the Proportionality Doctrine: A Marriage Made in Big Data have proven naive.

The key facts needed to try a case and to do justice can be found in any size case, big and small, at an affordable price, but you have to embrace change and adopt new legal and technical methodologies. The Bottom-Line-Driven Proportional Review method is part of that answer, and so too is advanced-review software at affordable prices. When the two are used together, it is a marriage made in heaven.

I blame both lawyers and e-discovery vendors for this failure, as well as myself for misjudging my peers. Law schools and corporate counsel have not helped much either. Only the judiciary seems to have caught on and kept up.

Proportionality as a legal doctrine took off as expected after 2013, but not the marriage with predictive coding. Lawyers have proven to be much more stubborn than anticipated. They will barely even go out with predictive coding, no matter how attractive she is, much less marry her. The profession as a whole remains remarkably slow to adopt new technology. The judges are tempted to use their shotgun to force a wedding, but so far have refrained from ordering a party to use predictive coding. Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016) (J. Peck: “There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet.”)

Changes Since 2013

A lot has happened since 2013, when Predictive Coding and the Proportionality Doctrine was written. In December 2015, Rule 26(b) on relevance was revised to strengthen proportionality, and we have made substantial improvements to predictive coding methods. In the ensuing years most experts have abandoned the early two-step method of train-then-review in favor of a method where training continues throughout the review process. In other words, today we keep training until the end. See, e.g., the e-Discovery Team’s Predictive Coding 4.0, with its Intelligently Spaced Training. (This is similar to a method popularized by Maura Grossman and Gordon Cormack, which they called Continuous Active Learning, or CAL for short, a term they later trademarked.)

The 2015 revision to Rule 26(b) on relevance has spurred case law clarifying that undue burden, the sixth proportionality factor under Rule 26(b)(1), must be argued in detail, with supporting facts proven. The six factors are:

  1. “the importance of the issues at stake in this action;
  2. the amount in controversy;
  3. the parties’ relative access to relevant information;
  4. the parties’ resources;
  5. the importance of the discovery in resolving the issues; and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.”

Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136 (D.D.C. Sept. 11, 2017). Although all factors are important and should be addressed, the last factor is usually the most important one in a discovery dispute. It is also the factor that can be addressed generally for all cases and is the core of proportionality.

Proportional “Bottom-Line-Driven” Method of Document Review That Does Not Require Use of Predictive Coding

I have shared how I use predictive coding with continuous training in my TARcourse.com online instruction program. The eight-step workflow is shown below.

I have not previously shared any information on the document review workflow that I follow in small and medium sized cases where predictive coding is not used. The rest of this article will do so now.

Please note that I have a well-developed and articulated list of steps and procedures for attorneys in my law firm to follow in such small cases. I keep these as a trade secret and will not reveal them here. Although they are widely known in my firm, and slightly revised and updated each year, they are not public. Still, any expert in document review should be able to create their own particular rules and implementation methods. Warning: if you are not such an expert, be careful about relying on these high-level explanations alone. The devil is in the details, and you should retain an expert to assist.

Here is a chart summarizing the Six-Step Workflow and the six basic concepts that must be understood for the process to work at maximum efficiency.

The first three steps iterate with searches to cull out the irrelevant documents, and then culminate with Step Four, Disclosure of the plan developed for Steps Five and Six, Final Review and Production. The sixth step, Production, is always done in phases according to proportional planning.

A key skill that must be learned is project cost estimation, including both fees and expenses. The attorneys involved must also learn how to communicate with each other, with vendors, with opposing counsel, and with the court. Rigid enforcement of work-product confidentiality is counter-productive to the goal of cost-efficient projects. Agree on the small stuff and save your arguments for the cost-saving questions that are worth the effort.

 

The Proportionality Doctrine

The doctrine of proportionality as a legal initiative was launched by The Sedona Conference in 2010 as a reaction to the exploding costs of e-discovery. The Sedona Conference, The Sedona Conference Commentary on Proportionality in Electronic Discovery, 11 SEDONA CONF. J. 289, 292–94 (2010). See also John L. Carroll, Proportionality in Discovery: A Cautionary Tale, 32 CAMPBELL L. REV. 455, 460 (2010) (“If courts and litigants approach discovery with the mindset of proportionality, there is the potential for real savings in both dollars and time to resolution.”); Maura Grossman & Gordon Cormack, Some Thoughts on Incentives, Rules, and Ethics Concerning the Use of Search Technology in E-Discovery, 12 SEDONA CONF. J. 89, 94–95, 101–02 (2011).

The doctrine received a big boost with the adoption of the 2015 Amendment to Rule 26. The rule was changed to provide that discovery must be both relevant and “proportional to the needs of the case.” Fed. R. Civ. P. 26(b)(1). To determine whether a discovery request is proportional, you are required to weigh the following six factors: “(1) the importance of the issues at stake in this action; (2) the amount in controversy; (3) the parties’ relative access to relevant information; (4) the parties’ resources; (5) the importance of the discovery in resolving the issues; and (6) whether the burden or expense of the proposed discovery outweighs its likely benefit.” Williams v. BASF Catalysts, LLC, Civ. Action No. 11-1754, 2017 WL 3317295, at *4 (D.N.J. Aug. 3, 2017) (citing Fed. R. Civ. P. 26(b)(1)); Arrow Enter. Computing Solutions, Inc. v. BlueAlly, LLC, No. 5:15-CV-37-FL, 2017 WL 876266, at *4 (E.D.N.C. Mar. 3, 2017); FTC v. Staples, Inc., Civ. Action No. 15-2115 (EGS), 2016 WL 4194045, at *2 (D.D.C. Feb. 26, 2016).

“[N]o single factor is designed to outweigh the other factors in determining whether the discovery sought is proportional,” and all proportionality determinations must be made on a case-by-case basis. Williams, 2017 WL 3317295, at *4 (internal citations omitted); see also Bell v. Reading Hosp., Civ. Action No. 13-5927, 2016 WL 162991, at *2 (E.D. Pa. Jan. 14, 2016). To be sure, however, “the amendments to Rule 26(b) do not alter the basic allocation of the burden on the party resisting discovery to—in order to successfully resist a motion to compel—specifically object and show that . . . a discovery request would impose an undue burden or expense or is otherwise objectionable.” Mir v. L-3 Commc’ns Integrated Sys., L.P., 319 F.R.D. 220, 226 (N.D. Tex. 2016), as quoted by Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136, (D.D.C. Sept. 11, 2017).

The Oxbow case is discussed at length in my recent blog Judge Facciola’s Successor, Judge Michael Harvey, Provides Excellent Proportionality Analysis in an Order to Compel (e-Discovery Team, 3/1/18). Judge Harvey carefully examined the costs and burdens claimed by plaintiffs and rejected their undue burden argument.

Plaintiffs’ counsel explained at the second hearing in this matter that Oxbow has spent $1.391 million to date on reviewing and producing approximately 584,000 documents from its nineteen other custodians and Oxbow’s email archive. See 8/24/17 TR. at 44:22-45:10. And again, Oxbow seeks tens of millions of dollars from Defendants. Through that lens, the estimated cost of reviewing and producing Koch’s responsive documents—even considering the total approximate cost of $142,000 for that effort, which includes the expense of the sampling effort—while certainly high, is not so unreasonably high as to warrant rejecting Defendants’ request out of hand. See Zubulake v. UBS Warburg, LLC, 217 F.R.D. 309, 321 (S.D.N.Y. 2003) (explaining, in the context of a cost-shifting request, that “[a] response to a discovery request costing $100,000 sounds (and is) costly, but in a case potentially worth millions of dollars, the cost of responding may not be unduly burdensome”); Xpedior Creditor Trust v. Credit Suisse First Boston (USA), Inc., 309 F. Supp. 2d 459, 466 (S.D.N.Y. 2003) (finding no “undue burden or expense” to justify cost-shifting where the requested discovery cost approximately $400,000 but the litigation involved at least $68.7 million in damages). …

In light of the above analysis—including the undersigned’s assessment of each of the Rule 26 proportionality factors, all of which weigh in favor of granting Defendants’ motion—the Court is unwilling to find that the burden of reviewing the remaining 65,000 responsive documents for a fraction of the cost of discovery to date should preclude Defendants’ proposed request. See BlueAlly, 2017 WL 876266, at *5 (“This [last Rule 26] factor may combine all the previous factors into a final analysis of burdens versus benefits.” (citing Fed. R. Civ. P. 26 advisory committee’s notes)).

For more analysis and case law on proportionality see Proportionality Φ and Making It Easy To Play “e-Discovery: Small, Medium or Large?” in Your Own Group or Class, (e-Discovery Team, 11/26/17). Also see The Sedona Conference Commentary on Proportionality, May 2017.

Learning How to Estimate Document Review Costs

The best way to determine the total cost of a project is by projection from experience and analysis on a cost per file basis. General experience of review costs can be very helpful, but the gold standard comes from measurement of costs actually incurred in the same project, usually after several hours or days of work, depending on the size of the project. You calculate costs incurred to date and then project forward on a cost per file basis. This is the core idea of the Six-Step document review protocol that this article begins to explain.

The actual project costs are the best possible metrics for estimation. Apparently that was never done in Oxbow, because plaintiffs’ counsel’s projected document review cost estimates varied so much. A per file cost analysis of the information in the Oxbow opinion shows that the parties missed a key metric. The per file costs ranged from an actual cost of $2.38 per file for the first 584,000 files, to a $1.17 per file estimate to review 214,000 additional files, to an estimate of $1.73 per file to review 82,000 more files, to an actual cost of $4.74 per file to review 12,074 files, to a final estimate of $1.22 per file to review the remaining 69,926 files. The actual costs ran far higher than the estimates, meaning the party resisting the discovery shortchanged itself by failing to do the math.
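To make that arithmetic concrete, here is a minimal sketch of the per file analysis described above, using only the figures quoted from the Oxbow opinion. The function names and the rounding are mine, for illustration; the point is simply that dividing the cost incurred by the documents reviewed gives a rate you can project forward.

    def cost_per_file(total_cost, doc_count):
        # Rate actually incurred (or estimated) for a batch of reviewed documents.
        return total_cost / doc_count

    def projected_cost(remaining_docs, per_file_rate):
        # Project the cost to review the remaining documents at an observed rate.
        return remaining_docs * per_file_rate

    # Figures quoted in the Oxbow opinion: roughly $1.391 million spent to review
    # and produce approximately 584,000 documents to date.
    actual_rate = cost_per_file(1_391_000, 584_000)
    print(f"Actual cost per file to date: ${actual_rate:.2f}")     # about $2.38

    # Projecting the remaining 65,000 responsive documents at the rate actually
    # incurred, rather than at the lower estimated rates, is the kind of key
    # metric the parties missed.
    print(f"Projected cost for 65,000 files: ${projected_cost(65_000, actual_rate):,.0f}")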

Here is how I explained the estimation process in Predictive Coding and the Proportionality Doctrine at pages 60-61:

Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget.
A few examples may help clarify how this method works. Assume a case where you determine a proportional cost of production to be $50,000, and estimate, based on sampling and other hard facts, that it will cost you $1.25 per file for both the automated and manual review before production of the ESI at issue … Then you can review no more than 40,000 documents and stay within budget. It is that simple. No higher math is required.

Estimation for bottom-line-driven review is essentially a method for marshaling evidence to support an undue burden argument under Rule 26(b)(1). Let’s run through it again in greater detail and build a simple formula to illustrate the process.

First, estimate the total number of documents remaining to be reviewed after culling by your tested keywords and other searches (hereinafter “T”). This is the most difficult step, but it is something most attorney experts and vendors are well qualified to help you with. Essentially, “T” represents the number of documents left unreviewed for Step Five, Final Review. These are the documents found in Steps One and Two, ECA Multimodal Search and Testing. These steps, along with the estimate calculation, usually repeat several times to cull in the documents that are most likely relevant to the claims and defenses. The T, Total Documents Left for Review, are the documents in the final revised keyword search folders and the concept and similarity search folders. The goal is to obtain a relevance precision in these folders greater than 50%, preferably at least 75%.

To begin a hypothetical example, assume that the total document count in the folders set up for final review is 5,000 documents: T=5,000. Next, count how many relevant and highly relevant files have already been located (hereinafter “R”). Assume for our example that 1,000 relevant and highly relevant documents have been found: R=1,000.

Next, look up the total attorney fees already incurred in the matter to date for the actual document search and review work by attorneys and paralegals (hereinafter collectively “F”). Include in this total the vendor charges related to the review, but exclude forensics and collection fees. To do this more easily, make sure that the time descriptions your legal team inputs make clear which fees are for review. Always remember that you may someday be required to provide an affidavit or testimony to support this cost estimate in a motion for protective order. For our example, assume that a total of $1,500 in costs and fees has already been incurred for document search and review work only: F=$1,500. F divided by R gives the cost per file. Here it is $1.50 per file (F/R).

Finally, multiply the cost per file (F/R) by the number of documents still remaining to be reviewed, T. In other words, T * (F/R). Here that is 5,000 (T) times the $1.50 cost per file (F/R), which equals $7,500. You can then disclose this calculation to opposing counsel to help establish the reasonableness (proportionality) of your plan; this is Step Four, Work Product Disclosure. Note that you are estimating a total spend for this review project of $9,000: the $1,500 already spent, plus an estimated additional $7,500 to complete the project.
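Here is the same arithmetic expressed as a short function, a minimal sketch in Python. The variable names mirror the T, R and F labels used in this article; nothing beyond the formula above is assumed.

    def review_estimate(T, R, F):
        # T = documents left for final review after culling
        # R = relevant and highly relevant documents already located
        # F = attorney fees and review-related vendor charges incurred to date
        cost_per_file = F / R                    # F / R
        cost_to_complete = T * cost_per_file     # T * (F / R)
        total_projected = F + cost_to_complete   # total spend for the review project
        return cost_per_file, cost_to_complete, total_projected

    # The hypothetical from the text: T = 5,000, R = 1,000, F = $1,500.
    print(review_estimate(5_000, 1_000, 1_500))   # (1.5, 7500.0, 9000.0)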

There are many ways to calculate the probable fees to complete a document review project. This simple formula method has the advantage of being based on actual experience and costs incurred. It is also simple and easy to understand compared to most other methods. The method could be criticized as inflating expected costs, on the theory that the initial work of finding relevant documents is usually slower and more expensive than the concluding work of reviewing the tested search folders. This is generally true, but it is countered by three facts: (1) many of the initial relevant documents found in ECA (Step One) were “low hanging fruit” and easier to locate than what remains; (2) the precision rate of the documents remaining to be reviewed after culling, T, will be much higher than in the document folders previously reviewed (the higher the precision rate, the slower the rate of review, because it takes longer to code a relevant document than an irrelevant one); and (3) additional time is necessarily incurred in the remaining review for redaction, privilege analysis, and quality control efforts not performed in the review to date.

To be concluded … In the conclusion of this article I will review the Six Steps and complete the discussion of the related concepts.




WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0

December 3, 2017

Many lawyers and technologists like predictive coding and recommend it to their colleagues. They have good reasons. It has worked for them. It has allowed them to do e-discovery reviews in an effective, cost-efficient manner, especially on big projects. That is true for me too, but that is not why I love predictive coding. My feelings come from the excitement, fun, and amazement that often arise from seeing it in action, first hand. I love watching the predictive coding features in my software find documents that I could never have found on my own. I love the way the AI in the software helps me to do the impossible. I really love how it makes me far smarter and more skilled than I really am.

I have been getting those kinds of positive feelings consistently by using the latest Predictive Coding 4.0 methodology (shown right) and KrolLDiscovery’s latest eDiscovery.com Review software (“EDR”). So too have my e-Discovery Team members who helped me to participate in TREC 2015 and 2016 (the great science experiment for the latest text search techniques sponsored by the National Institute of Standards and Technology). During our grueling forty-five days of experiments in 2015, and again for sixty days in 2016, we came to admire the intelligence of the new EDR software so much that we decided to personalize the AI as a robot. We named him Mr. EDR out of respect. He even has his own website now, MrEDR.com, where he explains how he helped my e-Discovery Team in the 2015 and 2016 TREC Total Recall Track experiments.

The bottom line of this research for us was to prove and improve our methods. Our latest version 4.0 of Predictive Coding, the Hybrid Multimodal IST Method, is the result. We have even open-sourced this method (well, most of it) and teach it in a free seventeen-class online program: TARcourse.com. Aside from testing and improving our methods, another, perhaps even more important result of TREC for us was our rediscovery that with good teamwork, and good software like Mr. EDR at your side, document review need never be boring again. The documents themselves may well be boring as hell (that’s another matter), but the search for them need not be.

How and Why Predictive Coding is Fun

Steps Four, Five and Six of the standard eight-step workflow for Predictive Coding 4.0 are where we work with the active machine-learning features of Mr. EDR. These are its predictive coding features, a type of artificial intelligence. We train the computer on our conception of relevance by showing it relevant and irrelevant documents that we have found. The software is designed to then go out and find all of the other relevant documents in the total dataset. One of the skills we learn is knowing when we have taught enough and can stop the training and complete the document review. At TREC we call that the Stop decision. It is important for keeping down the costs of document review.

We use a multimodal approach to find training documents, meaning we use all of the other search features of Mr. EDR to find relevant ESI, such as keyword, similarity, and concept searches. We iterate the training with sample documents, both relevant and irrelevant, until the computer starts to understand the scope of relevance we have in mind. It is a training exercise to make our AI smart, to get it to understand the basic ideas of relevance for that case. It usually takes multiple rounds of training for Mr. EDR to understand what we have in mind. But he is a fast learner, and by using the latest hybrid multimodal IST (“intelligently spaced training”) techniques, we can usually complete his training in a few days. At TREC, where we were moving fast after hours with the A-Team, we completed some of the training experiments in just a few hours.
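Mr. EDR’s internal algorithms are proprietary, so the sketch below is only my own simplified illustration of the general train-rank-review-retrain pattern described above, built with common open-source tools (scikit-learn) on an invented toy corpus. It is not the EDR software, and it leaves out the multimodal searches, quality control, and Stop decision analysis that the full method requires.

    # Toy sketch of an active machine learning loop for document relevance.
    # NOT the EDR software or the full Predictive Coding 4.0 method; the corpus,
    # the seed coding, and the simulated attorney review are all invented here.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    documents = [
        "price fixing agreement with competitor",
        "quarterly budget spreadsheet attached",
        "lunch menu for the holiday party",
        "call competitor to discuss market allocation",
        "fantasy football league standings",
        "memo on pricing strategy and competitor contacts",
    ]
    labels = {0: True, 2: False}     # seed set already coded: one relevant, one irrelevant
    X = TfidfVectorizer().fit_transform(documents)

    def attorney_review(i):
        # Stand-in for a human reviewer coding the document for relevance.
        return "competitor" in documents[i] or "pricing" in documents[i]

    for training_round in range(4):  # iterate until the Stop decision is made
        unseen = [i for i in range(len(documents)) if i not in labels]
        if not unseen:
            break
        seen = sorted(labels)
        model = LogisticRegression().fit(X[seen], [labels[i] for i in seen])
        probabilities = model.predict_proba(X[unseen])[:, 1]      # probability of relevance
        ranked = sorted(zip(unseen, probabilities), key=lambda p: -p[1])
        top_doc, top_prob = ranked[0]                              # highest-ranked unreviewed document
        labels[top_doc] = attorney_review(top_doc)                 # human-in-the-loop coding
        print(f"round {training_round}: reviewed doc {top_doc} (p={top_prob:.2f})")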

After a while Mr. EDR starts to “get it,” he starts to really understand what we are after, what we think is relevant in the case. That is when a happy shock and awe type moment can happen. That is when Mr. EDR’s intelligence and search abilities start to exceed our own. Yes. It happens. The pupil then starts to evolve beyond his teachers. The smart algorithms start to see patterns and find evidence invisible to us. At that point we sometimes even let him train himself by automatically accepting his top-ranked predicted relevant documents without even looking at them. Our main role then is to determine a good range for the automatic acceptance and do some spot-checking. We are, in effect, allowing Mr. EDR to take over the review. Oh what a feeling to then watch what happens, to see him keep finding new relevant documents and keep getting smarter and smarter by his own self-programming. That is the special AI-high that makes it so much fun to work with Predictive Coding 4.0 and Mr. EDR.

It does not happen in every project, but with the new Predictive Coding 4.0 methods and the latest Mr. EDR, we are seeing this kind of transformation happen more and more often. It is a tipping point in the review when we see Mr. EDR go beyond us. He starts to unearth relevant documents that my team would never even have thought to look for. The relevant documents he finds are sometimes completely dissimilar to any others we found before. They do not have the same keywords, or even the same known concepts. Still, Mr. EDR sees patterns in these documents that we do not. He can find the hidden gems of relevance, even outliers and black swans, if they exist. When he starts to train himself, that is the point in the review when we think of Mr. EDR as going into superhero mode. At least, that is the way my young e-Discovery Team members like to talk about him.

By the end of many projects the algorithmic functions of Mr. EDR have attained a higher intelligence and skill level than our own (at least on the task of finding the relevant evidence in the document collection). He is always lightning fast and inexhaustible, even untrained, but by the end of his training, he becomes a search genius. Watching Mr. EDR in that kind of superhero mode is what makes Predictive Coding 4.0 a pleasure.

The Empowerment of AI Augmented Search

It is hard to describe the combination of pride and excitement you feel when Mr. EDR, your student, takes your training and then goes beyond you. More than that, the super-AI you created then empowers you to do things that would have been impossible before, absurd even. That feels pretty good too. You may not be Iron Man, or look like Robert Downey, but you will be capable of remarkable feats of legal search strength.

For instance, using Mr. EDR as our Iron Man-like suits, my e-discovery A-Team of three attorneys was able to do thirty different review projects and classify 17,014,085 documents in 45 days. See the 2015 TREC experiment summary at MrEDR.com. We did these projects mostly at night and on weekends, while holding down our regular jobs. What makes this seem crazy impossible is that we were able to accomplish it by personally reviewing only 32,916 documents. That is less than 0.2% of the total collection. That means we relied on predictive coding to do 99.8% of our review work. Incredible, but true.

Using traditional linear review methods it would have taken us 45 years to review that many documents! Instead, we did it in 45 days. Plus our recall and precision rates were insanely good. We even scored 100% precision and 100% recall in one TREC project in 2015 and two more in 2016. You read that right. Perfection. Many of our other projects attained scores in the high and mid nineties. We are not saying you will get results like that. Every project is different, and some are much more difficult than others. But we are saying that this kind of AI-enhanced review is not only fast and efficient, it is effective.

Yes, it’s pretty cool when your little AI creation does all the work for you and makes you look good. Still, no robot could do this without your training and supervision. We are a team, which is why we call it hybrid multimodal, man and machine.

Having Fun with Scientific Research at TREC 2015 and 2016

During the 2015 TREC Total Recall Track experiments my team would sometimes get totally lost on a few of the really hard Topics. We were not given legal issues to search, as we usually are. They were arcane technical hacker issues, political issues, or local news stories. Not only were we in new fields, but the scope of relevance of the thirty Topics was never really explained. (We were given one- to three-word explanations in 2015; in 2016 we got a whole sentence!) We had to figure out the intended relevance during the project based on feedback from the automated TREC document adjudication system. We would have some limited understanding of relevance based on our suppositions about the initial keyword hints, and so we could begin to train Mr. EDR with that. But, in several Topics, we never had any real understanding of exactly what TREC thought was relevant.

This was a very frustrating situation at first, but here is the cool thing: even though we did not know, Mr. EDR knew. That’s right. He saw the TREC patterns of relevance hidden to us mere mortals. In many of the thirty Topics we would just sit back and let him do all of the driving, like a Google car. We would often just cheer him on (and each other) as the TREC systems kept saying Mr. EDR was right, the documents he selected were relevant. The truth is, during much of the 45 days of TREC we were like kids in a candy store having a great time. That is when we decided to give Mr. EDR a cape and superhero status. He never let us down. It is a great feeling to create an AI with greater intelligence than your own and then see it augment and improve your legal work. It is truly a hybrid human-machine partnership at its best.

I hope you get the opportunity to experience this for yourself someday. The TREC experiments in 2015 and 2016 on recall in predictive coding are over, but the search for truth and justice goes on in lawsuits across the country. Try it on your next document review project.

Do What You Love and Love What You Do

Mr. EDR, and other good predictive coding software like it, can augment our own abilities and make us incredibly productive. This is why I love predictive coding and would not trade it for any other legal activity I have ever done (although I have had similar highs from oral arguments that went great, or the rush that comes from winning a big case).

The excitement of predictive coding comes through clearly when Mr. EDR is fully trained and able to carry on without you. It is a kind of Kurzweilian mini-singularity event. It usually happens near the end of the project, but can happen earlier, when your computer catches on to what you want and starts to find the hidden gems you missed. I suggest you give Predictive Coding 4.0 and Mr. EDR a try. To make it easier I open-sourced our latest method and created an online course, TARcourse.com. It will teach anyone our method, if they have the right software. Learn the method, get the software, and then you too can have fun with evidence search. You too can love what you do. Document review need never be boring again.

Caution

One note of caution: most e-discovery vendors, including the largest, do not have active machine learning features built into their document review software. Even the few that have active machine learning do not necessarily follow the Hybrid Multimodal IST Predictive Coding 4.0 approach that we used to attain these results. They instead rely entirely on machine-selected documents for training, or even worse, rely entirely on randomly selected documents to train the software, or have elaborate, unnecessary secret control sets.

The algorithms used by some vendors who say they have “predictive coding” or “artificial intelligence” are not very good. Scientists tell me that some are only dressed-up concept search or unsupervised document clustering. Only bona fide active machine learning algorithms create the kind of AI experience that I am talking about. Document review software that does not have any active machine learning features may be cheap, and may be popular, but it lacks the power that I love. Without active machine learning, which is fundamentally different from just “analytics,” it is not possible to boost your intelligence with AI. So beware of software that just says it has advanced analytics. Ask whether it has “active machine learning.”

It is impossible to do the things described in this essay unless the software you are using has active machine learning features. This is clearly the way of the future. It is what makes document review enjoyable and why I love to do big projects. It turns scary into fun.

So, if you tried “predictive coding” or “advanced analytics” before, and it did not work for you, it could well be the software’s fault, not yours. Or it could be the poor method you were following. The method that we developed in Da Silva Moore, where my firm represented the defense, was a version 1.0 method. Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012). We have come a long way since then. We have eliminated unnecessary random control sets and gone to continuous training, instead of train then review. This is spelled out in the TARcourse.com that teaches our latest version 4.0 techniques.

The new 4.0 methods are not hard to follow. TARcourse.com puts our methods online and even teaches the theory and practice. And the 4.0 methods certainly will work, but only if you have good software; we have proven that at TREC. With just a little training, and some help at first from consultants (most vendors with bona fide active machine learning features will have good ones to help), you can have the kind of success and excitement that I am talking about.

Do not give up if it does not work for you the first time, especially in a complex project. Try another vendor instead, one that may have better software and better consultants. Also, be sure that your consultants are Predictive Coding 4.0 experts, and that you follow their advice. Finally, remember that the cheapest software is almost never the best and, in the long run, will cost you a small fortune in wasted time and frustration.

Conclusion

Love what you do. It is a great feeling and a sure-fire way to job satisfaction and success. With these new predictive coding technologies it is easier than ever to love e-discovery. Try them out. Treat yourself to the AI high that comes from using smart machine learning software and fast computers. There is nothing else like it. If you switch to the 4.0 methods and software, you too can know that thrill. You can watch an advanced intelligence, which you helped create, exceed your own abilities, exceed anyone’s abilities. You can sit back and watch Mr. EDR complete your search for you. You can watch him do so in record time and with record results. It is amazing to see good software find documents that you know you would never have found on your own.

Predictive coding AI in superhero mode can be exciting to watch. Why deprive yourself of that? Who says document review has to be slow and boring? Start making the practice of law fun again.

Here is the PDF version of this article, which you may download and distribute, so long as you do not revise it or charge for it.

 

 

