Document Review and Proportionality – Part One

March 18, 2018

In 2013 I wrote a law review article on how the costs of document review could be controlled using predictive coding and cost estimation. Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data, 26 Regent U. Law Review 1 (2013-2014). Today I write about how those costs can be controlled in document review even without predictive coding. Here is the opening paragraph of my earlier article:

The search of electronic data to try to find evidence for use at trial has always been difficult and expensive. Over the past few years, the advent of Big Data, where both individuals and organizations retain vast amounts of complex electronic information, has significantly compounded these problems. The legal doctrine of proportionality responds to these problems by attempting to constrain the costs and burdens of discovery to what are reasonable. A balance is sought between the projected burdens and likely benefits of proposed discovery, considering the issues and value of the case. Several software programs on the market today have responded to the challenges of Big Data by implementing a form of artificial intelligence (“AI”) known as active machine learning to help lawyers review electronic documents. This Article discusses these issues and shows that AI-enhanced document review directly supports the doctrine of proportionality. When used together, proportionality and predictive coding provide a viable, long-term solution to the problems and opportunities of the legal search of Big Data.

The 2013 article was based on Predictive Coding version 1.0. Under this first method you train and rank documents and then review only the higher-ranking documents. Here is a more detailed description from pages 23-24 of the article:

This kind of AI-enhanced legal review is typically described today in legal literature by the term predictive coding. This is because the computer predicts how an entire body of documents should be coded (classified) based on how the lawyer has coded the smaller training sets. The prediction places a probability ranking on each document, typically ranging from 0% to 100% probability. Thus, in a relevancy classification, each and every document in the entire dataset (the corpus) is ranked with a percentage of likely relevance and irrelevance. …
As will be shown, this ranking feature is key to the use of the legal doctrine of proportionality. The ability to rank all documents in a corpus on probable relevance is a new feature that no other legal search software has previously provided.

It was a two-phase procedure: train, then review. Yes, some review would take place in the first training phase, but this would be a relatively small number, say 10-20% of the total documents reviewed. Most of the human review of documents would take place in phase two. The workflow of version 1.0 is shown in the diagram below and is described in detail in the article, starting at page 31.

Predictive Coding and the Proportionality Doctrine argued that attorneys should scale the number of documents for the second phase of document review based on estimated costs constrained to a proportional amount. No more spending $100,000 for document review in a $200,000 case. The number of documents you selected for review would be limited to proportional costs. Predictive coding and its ranking features allowed you to select for review the documents most likely to be relevant. If you could only afford to spend $20,000 on a document review project, then you would limit the number of documents reviewed to those within that scope that were ranked highest as probably relevant. Here is the article's description, at pages 54-55, of the process and the link between the doctrine of proportionality and predictive coding:

What makes this a marriage truly made in heaven is the document-ranking capabilities of predictive coding. This allows parties to limit the documents considered for final production to those that the computer determines have the highest probative value. This key ranking feature of AI-enhanced document review allows the producing party to provide the requesting party with the most bang for the buck. This not only saves the producing party money, and thus keeps its costs proportional, but it saves time and expenses for the requesting party. It makes the production much more precise, and thus faster and easier to review. It avoids what can be a costly exercise to a requesting party to wade through a document dump, a production that contains a high number of irrelevant or marginally relevant documents. Most importantly, it gives the requesting party what it really wants—the documents that are the most important to the case.

In the article, at pages 58-60, I called this method Bottom-Line-Driven Proportional Review and described the process in greater detail:

The bottom line in e-discovery production is what it costs. Despite what some lawyers and vendors may say, total cost is not an impossible question to answer. It takes an experienced lawyer’s skill to answer, but, after a while, you can get quite good at such estimation. It is basically a matter of estimating attorney billable hours plus vendor costs. With practice, cost estimation can become a reliable art, a projection that you can count on for budgeting purposes, and, as we will see, for proportionality arguments. …
The new strategy and methodology is based on a bottom line approach where you estimate what review costs will be, make a proportionality analysis as to what should be spent, and then engage in defensible culling to bring the review costs within the proportional budget. The producing party determines the number of documents to be subjected to final review by calculating backwards from the bottom line of what they are willing, or required, to pay for the production. …
Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget. The review costs you estimate must be based on best practices, which in all large review projects today means predictive coding, and the estimates must be accurate (i.e., no puffing or mere guesswork).
Note that this last quote (emphasis added) from Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data shows an important limitation of the article’s budgetary proposal: it was limited to large review projects where predictive coding was used. Without this marriage to predictive coding, the promise of proportionality by cost estimation was lost. My article today fills this gap.
Here I will explain how document review can be structured to provide estimates and review constraints even when predictive coding and its ranking are not used. This is, in effect, the single lawyer’s guide, one where there has not been a marriage with predictive coding. It is a guide for small and medium sized document review projects, which are, after all, the vast majority of projects faced by the legal profession.

To be honest, back when I first wrote the law review article I did not think it would be necessary to develop such a proportionality method, one that does not use AI document ranking. I assumed predictive coding would take off and by now would be used in almost all projects, no matter what the size. I assumed that since active machine learning and document ranking were such good new technology, even our conservative profession would embrace them within the next few years. Boy was I wrong about that. The closing lines of Predictive Coding and the Proportionality Doctrine: a Marriage Made in Big Data have been proven naive.

The key facts needed to try a case and to do justice can be found in any size case, big and small, at an affordable price, but you have to embrace change and adopt new legal and technical methodologies. The Bottom-Line-Driven Proportional Review method is part of that answer, and so too is advanced-review software at affordable prices. When the two are used together, it is a marriage made in heaven.

I blame both lawyers and e-discovery vendors for this failure, as well as myself for misjudging my peers. Law schools and corporate counsel have not helped much either. Only the judiciary seems to have caught on and kept up.

Proportionality as a legal doctrine took off as expected after 2013, but not the marriage with predictive coding. Lawyers have proven to be much more stubborn than anticipated. They will barely even go out with predictive coding, no matter how attractive she is, much less marry her. The profession as a whole remains remarkably slow to adopt new technology. The judges are tempted to use their shotgun to force a wedding, but so far have refrained from ordering a party to use predictive coding. Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016) (J. Peck: “There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet.”)

Changes Since 2013

A lot has happened since 2013, when Predictive Coding and the Proportionality Doctrine was written. In December 2015, Rule 26(b) on relevance was revised to strengthen proportionality, and we have made substantial improvements to Predictive Coding methods. In the ensuing years most experts have abandoned the early two-step method of train-then-review in favor of a method where training continues throughout the review process. In other words, today we keep training until the end. See, e.g., the e-Discovery Team’s Predictive Coding, version 4.0, with its Intelligently Spaced Training. (This is similar to a method popularized by Maura Grossman and Gordon Cormack, which they called Continuous Active Learning, or CAL for short, a term they later trademarked.)

The 2015 revision to Rule 26(b) on relevance has spurred case law and clarified that undue burden, the sixth factor of proportionality under Rule 26(b)(1), must be argued in detail with facts proven. The six factors are:

  1. the importance of the issues at stake in this action;
  2. the amount in controversy;
  3. the parties’ relative access to relevant information;
  4. the parties’ resources;
  5. the importance of the discovery in resolving the issues; and
  6. whether the burden or expense of the proposed discovery outweighs its likely benefit.

Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136 (D.D.C. Sept. 11, 2017). Although all factors are important and should be addressed, the last factor is usually the most important one in a discovery dispute. It is also the factor that can be addressed generally for all cases and is the core of proportionality.

Proportional “Bottom Line Driven” Method of Document Review that Does Not Require Use of Predictive Coding

I have shared how I use predictive coding with continuous training in my TARcourse.com online instruction program. The eight-step workflow is shown below.

I have not previously shared any information on the document review workflow that I follow in small and medium sized cases where predictive coding is not used. The rest of this article will do so now.

Please note that I have a well-developed and articulated list of steps and procedures for attorneys in my law firm to follow in such small cases. I keep this as a trade secret and will not reveal it here. Although the steps are widely known in my firm, and slightly revised and updated each year, they are not public. Still, any expert in document review should be able to create their own particular rules and implementation methods. Warning: if you are not such an expert, be careful about relying on these high-level explanations alone. The devil is in the details and you should retain an expert to assist.

Here is a chart summarizing the Six-Step Workflow and the six basic concepts that must be understood for the process to work at maximum efficiency.

The first three steps iterate with searches to cull out the irrelevant documents, and then culminate with Step Four, Disclosure of the plan developed for Steps Five and Six, Final Review and Production. The sixth step, production, is always done in phases according to proportional planning.

A key skill that must be learned is project cost estimation, including fees and expenses. The attorneys involved must also learn how to communicate among themselves and with the vendors, opposing counsel and the court. Rigid enforcement of work-product confidentiality is counter-productive to the goal of cost-efficient projects. Agree on the small stuff and save your arguments for the cost-saving questions that are worth the effort.

 

The Proportionality Doctrine

The doctrine of proportionality as a legal initiative was launched by The Sedona Conference in 2010 as a reaction to the exploding costs of e-discovery. The Sedona Conference, The Sedona Conference Commentary on Proportionality in Electronic Discovery, 11 SEDONA CONF. J. 289, 292–94 (2010). See also John L. Carroll, Proportionality in Discovery: A Cautionary Tale, 32 CAMPBELL L. REV. 455, 460 (2010) (“If courts and litigants approach discovery with the mindset of proportionality, there is the potential for real savings in both dollars and time to resolution.”); Maura Grossman & Gordon Cormack, Some Thoughts on Incentives, Rules, and Ethics Concerning the Use of Search Technology in E-Discovery, 12 SEDONA CONF. J. 89, 94–95, 101–02 (2011).

The doctrine received a big boost with the adoption of the 2015 Amendment to Rule 26. The rule was changed to provide that discovery must be both relevant and “proportional to the needs of the case.” Fed. R. Civ. P. 26(b)(1). To determine whether a discovery request is proportional, you are required to weigh the following six factors: “(1) the importance of the issues at stake in this action; (2) the amount in controversy; (3) the parties’ relative access to relevant information; (4) the parties’ resources; (5) the importance of the discovery in resolving the issues; and (6) whether the burden or expense of the proposed discovery outweighs its likely benefit.” Williams v. BASF Catalysts, LLC, Civ. Action No. 11-1754, 2017 WL 3317295, at *4 (D.N.J. Aug. 3, 2017) (citing Fed. R. Civ. P. 26(b)(1)); Arrow Enter. Computing Solutions, Inc. v. BlueAlly, LLC, No. 5:15-CV-37-FL, 2017 WL 876266, at *4 (E.D.N.C. Mar. 3, 2017); FTC v. Staples, Inc., Civ. Action No. 15-2115 (EGS), 2016 WL 4194045, at *2 (D.D.C. Feb. 26, 2016).

“[N]o single factor is designed to outweigh the other factors in determining whether the discovery sought is proportional,” and all proportionality determinations must be made on a case-by-case basis. Williams, 2017 WL 3317295, at *4 (internal citations omitted); see also Bell v. Reading Hosp., Civ. Action No. 13-5927, 2016 WL 162991, at *2 (E.D. Pa. Jan. 14, 2016). To be sure, however, “the amendments to Rule 26(b) do not alter the basic allocation of the burden on the party resisting discovery to—in order to successfully resist a motion to compel—specifically object and show that . . . a discovery request would impose an undue burden or expense or is otherwise objectionable.” Mir v. L-3 Commc’ns Integrated Sys., L.P., 319 F.R.D. 220, 226 (N.D. Tex. 2016), as quoted by Oxbow Carbon & Minerals LLC v. Union Pacific Railroad Company, No. 11-cv-1049 (PLF/GMH), 2017 WL 4011136 (D.D.C. Sept. 11, 2017).

The Oxbow case is discussed at length in my recent blog Judge Facciola’s Successor, Judge Michael Harvey, Provides Excellent Proportionality Analysis in an Order to Compel (e-Discovery Team, 3/1/18). Judge Harvey carefully examined the costs and burdens claimed by plaintiffs and rejected their undue-burden argument.

Plaintiffs’ counsel explained at the second hearing in this matter that Oxbow has spent $1.391 million to date on reviewing and producing approximately 584,000 documents from its nineteen other custodians and Oxbow’s email archive. See 8/24/17 TR. at 44:22-45:10. And again, Oxbow seeks tens of millions of dollars from Defendants. Through that lens, the estimated cost of reviewing and producing Koch’s responsive documents—even considering the total approximate cost of $142,000 for that effort, which includes the expense of the sampling effort—while certainly high, is not so unreasonably high as to warrant rejecting Defendants’ request out of hand. See Zubulake v. UBS Warburg, LLC, 217 F.R.D. 309, 321 (S.D.N.Y. 2003) (explaining, in the context of a cost-shifting request, that “[a] response to a discovery request costing $100,000 sounds (and is) costly, but in a case potentially worth millions of dollars, the cost of responding may not be unduly burdensome”); Xpedior Creditor Trust v. Credit Suisse First Boston (USA), Inc., 309 F. Supp. 2d 459, 466 (S.D.N.Y. 2003) (finding no “undue burden or expense” to justify cost-shifting where the requested discovery cost approximately $400,000 but the litigation involved at least $68.7 million in damages). …

In light of the above analysis—including the undersigned’s assessment of each of the Rule 26 proportionality factors, all of which weigh in favor of granting Defendants’ motion—the Court is unwilling to find that the burden of reviewing the remaining 65,000 responsive documents for a fraction of the cost of discovery to date should preclude Defendants’ proposed request. See BlueAlly, 2017 WL 876266, at *5 (“This [last Rule 26] factor may combine all the previous factors into a final analysis of burdens versus benefits.” (citing Fed. R. Civ. P. 26 advisory committee’s notes)).

For more analysis and case law on proportionality see Proportionality Φ and Making It Easy To Play “e-Discovery: Small, Medium or Large?” in Your Own Group or Class, (e-Discovery Team, 11/26/17). Also see The Sedona Conference Commentary on Proportionality, May 2017.

Learning How to Estimate Document Review Costs

The best way to determine the total cost of a project is by projection from experience and analysis on a cost-per-file basis. General experience of review costs can be very helpful, but the gold standard comes from measurement of costs actually incurred in the same project, usually after several hours or days of work, depending on the size of the project. You calculate the costs incurred to date and then project forward on a cost-per-file basis. This is the core idea of the Six-Step document review protocol that this article begins to explain.

The actual project costs are the best possible metrics for estimation. Apparently that was never done in Oxbow, because the plaintiffs’ counsel’s projected document review cost estimates varied so much. A per-file cost analysis of the information in the Oxbow opinion shows that the parties missed a key metric. The costs projected ranged from an actual cost of $2.38 per file for the first 584,000 files, to a $1.17 per file estimate to review 214,000 additional files, to an estimate of $1.73 per file to review 82,000 more files, to an actual cost of $4.74 per file to review 12,074 files, to a final estimate of $1.22 per file to review the remaining 69,926 files. The actual costs ran far higher than the estimated costs, meaning the moving party cheated itself by failing to do the math.
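The underlying math is nothing more than division, which makes it easy to check for yourself. Here is a minimal sketch in Python, using the two cost-and-count pairings recited above from the Oxbow opinion (the outputs are rounded to the penny):

```python
def cost_per_file(total_cost: float, file_count: int) -> float:
    """The key review metric: dollars spent divided by files reviewed."""
    return total_cost / file_count

# $1.391 million spent to review and produce ~584,000 documents:
print(round(cost_per_file(1_391_000, 584_000), 2))  # 2.38 (actual cost)

# ~$142,000 estimated to review the 82,000 remaining Koch documents:
print(round(cost_per_file(142_000, 82_000), 2))     # 1.73 (estimate)
```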

Here is how I explained the estimation process in Predictive Coding and the Proportionality Doctrine at pages 60-61:

Under the Bottom-Line-Driven Proportional approach, after analyzing the case merits and determining the maximum proportional expense, the responding party makes a good faith estimate of the likely maximum number of documents that can be reviewed within that budget. The document count represents the number of documents that you estimate can be reviewed for final decisions of relevance, confidentiality, privilege, and other issues and still remain within budget.
A few examples may help clarify how this method works. Assume a case where you determine a proportional cost of production to be $50,000, and estimate, based on sampling and other hard facts, that it will cost you $1.25 per file for both the automated and manual review before production of the ESI at issue … Then you can review no more than 40,000 documents and stay within budget. It is that simple. No higher math is required.
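The scoping step in that example really is a one-line calculation. A minimal sketch, using the numbers assumed in the quoted hypothetical:

```python
# Bottom-line-driven scoping: a proportional budget divided by the
# estimated cost per file caps the number of documents to review.
budget = 50_000.00       # maximum proportional spend for the production
per_file_cost = 1.25     # estimated from sampling and other hard facts
max_docs = int(budget // per_file_cost)
print(max_docs)          # 40000 -- review no more than this to stay in budget
```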

Estimation for bottom-line-driven review is essentially a method for marshaling evidence to support an undue burden argument under Rule 26(b)(1). Let’s run through it again in greater detail and make a simple formula to illustrate the process.

First, estimate the total number of documents remaining to be reviewed after culling by your tested keywords and other searches (hereinafter “T”). This is the most difficult step, but it is something most attorney experts and vendors are well qualified to help you with. Essentially, “T” represents the number of documents left unreviewed for Step Five, Final Review. These are the documents found in Steps One and Two, ECA Multimodal Search and Testing. These steps, along with the estimate calculation, usually repeat several times to cull in the documents that are most likely relevant to the claims and defenses. The T documents – Total Documents Left for Review – are the documents in the final revised keyword search folders and in the concept and similarity search folders. The goal is to obtain a relevance precision in these folders greater than 50%, preferably at least 75%.

To begin a hypothetical example, assume that the total document count in the folders set up for final review is 5,000 documents. T=5,000. Next, count how many relevant and highly relevant files have already been located (hereinafter “R”). Assume for our example that 1,000 relevant and highly relevant documents have been found. R=1,000.

Next, look up the total attorney fees already incurred in the matter to date for the actual document search and review work by attorneys and paralegals (hereinafter collectively “F”). Include the vendor charges related to the review in this total, but exclude forensics and collection fees. To make this easier, make sure that the time descriptions your legal team inputs are clear about which fees are for review. Always remember that you may someday be required to provide an affidavit or testimony to support this cost estimate in a motion for protective order. For our example, assume that a total of $1,500 in costs and fees has already been incurred for document search and review work alone. F=$1,500. F divided by R gives the cost per file. Here it is $1.50 per file (F/R).

Finally, multiply the cost per file (F/R) by the number of documents still remaining to be reviewed, T. In other words, T * (F/R). Here that is 5,000 (T) times the $1.50 cost per file (F/R), which equals $7,500. You can then disclose this calculation to opposing counsel to help establish the reasonableness (proportionality) of your plan. This is Step Four – Work Product Disclosure. Note that you are estimating a total spend for this review project of $9,000: the $1,500 already spent, plus an estimated additional $7,500 to complete the project.
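Putting the whole walkthrough together, here is a minimal sketch of the formula in Python. The function name is mine; the variables mirror the article’s T, R and F, and the numbers are the hypothetical ones above:

```python
def project_review_spend(fees_to_date: float, relevant_found: int,
                         docs_remaining: int) -> tuple[float, float, float]:
    """Bottom-line-driven estimate: T * (F/R).

    fees_to_date   -- F: search and review fees and costs incurred to date
    relevant_found -- R: relevant and highly relevant documents located
    docs_remaining -- T: documents left in the final review folders
    """
    per_file = fees_to_date / relevant_found   # F / R, the cost per file
    to_complete = docs_remaining * per_file    # T * (F / R)
    return per_file, to_complete, fees_to_date + to_complete

# Hypothetical from the text: F=$1,500, R=1,000, T=5,000
print(project_review_spend(1_500, 1_000, 5_000))  # (1.5, 7500.0, 9000.0)
```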

There are many ways to calculate the probable fees to complete a document review project. This simple formula method has the advantage of being based on actual experience and costs incurred. It is also simple and easy to understand compared to most other methods. The method could be criticized for inflating expected costs, on the theory that the work initially performed to find relevant documents is usually slower and more expensive than the concluding work to review the tested search folders. This is generally true, but it is countered by three facts: (1) many of the initial relevant documents found in ECA (Step One) were “low hanging fruit,” easier to locate than what remains; (2) the precision rate of the documents remaining to be reviewed after culling – T – will be much higher than in the document folders previously reviewed (the higher the precision rate, the slower the rate of review, because it takes longer to code a relevant document than an irrelevant one); and (3) additional time is necessarily incurred in the remaining review for redaction, privilege analysis, and quality control efforts not performed in the review to date.

To be concluded … In the conclusion of this article I will review the Six Steps and complete the discussion of the related concepts.



Dumb and Dumber Strike Again: New case out of California provides a timely lesson in legal search stupidity

February 18, 2018

An interesting, albeit dumb, case out of California provides some good cautionary instruction for anybody doing discovery. Youngevity Int’l Corp. v. Smith, 2017 U.S. Dist. LEXIS 210386 (S.D. Cal. Dec. 21, 2017). Youngevity is essentially an unfair competition dispute that arose when some multi-level nutritional marketing sales types left one company to form their own. Yup, multi-level nutritional sales; the case has sleaze written all over it. The actions of the Plaintiff in this case are, in my opinion, and that of the judge, especially embarrassing. In fact, both sides remind me of the classic movie Dumb and Dumber, which has a line in it favored by all students of statistics: So you’re telling me there’s a chance.

One in a million is about the chance that the Plaintiff’s discovery plan in Youngevity had of succeeding in federal court in front of the smart United States Magistrate Judge assigned to the case, Jill L. Burkhardt.

Dumb and Dumber

So what did the Plaintiff do that is so dumb? So timely? They confused documents that have a “hit” in them with documents that are relevant. As if having a keyword in a document could somehow magically make it relevant under the rules, or responsive to a request for relevant information under the rules. Not only that, and here is the dumber part, the Plaintiff produced 4.2 Million pages of such “hit” documents to defendant without reviewing them. They produced the documents without review, but tried to protect their privilege by designating them all “Attorney Eyes Only.” Dumb and dumber. But, in fairness to Plaintiff’s counsel (fairness not being something I am especially known for, I know), and to the eight attorneys of record for the plaintiffs, this is something that clients sometimes make their attorneys do as a “cost saving” maneuver.

Fellow Blogger Comment

As my bow-tied friend, Josh Gilliland, put it in his blog on this case:

Just because ESI is a hit to a search term, does NOT mean that data is responsive to any discovery request. Moreover, designating all ESI as Attorney-Eyes Only should not be done as a tactic to avoid conducting document review. …

Responding to discovery requests should not ignore requests for production. Parties often get lost in search terms, focusing on document review as process independent of the claims of the lawsuit. Lawyers should resist that quagmire and focus document review to respond to the requests for production. Developing searches is the first step in responding, however, a search strategy should not simply be keywords. Searches should be built with the requests, including date ranges, messages sent between individuals, and other methods to focus on the merits of the case, not document review for the sake of document review.

The occurrence of a keyword term in a paper document, a computer file, or any other ESI does not make the file relevant. An ESI file is relevant depending on the overall content of the file, not just one word.

Procedural Background

Here is Judge Jill L. Burkhardt’s concise explanation of the factual and procedural background of the keyword dispute (citations to the record omitted):

On May 9, 2017, Wakaya emailed Youngevity to discuss the use of search terms to identify and collect potentially responsive electronically-stored information (ESI) from the substantial amount of ESI both parties possessed. Wakaya proposed a three-step process by which: “(i) each side proposes a list of search terms for their own documents; (ii) each side offers any supplemental terms to be added to the other side’s proposed list; and (iii) each side may review the total number of results generated by each term in the supplemented lists (i.e., a ‘hit list’ from our third-party vendors) and request that the other side omit any terms appearing to generate a disproportionate number of results.” On May 10, 2017, while providing a date to exchange search terms, Youngevity stated that the “use of key words as search aids may not be used to justify non-disclosure of responsive information.” On May 15, 2017, Youngevity stated that “[w]e are amenable to the three step process described in your May 9 e-mail….” Later that day, the parties exchanged lists of proposed search terms to be run across their own ESI. On May 17, 2017, the parties exchanged lists of additional search terms that each side proposed be run across the opposing party’s ESI.

The plaintiffs never produced their hit list as promised and as demanded by Defendants several times after the agreement was reached. Instead, they produced all documents on the hit list, some 4.2 Million pages, and labeled them all AEO. The defendants primarily objected to the plaintiffs’ labeling all documents Attorneys Eyes Only instead of Confidential. The complaint about the production defect itself, producing all documents with hits instead of only the documents that were responsive, seems like an afterthought.

Keyword Search Was New in the 1980s

The focus in this case on keyword search alone, instead of using a Hybrid Multimodal approach, is how a majority of ill-informed lawyers still handle legal search today. I think keywords are an acceptable way to start a conversation, and begin a review, but to use keyword search alone hearkens back to the dark ages of document review, the mid-nineteen eighties. That is when lawyers first started using keyword search. Remember the Blair and Maron study of the San Francisco subway litigation document search? The study was completed in 1985. It found that when the lawyers and paralegals thought they had found over 75% of the relevant documents using keyword search, they had in fact found only 20%. Blair, David C., & Maron, M. E., An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, Communications of the ACM, Volume 28, Issue 3 (March 1985).

The Blair and Maron study is thirty-three years old, and yet today we still have lawyers using keyword search alone, as if it were the latest and greatest. The technology gap in the law is incredibly large. This is especially true when it comes to document review, where the latest AI-enhanced technologies are truly great. WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0. Wake up, lawyers. We have come a long way since the 1980s and keyword search.

Judge Burkhardt’s Ruling

Back to the Dumb and Dumber story in Youngevity as told to us by the smartest person in that room, by far, Judge Burkhardt:

The Court suggested that a technology-assisted review (TAR) may be the most efficient way to resolve the myriad disputes surrounding Youngevity’s productions.

Note this suggestion seems to have been ignored by both sides. Are you surprised? At least the judge tried. Now back to the rest of the Dumb and Dumber story:

… designated as AEO. Youngevity does not claim that the documents are all properly designated AEO, but asserts that this mass designation was the only way to timely meet its production obligations when it produced documents on July 21, 2017 and August 22, 2017. It offers no explanation as to why it has not used the intervening five months to conduct a review and properly designate the documents, except to say, “Youngevity believes that the parties reached an agreement on de-designation of Youngevity’s production which will occur upon the resolution of the matters underlying this briefing.” Why that de-designation is being held up while this motion is pending is not evident.

Oh yeah. Try to BS the judge. Another dumb move. Back to the story:

Wakaya argues that Youngevity failed to review any documents prior to production and instead provided Wakaya with a “document dump” containing masses of irrelevant documents, including privileged information, and missing “critical” documents. Youngevity’s productions contain documents such as Business Wire news emails, emails reminding employees to clean out the office refrigerator, EBay transaction emails, UPS tracking emails, emails from StubHub, and employee file and benefits information. Youngevity argues that it simply provided the documents Wakaya requested in the manner that Wakaya instructed. …

Wakaya demanded that Youngevity review its production and remove irrelevant and non-responsive documents.

The poor judge is now being bothered by motions and phone calls as the many lawyers for both sides bill like crazy and ask for her help. Judge Burkhardt again does the smart thing and pushes the attorneys to use TAR and, since it is obvious they are clueless, to hire vendors to help them do it.

[T]he Court suggested that conducting a TAR of Youngevity’s productions might be an efficient way to resolve the issues. On October 5, 2017, the parties participated in another informal discovery conference with the Court because they were unable to resolve their disputes relating to the TAR process and the payment of costs associated with TAR. The Court suggested that counsel meet and confer again with both parties’ discovery vendors participating. Wakaya states that on October 6, 2017, the parties participated in a joint call with their discovery vendors to discuss the TAR process. The parties could not agree on who would bear the costs of the TAR process. Youngevity states that it offered to pay half the costs associated with the TAR process, but Wakaya would not agree that TAR alone would result in a document production that satisfied Youngevity’s discovery obligations. Wakaya argued that it should not have to bear the costs of fixing Youngevity’s improper productions. On October 9, 2017, the parties left a joint voicemail with the Court stating that they had reached a partial agreement to conduct a TAR of Youngevity’s production, but could not resolve the issue of which party would bear the TAR costs. In response to the parties’ joint voicemail, the Court issued a briefing schedule for the instant motion.

Makes you want to tear your hair out just to read it, doesn’t it? Yet the judge has to deal with junk like this every day. Patience of a saint.

More from Judge Burkhardt, who does a very good survey of the relevant law, starting at page four of the opinion (I suggest you read it). Skipping to the Analysis segment of the opinion at pages five through nine, here are the highlights, starting with a zinger against all counsel concerning the Rule 26(g) arguments:

Wakaya fails to establish that Youngevity violated Rule 26(g). Wakaya does not specifically claim that certificates signed by Youngevity or its counsel violate Rule 26(g). Neither party, despite filing over 1,600 pages of briefing and exhibits for this motion, provided the Court with Youngevity’s written discovery responses and certification. The Court declines to find that Youngevity improperly certified its discovery responses when the record before it does not indicate the content of Youngevity’s written responses, its certification, or a declaration stating that Youngevity in fact certified its responses. See Cherrington Asia Ltd. v. A & L Underground, Inc., 263 F.R.D. 653, 658 (D. Kan. 2010) (declining to impose sanctions under Rule 26(g) when plaintiffs do not specifically claim that certificates signed by defendant’s counsel violated the provisions of Rule 26(g)(1)). Accordingly, Wakaya is not entitled to relief under Rule 26(g).

Wow! Over 1,600 pages of memos and nobody provided the Rule 26(g) certification to the court that plaintiffs’ counsel allegedly violated. Back to the Dumb and Dumber story as told to us by Judge Burkhardt:

Besides establishing that Youngevity’s production exceeded Wakaya’s requests, the record indicates that Youngevity did not produce documents following the protocol to which the parties agreed.  … Youngevity failed to produce its hit list to Wakaya, and instead produced every document that hit upon any proposed search term. Had Youngevity provided its hit list to Wakaya as agreed and repeatedly requested, Wakaya might have proposed a modification to the search terms that generated disproportionate results, thus potentially substantially reducing the number of documents requiring further review and ultimate production. …

Second, Youngevity conflates a hit on the parties’ proposed search terms with responsiveness.[11] The two are not synonymous. Youngevity admits that it has an obligation to produce responsive documents. Youngevity argues that because each document hit on a search term, “the documents Youngevity produced are necessarily responsive to Wakaya’s Requests.” Search terms are an important tool parties may use to identify potentially responsive documents in cases involving substantial amounts of ESI. Search terms do not, however, replace a party’s requests for production. See In re Lithium Ion Batteries Antitrust Litig., No. 13MD02420 YGR (DMR), 2015 WL 833681, at *3 (N.D. Cal. Feb. 24, 2015) (noting that “a problem with keywords ‘is that they often are over inclusive, that is, they find responsive documents but also large numbers of irrelevant documents’”) (quoting Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012)). UPS tracking emails and notices that employees must clean out the refrigerator are not responsive to Wakaya’s requests for production solely because they hit on a search term the parties agreed upon.

It was nice to see my Da Silva Moore case quoted on keyword defects, not just for its approval of predictive coding. The quote refers to what is now known as the lack of PRECISION in untested keyword search. One of the main advantages of active machine learning is to improve precision and keep lawyers from wasting their time reading messages about refrigerator cleaning.
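For readers who want the metric pinned down, here is a minimal sketch of the two standard search-quality measures. The counts are hypothetical, chosen only for illustration; the 20% recall figure echoes the Blair and Maron finding discussed above:

```python
def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Share of the retrieved documents that are actually relevant."""
    return relevant_retrieved / total_retrieved

def recall(relevant_retrieved: int, total_relevant: int) -> float:
    """Share of all the relevant documents that were actually retrieved."""
    return relevant_retrieved / total_relevant

# Untested keywords tend to retrieve mostly junk: low precision.
print(precision(1_000, 10_000))  # 0.1 -- 90% of the hits are irrelevant
# Blair and Maron: lawyers believed recall exceeded 75%; it was ~20%.
print(recall(2_000, 10_000))     # 0.2
```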

Now Judge Burkhardt is ready to rule:

The Court is persuaded that running proposed search terms across Youngevity’s ESI, refusing to honor a negotiated agreement to provide a hit list which Wakaya was to use to narrow its requested search terms, and then producing all documents hit upon without reviewing a single document prior to production or engaging in any other quality control measures, does not satisfy Youngevity’s discovery obligations. Further, as is discussed below, mass designation of every document in both productions as AEO clearly violates the Stipulated Protective Order in this case. Youngevity may not frustrate the spirit of the discovery rules by producing a flood of documents it never reviewed, designate all the documents as AEO without regard to whether they meet the standard for such a designation, and thus bury responsive documents among millions of produced pages. See Queensridge Towers, LLC v. Allianz Glob. Risks US Ins. Co., No. 2:13-CV-00197-JCM, 2014 WL 496952, at *6-7 (D. Nev. Feb. 4, 2014) (ordering plaintiff to supplement its discovery responses by specifying which documents are responsive to each of defendant’s discovery requests when plaintiff responded to requests for production and interrogatories by stating that the answers are somewhere among the millions of pages produced). Youngevity’s productions were such a mystery, even to itself, that it not only designated the entirety of both productions as AEO, but notified Wakaya that the productions might contain privileged documents. Accordingly, Wakaya’s request to compel proper productions is granted, as outlined below. See infra Section IV.

Judge Jill Burkhardt went on to award fees and costs, taxed against the plaintiffs.

Conclusion

A document is never responsive, never relevant, just because it has a keyword in it. As Judge Burkhardt put it, that conflates a hit on the parties’ proposed search terms with responsiveness. In some cases, but not this one, a request for production may explicitly demand production of all documents that contain certain keywords. If such a request is made, then you should object. We are seeing more and more improper requests like this. The rules do not allow a request to produce documents with certain keywords regardless of the relevance of the documents. (The reasonably calculated phrase was killed in 2015 and is no longer good law.) The rules and case law do not define relevance in terms of keywords. They define relevance in terms of proportional probative value to the claims and defenses raised. Again, as Judge Burkhardt put it, search terms do not … replace a party’s requests for production.

I agree with Josh Gilliland, who said parties often get lost in search terms, focusing on document review as a process independent of the claims of the lawsuit. The first step in my TAR process is ESI Communications, or Talk. This includes speaking with the requesting party to clarify the documents sought. This should mean discussion of the claims of the lawsuit and what the requesting party hopes to find. Keywords are just a secondary byproduct of this kind of discussion. Keywords are not an end in themselves. Avoid that quagmire, as Josh says, and focus on clarifying the requests for production. Focus on Rule 26(b)(1) relevance and proportionality.

Another lesson: do not get stuck with just using keywords. We have come up with many other search tools since the 1980s. Use them. Use all of them. Go Multimodal. In a big complex case like Youngevity Int’l Corp. v. Smith, be sure to go Hybrid too. Be sure to use the most powerful search tool of all, predictive coding. See the TAR Course for detailed instruction on Hybrid Multimodal. The robots will eat your keywords for lunch.

The AI power of active machine learning was the right solution available to the plaintiffs all along. Judge Burkhardt tried to tell them. Plaintiffs did not have to resort to dangerous production without review just to avoid paying their lawyers to read about their refrigerator cleanings. Let the AI read about all of that. It reads at near the speed of light and never forgets. If you have a good AI trainer, which is my specialty, the AI will understand what is relevant and find what you are looking for.


TAR Course Expands Again: Standardized Best Practice for Technology Assisted Review

February 11, 2018

The TAR Course has a new class, the Seventeenth Class: Another “Player’s View” of the Workflow. Several other parts of the Course have been updated and edited. It now has Eighteen Classes (listed at end). The TAR Course is free and follows the Open Source tradition. We freely disclose the method for electronic document review that uses the latest technology tools for search and quality controls. These technologies and methods empower attorneys to find the evidence needed for all text-based investigations. The TAR Course shares the state of the art for using AI to enhance electronic document review.

The key is to know how to use the document review search tools that are now available to find the targeted information. We have been working on various methods of use since our case before Judge Andrew Peck in Da Silva Moore in 2012. After we helped get the first judicial approval of predictive coding in Da Silva, we began a series of several hundred document reviews, both in legal practice and scientific experiments. We have now refined our method many times to attain optimal efficiency and effectiveness. We call our latest method Hybrid Multimodal IST Predictive Coding 4.0.

The Hybrid Multimodal method taught by TARcourse.com combines law and technology. Successful completion of the TAR Course requires knowledge of both fields. In the technology field, active machine learning is the most important technology to understand, especially the intricacies of training selection, such as Intelligently Spaced Training (“IST”). In the legal field, the proportionality doctrine is key to the pragmatic application of the method taught at the TAR Course. We give away the information on the methods; we open-source it through this publication.

All we can transmit by online teaching is information, and a small bit of knowledge. Knowing the Information in the TAR Course is a necessary prerequisite for real Knowledge of Hybrid Multimodal IST Predictive Coding 4.0. Knowledge, as opposed to Information, is taught the same way as advanced trial practice: by second-chairing a number of trials. This kind of instruction is the one with real value, the one that completes a document review project at the same time it completes the training. We charge for document review and throw in the training. Information on the latest methods of document review is inherently free, but Knowledge of how to use these methods is a pay-to-learn process.

The Open Sourced Predictive Coding 4.0 method is applied to particular applications and search projects. There are always some customizations and modifications of the default standards to meet the project requirements. All variations are documented and can be fully explained and justified. This is a process where the clients learn by doing and following along with Losey’s work.

What Losey has learned through a lifetime of teaching and studying Law and Technology is that real Knowledge can never be gained by reading or listening to presentations. Knowledge can only be gained by working with other people in real-time (or near-time), in this case, to carry out multiple electronic document reviews. The transmission of knowledge comes from the Q&A ESI Communications process. It comes from doing. When we lead a project, we help students to go from mere Information about the methods to real Knowledge of how it works. For instance, we do not just make the Stop decision, we also explain the decision. We share our work-product.

Knowledge comes from observing the application of the legal search methods in a variety of different review projects. Eventually some Wisdom may arise, especially as you recover from errors. For background on this triad, see Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom” (2017). Once Wisdom arises some of the sayings in the TAR Course may start to make sense, such as our favorite “Relevant Is Irrelevant.” Until this koan is understood, the legal doctrine of Proportionality can be an overly complex weave.

The TAR Course is now composed of eighteen classes:

  1. First Class: Background and History of Predictive Coding
  2. Second Class: Introduction to the Course
  3. Third Class:  TREC Total Recall Track, 2015 and 2016
  4. Fourth Class: Introduction to the Nine Insights from TREC Research Concerning the Use of Predictive Coding in Legal Document Review
  5. Fifth Class: 1st of the Nine Insights – Active Machine Learning
  6. Sixth Class: 2nd Insight – Balanced Hybrid and Intelligently Spaced Training (IST)
  7. Seventh Class: 3rd and 4th Insights – Concept and Similarity Searches
  8. Eighth Class: 5th and 6th Insights – Keyword and Linear Review
  9. Ninth Class: 7th, 8th and 9th Insights – SME, Method, Software; the Three Pillars of Quality Control
  10. Tenth Class: Introduction to the Eight-Step Work Flow
  11. Eleventh Class: Step One – ESI Communications
  12. Twelfth Class: Step Two – Multimodal ECA
  13. Thirteenth Class: Step Three – Random Prevalence
  14. Fourteenth Class: Steps Four, Five and Six – Iterative Machine Training
  15. Fifteenth Class: Step Seven – ZEN Quality Assurance Tests (Zero Error Numerics)
  16. Sixteenth Class: Step Eight – Phased Production
  17. Seventeenth Class: Another “Player’s View” of the Workflow (class added 2018)
  18. Eighteenth Class: Conclusion

With a lot of hard work you can complete this online training program in a long weekend, but most people take a few weeks. After that, this course can serve as a solid reference to consult during complex document review projects. It can also serve as a launchpad for real Knowledge and eventually some Wisdom into electronic document review. TARcourse.com is designed to provide you with the Information needed to start this path to AI enhanced evidence detection and production.

 

