Dumb and Dumber Strike Again: New case out of California provides a timely lesson in legal search stupidity

February 18, 2018

An interesting, albeit dumb, case out of California provides some good cautionary instruction for anybody doing discovery. Youngevity Int’l Corp. v. Smith, 2017 U.S. Dist. LEXIS 210386 (S.D. Cal. Dec. 21, 2017). Youngevity is essentially an unfair competition dispute that arose when some multi-level nutritional marketing sales types left one company to form their own. Yup, multi-level nutritional sales; the case has sleaze written all over it. The actions of the Plaintiff in this case are, in my opinion, and apparently in the judge’s as well, especially embarrassing. In fact, both sides remind me of the classic movie Dumb and Dumber, which has a line favored by all students of statistics: “So you’re telling me there’s a chance.”

One in a million is about the chance that the Plaintiff’s discovery plan in Youngevity had of succeeding in federal court in front of the smart United States Magistrate Judge assigned to the case, Jill L. Burkhardt.

Dumb and Dumber

So what did the Plaintiff do that is so dumb? So timely? They confused documents that have a “hit” in them with documents that are relevant. As if having a keyword in a document could somehow magically make it relevant, or responsive to a request for production, under the rules. Not only that, and here is the dumber part, the Plaintiff produced 4.2 million pages of such “hit” documents to the defendant without reviewing them. They produced the documents without review, but tried to protect their privilege by designating them all “Attorneys Eyes Only.” Dumb and dumber. But, in fairness to Plaintiff’s counsel, and fairness is not something I am especially known for, I know, but still, in fairness to the eight attorneys of record for the plaintiffs, this is something that clients sometimes make their attorneys do as a “cost saving” maneuver.

Fellow Blogger Comment

As my bow-tied friend, Josh Gilliland, put it in his blog on this case:

Just because ESI is a hit to a search term, does NOT mean that data is responsive to any discovery request. Moreover, designating all ESI as Attorney-Eyes Only should not be done as a tactic to avoid conducting document review. …

Responding to discovery requests should not ignore requests for production. Parties often get lost in search terms, focusing on document review as process independent of the claims of the lawsuit. Lawyers should resist that quagmire and focus document review to respond to the requests for production. Developing searches is the first step in responding, however, a search strategy should not simply be keywords. Searches should be built with the requests, including date ranges, messages sent between individuals, and other methods to focus on the merits of the case, not document review for the sake of document review.

The occurrence of a keyword in a paper document, a computer file, or any other ESI does not make the file relevant. An ESI file is relevant or not depending on the overall content of the file, not just one word.

Procedural Background

Here is Judge Jill L. Burkhardt’s concise explanation of the factual and procedural background of the keyword dispute (citations to the record omitted):

On May 9, 2017, Wakaya emailed Youngevity to discuss the use of search terms to identify and collect potentially responsive electronically-stored information (ESI) from the substantial amount of ESI both parties possessed. Wakaya proposed a three-step process by which: “(i) each side proposes a list of search terms for their own documents; (ii) each side offers any supplemental terms to be added to the other side’s proposed list; and (iii) each side may review the total number of results generated by each term in the supplemented lists (i.e., a ‘hit list’ from our third-party vendors) and request that the other side omit any terms appearing to generate a disproportionate number of results.” On May 10, 2017, while providing a date to exchange search terms, Youngevity stated that the “use of key words as search aids may not be used to justify non-disclosure of responsive information.” On May 15, 2017, Youngevity stated that “[w]e are amenable to the three step process described in your May 9 e-mail….” Later that day, the parties exchanged lists of proposed search terms to be run across their own ESI. On May 17, 2017, the parties exchanged lists of additional search terms that each side proposed be run across the opposing party’s ESI.

The plaintiffs never produced their hit list, as promised and as demanded by Defendants several times after the agreement was reached. Instead, they produced all documents on the hit list, some 4.2 million pages, and labeled them all AEO. The defendants primarily objected to the plaintiffs’ labeling all of the documents Attorneys Eyes Only instead of Confidential. The complaint about the production defect itself, producing all documents with hits instead of all documents that were responsive, seems like an afterthought.
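For readers who have not negotiated one, a hit list is simply a per-term tally of how many documents each proposed search term returns, which is what lets the other side flag disproportionate terms before anything is produced. Here is a minimal sketch in Python; the documents and terms are invented for illustration and have nothing to do with the actual Youngevity term lists:

```python
from collections import Counter

# Hypothetical documents and proposed search terms; the real Youngevity term
# lists and ESI collections are, of course, not public.
documents = [
    "Please clean out the office refrigerator by Friday.",
    "UPS tracking update for your recent shipment.",
    "Draft compensation plan for former distributors joining Wakaya.",
    "Sales leads exported the week before the resignations.",
]
search_terms = ["refrigerator", "wakaya", "compensation", "sales"]

# Step (iii) of the agreed protocol: report how many documents each term hits,
# so the other side can ask that disproportionate terms be narrowed or dropped.
hit_list = Counter(
    {term: sum(term.lower() in doc.lower() for doc in documents) for term in search_terms}
)

for term, hits in hit_list.most_common():
    print(f"{term}: {hits} document(s)")
```

A report of that kind is what Youngevity agreed to provide in step (iii) and never delivered.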

Keyword Search Was New in the 1980s

The focus in this case on keyword search alone, instead of a Hybrid Multimodal approach, is how a majority of ill-informed lawyers still handle legal search today. I think keywords are an acceptable way to start a conversation, and begin a review, but to use keyword search alone hearkens back to the dark ages of document review, the mid-nineteen eighties. That is when lawyers first started using keyword search. Remember the Blair & Maron study of the San Francisco subway litigation document search? The study was completed in 1985. It found that when the lawyers and paralegals thought they had found over 75% of the relevant documents using keyword search, they had in fact only found 20%. Blair, David C., & Maron, M. E., An evaluation of retrieval effectiveness for a full-text document-retrieval system, Communications of the ACM, Volume 28, Issue 3 (March 1985).
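To put the Blair & Maron result in plain arithmetic: recall is the fraction of the truly relevant documents that a search actually finds. A tiny sketch, with invented document counts (the study reported only the percentages):

```python
# Recall = relevant documents found / total relevant documents in the collection.
# The counts below are invented; Blair & Maron reported only the percentages.
total_relevant = 1000   # hypothetical number of truly relevant documents
believed_found = 750    # the lawyers believed they had retrieved over 75%
actually_found = 200    # the study measured roughly 20% actual recall

print(f"Believed recall: {believed_found / total_relevant:.0%}")  # 75%
print(f"Measured recall: {actually_found / total_relevant:.0%}")  # 20%
```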

The Blair & Maron study is thirty-three years old, and yet today we still have lawyers using keyword search alone, as if it were the latest and greatest. The technology gap in the law is incredibly large. This is especially true when it comes to document review, where the latest AI-enhanced technologies are truly great. WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0. Wake up, lawyers. We have come a long way since the 1980s and keyword search.

Judge Burkhardt’s Ruling

Back to the Dumb and Dumber story in Youngevity as told to us by the smartest person in that room, by far, Judge Burkhardt:

The Court suggested that a technology-assisted review (TAR) may be the most efficient way to resolve the myriad disputes surrounding Youngevity’s productions.

Note this suggestion seems to have been ignored by both sides. Are you surprised? At least the judge tried. Now back to the rest of the Dumb and Dumber story:

… designated as AEO. Youngevity does not claim that the documents are all properly designated AEO, but asserts that this mass designation was the only way to timely meet its production obligations when it produced documents on July 21, 2017 and August 22, 2017. It offers no explanation as to why it has not used the intervening five months to conduct a review and properly designate the documents, except to say, “Youngevity believes that the parties reached an agreement on de-designation of Youngevity’s production which will occur upon the resolution of the matters underlying this briefing.” Why that de-designation is being held up while this motion is pending is not evident.

Oh yeah. Try to BS the judge. Another dumb move. Back to the story:

Wakaya argues that Youngevity failed to review any documents prior to production and instead provided Wakaya with a “document dump” containing masses of irrelevant documents, including privileged information, and missing “critical” documents. Youngevity’s productions contain documents such as Business Wire news emails, emails reminding employees to clean out the office
refrigerator, EBay transaction emails, UPS tracking emails, emails from StubHub, and employee file and benefits information. Youngevity argues that it simply provided the documents Wakaya requested in the manner that Wakaya instructed.  …

Wakaya demanded that Youngevity review its production and remove irrelevant and non-responsive documents.

The poor judge is now being bothered by motions and phone calls as the many lawyers for both sides bill like crazy and ask for her help. Judge Burkhardt again does the smart thing and pushes the attorneys to use TAR and, since it is obvious they are clueless, to hire vendors to help them do it.

[T]he Court suggested that conducting a TAR of Youngevity’s productions might be an efficient way to resolve the issues. On October 5, 2017, the parties participated in another informal discovery conference with the Court because they were unable to resolve their disputes relating to the TAR process and the payment of costs associated with TAR. The Court suggested that counsel meet and confer again with both parties’ discovery vendors participating. Wakaya states that on October 6, 2017, the parties participated in a joint call with their discovery vendors to discuss the TAR process. The parties could not agree on who would bear the costs of the TAR process. Youngevity states that it offered to pay half the costs associated with the TAR process, but Wakaya would not agree that TAR alone would result in a document production that satisfied Youngevity’s discovery obligations. Wakaya argued that it should not have to bear the costs of fixing Youngevity’s improper productions. On October 9, 2017, the parties left a joint voicemail with the Court stating that they had reached a partial agreement to conduct a TAR of Youngevity’s production, but could not resolve the issue of which party would bear the TAR costs. In response to the parties’ joint voicemail, the Court issued a briefing schedule for the instant motion.

Makes you want to tear your hair out just to read it, doesn’t it? Yet the judge has to deal with junk like this every day. Patience of a saint.

More from Judge Burkhardt, who does a very good survey of the relevant law, starting at page four of the opinion (I suggest you read it). Skipping to the Analysis segment of the opinion at pages five through nine, here are the highlights, starting with a zinger against all counsel concerning the Rule 26(g) arguments:

Wakaya fails to establish that Youngevity violated Rule 26(g). Wakaya does not specifically claim that certificates signed by Youngevity or its counsel violate Rule 26(g). Neither party, despite filing over 1,600 pages of briefing and exhibits for this motion, provided the Court with Youngevity’s written discovery responses and certification. The Court declines to find that Youngevity improperly certified its discovery responses when the record before it does not indicate the content of Youngevity’s written responses, its certification, or a declaration stating that Youngevity in fact certified its responses. See Cherrington Asia Ltd. v. A & L Underground, Inc., 263 F.R.D. 653, 658 (D. Kan. 2010) (declining to impose sanctions under Rule 26(g) when plaintiffs do not specifically claim that certificates signed by defendant’s counsel violated the provisions of Rule 26(g)(1)). Accordingly, Wakaya is not entitled to relief under Rule 26(g).

Wow! Over 1,600 pages of memos and nobody provided the Rule 26(g) certification to the court that plaintiffs’ counsel allegedly violated. Back to the Dumb and Dumber story as told to us by Judge Burkhardt:

Besides establishing that Youngevity’s production exceeded Wakaya’s requests, the record indicates that Youngevity did not produce documents following the protocol to which the parties agreed.  … Youngevity failed to produce its hit list to Wakaya, and instead produced every document that hit upon any proposed search term. Had Youngevity provided its hit list to Wakaya as agreed and repeatedly requested, Wakaya might have proposed a modification to the search terms that generated disproportionate results, thus potentially substantially reducing the number of documents requiring further review and ultimate production. …

Second, Youngevity conflates a hit on the parties’ proposed search terms with responsiveness.[11] The two are not synonymous. Youngevity admits that it has an obligation to produce responsive documents. Youngevity argues that because each document hit on a search term, “the documents Youngevity produced are necessarily responsive to Wakaya’s Requests.” Search terms are an important tool parties may use to identify potentially responsive documents in cases involving substantial amounts of ESI. Search terms do not, however, replace a party’s requests for production. See In re Lithium Ion Batteries Antitrust Litig., No. 13MD02420 YGR (DMR), 2015 WL 833681, at *3 (N.D. Cal. Feb. 24, 2015) (noting that “a problem with keywords ‘is that they often are over inclusive, that is, they find responsive documents but also large numbers of irrelevant documents’”) (quoting Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012)). UPS tracking emails and notices that employees must clean out the refrigerator are not responsive to Wakaya’s requests for production solely because they hit on a search term the parties agreed upon.

It was nice to see my Da Silva Moore case quoted on keyword defects, not just for its approval of predictive coding. The quote refers to what is now known as the lack of PRECISION in untested keyword search. One of the main advantages of active machine learning is to improve precision and keep lawyers from wasting their time reading messages about refrigerator cleaning.
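Precision is the mirror-image measure to recall: the fraction of what you retrieved that is actually relevant. A toy calculation, with numbers invented purely to illustrate why an unreviewed keyword hit set can bury the responsive documents:

```python
# Precision = responsive documents retrieved / total documents retrieved.
# Numbers invented for illustration; they are not the Youngevity figures.
keyword_hits = 400_000        # everything that matched some agreed search term
truly_responsive = 40_000     # what review (human or TAR) would actually keep

precision = truly_responsive / keyword_hits
print(f"Precision of the raw keyword hit set: {precision:.0%}")  # 10%
# The other 90% is refrigerator reminders, UPS tracking emails, StubHub ads, etc.
```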

Now Judge Burkhardt is ready to rule:

The Court is persuaded that running proposed search terms across Youngevity’s ESI, refusing to honor a negotiated agreement to provide a hit list which Wakaya was to use to narrow its requested search terms, and then producing all documents hit upon without reviewing a single document prior to production or engaging in any other quality control measures, does not satisfy Youngevity’s discovery obligations. Further, as is discussed below, mass designation of every document in both productions as AEO clearly violates the Stipulated Protective Order in this case. Youngevity may not frustrate the spirit of the discovery rules by producing a flood of documents it never reviewed, designate all the documents as AEO without regard to whether they meet the standard for such a designation, and thus bury responsive documents among millions of produced pages. See Queensridge Towers, LLC v. Allianz Glob. Risks US Ins. Co., No. 2:13-CV-00197-JCM, 2014 WL 496952, at *6-7 (D. Nev. Feb. 4, 2014) (ordering plaintiff to supplement its discovery responses by specifying which documents are responsive to each of defendant’s discovery requests when plaintiff responded to requests for production and interrogatories by stating that the answers are somewhere among the millions of pages produced). Youngevity’s productions were such a mystery, even to itself, that it not only designated the entirety of both productions as AEO, but notified Wakaya that the productions might contain privileged documents. Accordingly, Wakaya’s request to compel proper productions is granted, as outlined below. See infra Section IV.

Judge Jill Burkhardt went on to award fees and costs, to be taxed against the plaintiffs.

Conclusion

A document is never responsive, never relevant, just because it has a keyword in it. As Judge Burkhardt put it, that conflates a hit on the parties’ proposed search terms with responsiveness. In some cases, but not this one, a request for production may explicitly demand production of all documents that contain certain keywords. If such a request is made, then you should object. We are seeing more and more improper requests like this. The rules do not allow a request to produce documents with certain keywords regardless of the relevance of the documents. (The “reasonably calculated” phrase was killed in 2015 and is no longer good law.) The rules and case law do not define relevance in terms of keywords. They define relevance in terms of proportional probative value to the claims and defenses raised. Again, as Judge Burkhardt put it, search terms “do not, however, replace a party’s requests for production.”

I agree with Josh Gilliland, who said parties often get lost in search terms, “focusing on document review as process independent of the claims of the lawsuit.” The first step in my TAR process is ESI Communications, or Talk. This includes speaking with the requesting party to clarify the documents sought. This should mean discussion of the claims of the lawsuit and what the requesting party hopes to find. Keywords are just a secondary byproduct of this kind of discussion. Keywords are not an end in themselves. Avoid that quagmire, as Josh says, and focus on clarifying the requests for production. Focus on Rule 26(b)(1) relevance and proportionality.

Another lesson: do not get stuck with just using keywords. We have come up with many other search tools since the 1980s. Use them. Use all of them. Go Multimodal. In a big, complex case like Youngevity Int’l Corp. v. Smith, be sure to go Hybrid too. Be sure to use the most powerful search tool of all, predictive coding. See the TAR Course for detailed instruction on Hybrid Multimodal. The robots will eat your keywords for lunch.

The AI power of active machine learning was the right solution, available to the plaintiffs all along. Judge Burkhardt tried to tell them. Plaintiffs did not have to resort to a dangerous production without review just to avoid paying their lawyers to read about their refrigerator cleanings. Let the AI read about all of that. It reads at near the speed of light and never forgets. If you have a good AI trainer, which is my specialty, the AI will understand what is relevant and find what you are looking for.
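For readers who want to see what “letting the AI read it” means mechanically, here is a heavily simplified sketch of the kind of active machine learning loop that underlies TAR tools. It uses scikit-learn with a basic uncertainty-sampling strategy; it is a generic illustration, not Predictive Coding 4.0 and not any particular vendor’s product, and the corpus, seed labels, and the fake labeling rule are all placeholders:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus; in a real project this would be the full ESI collection.
corpus = [
    "please clean out the office refrigerator by friday",
    "ups tracking update for your shipment",
    "draft compensation plan for recruiting former distributors",
    "customer sales list exported before the resignation",
    "stubhub order confirmation for two tickets",
    "meeting notes on soliciting former sales reps",
]
X = TfidfVectorizer().fit_transform(corpus)

# Seed set: an attorney codes a few documents as relevant (1) or not (0).
labels = {0: 0, 2: 1}  # doc 0 = refrigerator notice, doc 2 = recruiting plan

for _ in range(3):  # a few training rounds; real reviews iterate until results stabilize
    train_idx = sorted(labels)
    model = LogisticRegression().fit(X[train_idx], [labels[i] for i in train_idx])
    probs = model.predict_proba(X)[:, 1]  # predicted probability of relevance

    # Uncertainty sampling: have the attorney review the unlabeled document
    # the model is least sure about (probability closest to 0.5).
    unlabeled = [i for i in range(len(corpus)) if i not in labels]
    if not unlabeled:
        break
    pick = min(unlabeled, key=lambda i: abs(probs[i] - 0.5))

    # A human reviewer would supply this judgment; we fake it with a keyword rule.
    labels[pick] = int(any(w in corpus[pick] for w in ("distributor", "sales", "soliciting")))

print("Final relevance ranking (most likely relevant first):", np.argsort(-probs))
```

The point of the loop is that each round of attorney coding makes the next ranking better, so reviewers spend their time on probable evidence instead of refrigerator notices.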


TAR Course Expands Again: Standardized Best Practice for Technology Assisted Review

February 11, 2018

The TAR Course has a new class, the Seventeenth Class: Another “Player’s View” of the Workflow. Several other parts of the Course have been updated and edited. It now has Eighteen Classes (listed at end). The TAR Course is free and follows the Open Source tradition. We freely disclose the method for electronic document review that uses the latest technology tools for search and quality controls. These technologies and methods empower attorneys to find the evidence needed for all text-based investigations. The TAR Course shares the state of the art for using AI to enhance electronic document review.

The key is to know how to use the document review search tools that are now available to find the targeted information. We have been working on various methods of use since our case before Judge Andrew Peck in Da Silva Moore in 2012. After we helped get the first judicial approval of predictive coding in Da Silva, we began a series of several hundred document reviews, both in legal practice and scientific experiments. We have now refined our method many times to attain optimal efficiency and effectiveness. We call our latest method Hybrid Multimodal IST Predictive Coding 4.0.

The Hybrid Multimodal method taught by TARcourse.com combines law and technology. Successful completion of the TAR Course requires knowledge of both fields. In the technology field, active machine learning is the most important technology to understand, especially the intricacies of training selection, such as Intelligently Spaced Training (“IST”). In the legal field, the proportionality doctrine is key to the pragmatic application of the method taught in the TAR Course. We give away the information on the methods; we open-source it through this publication.

All we can transmit by online teaching is information, and a small bit of knowledge. Knowing the Information in the TAR Course is a necessary prerequisite for real knowledge of Hybrid Multimodal IST Predictive Coding 4.0. Knowledge, as opposed to Information, is taught the same way as advanced trial practice: by second-chairing a number of trials. This kind of instruction is the one with real value, the one that completes a document review project at the same time it completes the training. We charge for document review and throw in the training. Information on the latest methods of document review is inherently free, but Knowledge of how to use these methods is a pay-to-learn process.

The Open Sourced Predictive Coding 4.0 method is applied to particular applications and search projects. There are always some customizations and modifications to the default standards to meet the project requirements. All variations are documented and can be fully explained and justified. This is a process where the clients learn by doing and following along with Losey’s work.

What Losey has learned through a lifetime of teaching and studying Law and Technology is that real Knowledge can never be gained just by reading or listening to presentations. Knowledge can only be gained by working with other people in real-time (or near-time), in this case to carry out multiple electronic document reviews. The transmission of knowledge comes from the Q&A of the ESI Communications process. It comes from doing. When we lead a project, we help students go from mere Information about the methods to real Knowledge of how they work. For instance, we do not just make the Stop decision, we also explain the decision. We share our work product.

Knowledge comes from observing the application of the legal search methods in a variety of different review projects. Eventually some Wisdom may arise, especially as you recover from errors. For background on this triad, see Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom” (2017). Once Wisdom arises, some of the sayings in the TAR Course may start to make sense, such as our favorite, “Relevant Is Irrelevant.” Until this koan is understood, the legal doctrine of Proportionality can be an overly complex weave.

The TAR Course is now composed of eighteen classes:

  1. First Class: Background and History of Predictive Coding
  2. Second Class: Introduction to the Course
  3. Third Class:  TREC Total Recall Track, 2015 and 2016
  4. Fourth Class: Introduction to the Nine Insights from TREC Research Concerning the Use of Predictive Coding in Legal Document Review
  5. Fifth Class: 1st of the Nine Insights – Active Machine Learning
  6. Sixth Class: 2nd Insight – Balanced Hybrid and Intelligently Spaced Training (IST)
  7. Seventh Class: 3rd and 4th Insights – Concept and Similarity Searches
  8. Eighth Class: 5th and 6th Insights – Keyword and Linear Review
  9. Ninth Class: 7th, 8th and 9th Insights – SME, Method, Software; the Three Pillars of Quality Control
  10. Tenth Class: Introduction to the Eight-Step Work Flow
  11. Eleventh Class: Step One – ESI Communications
  12. Twelfth Class: Step Two – Multimodal ECA
  13. Thirteenth Class: Step Three – Random Prevalence
  14. Fourteenth Class: Steps Four, Five and Six – Iterative Machine Training
  15. Fifteenth Class: Step Seven – ZEN Quality Assurance Tests (Zero Error Numerics)
  16. Sixteenth Class: Step Eight – Phased Production
  17. Seventeenth Class: Another “Player’s View” of the Workflow (class added 2018)
  18. Eighteenth Class: Conclusion

With a lot of hard work you can complete this online training program in a long weekend, but most people take a few weeks. After that, this course can serve as a solid reference to consult during complex document review projects. It can also serve as a launchpad for real Knowledge and eventually some Wisdom into electronic document review. TARcourse.com is designed to provide you with the Information needed to start this path to AI enhanced evidence detection and production.

 


The SME Team Members in a Complex AI-Enhanced Document Review Project

January 14, 2018

See the full article on all members and activities of a complex document review team, which the video in this post supplements: The Key Players and Play of an e-Discovery Team in a Complex AI-Enhanced Document Review Project.

Also see the video on the four-step method: Iterated Four-Step Work Flow for Active Machine Training to Help Attorneys Locate Relevant Evidence.

 


