Elusion Random Sample Test Ordered Under Rule 26(g) in a Keyword Search Based Discovery Plan

August 26, 2018

There is a new case out of Chicago that advances the jurisprudence of my sub-specialty, Legal Search. City of Rockford v. Mallinckrodt ARD Inc., 2018 WL 3766673, Case 3:17-cv-50107 (N.D. Ill., Aug. 7, 2018). This discovery order was written by U.S. Magistrate Judge Iain Johnston who entitled it: “Order Establishing Production Protocol for Electronically Stored Information.” The opinion is both advanced and humorous, destined to be an oft-cited favorite for many. Thank you Judge Johnston.

In City of Rockford an Elusion random sample quality assurance test was required as part of the parties’ discovery plan to meet the reasonable efforts requirements of Rule 26(g). The random sample procedure proposed was found to impose only a proportional, reasonable burden under Rule 26(b)(1). What makes this holding particularly interesting is that an Elusion test is commonly employed in predictive coding projects, but here the parties had agreed to a keyword search based discovery plan. Also see: Tara Emory, PMP, Court Holds that Math Matters for eDiscovery Keyword Search, Urges Lawyers to Abandon their Fear of Technology (Driven, August 16, 2018) (“party using keywords was required to test the search effectiveness by sampling the set of documents that did not contain the keywords.”)

The Known Unknowns and Unknown Unknowns

Judge Johnston begins his order in City of Rockford with a famous quote by Donald Rumsfeld, a two-time Secretary of Defense.

“[A]s we know there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. . .”
Donald Rumsfeld



Anybody who does complex investigations is familiar with this problem. Indeed, you can argue this insight is fundamental to all of science and experimental method. Logan, David C. (March 1, 2009). “Known knowns, known unknowns, unknown unknowns and the propagation of scientific enquiry”, Journal of Experimental Botany 60 (3). pp. 712–4. [I have always wanted to quote a botany journal.]

How do you deal with the known unknowns and the unknown unknowns, the information that we don’t even know that we don’t know about? The deep, hidden information that is both obscure and rare. Information that is hard to retrieve and harder still to prove does not exist at all. Are you chasing something that might not exist? Something unknown because nonexistent? Such as an overlooked Highly Relevant document? (The stuff of nightmares!) Are you searching for nothing? Zero? If you find it, what does that mean? What can be known and what can never be known? Scientists, investigators and the Secretary of Defense alike all have to ponder these questions, and all want to use the best tools and best people possible to do so. See: Deconstructing Rumsfeld: Knowledge and Ignorance in the Age of Innovation (Inovo 5/114).

Seeking Knowledge of the Unknown Elusion Error Rate

These big questions, though interesting, are not why Judge Johnston started his opinion with the Rumsfeld quote. Instead, he used the quote to emphasize that new e-discovery methods, namely random sampling and statistical analysis, can empower lawyers to know what they never could before: a technical way to know the known unknowns. For instance, a way to know the number of relevant documents that will be missed and not produced: the documents that elude retrieval.

As the opinion and this blog will explain, you can do that, know that, by using an Elusion random sample of the null set. The statistical analysis of the sample transforms the unknown quantity into a known one (subject to statistical probabilities and ranges). It allows lawyers to know, at least within a range, the number of relevant documents that have not been found. This is a very useful quality assurance method that relies on objective measurements to demonstrate the success of your project, which here is information retrieval. This and other random sampling methods also allow for the calculation of Recall, meaning the percent of total relevant documents found: another math-based quality assurance tool in the field of information retrieval.
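
To make the math concrete, here is a minimal sketch of how an Elusion sample converts into an estimated elusion rate, an estimated count of False Negatives, and an estimated Recall. All of the numbers below are hypothetical, chosen for illustration only; they are not figures from City of Rockford.

```python
# Hypothetical example: estimating elusion and recall from a null-set sample.
# None of these numbers come from City of Rockford; they are illustrative only.

null_set_size = 470_000      # documents withheld as presumed irrelevant
produced_relevant = 30_000   # relevant documents found and produced (True Positives)
sample_size = 2_401          # random sample drawn from the null set
relevant_in_sample = 12      # False Negatives found when reviewing the sample

# Elusion rate: fraction of the null set that is actually relevant.
elusion_rate = relevant_in_sample / sample_size

# Point estimate of the total relevant documents missed (False Negatives).
estimated_false_negatives = elusion_rate * null_set_size

# Recall: share of all relevant documents that were actually retrieved.
estimated_recall = produced_relevant / (produced_relevant + estimated_false_negatives)

print(f"Elusion rate: {elusion_rate:.2%}")
print(f"Estimated False Negatives: {estimated_false_negatives:.0f}")
print(f"Estimated Recall: {estimated_recall:.1%}")
```

These are only point estimates; in practice the confidence interval around the elusion rate should be reported as well, and with low prevalence that interval can be wide.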

One of the main points Judge Johnston makes in his order is that lawyers should embrace this kind of technical knowledge, not shy away from it. As Tara Emory said in her article, Court Holds that Math Matters for eDiscovery Keyword Search:

A producing party must determine that its search process was reasonable. In many cases, the best way to do this is with objective metrics. Producing parties often put significant effort into brainstorming keywords, interviewing witnesses to determine additional terms, negotiating terms with the other party, and testing the documents containing their keywords to eliminate false positives. However, these efforts often still fail to identify documents if important keywords were missed, and sampling the null set is a simple, reasonable way to test whether additional keywords are needed. …

It is important to overcome the fear of technology and its related jargon, which can help counsel demonstrate the reasonableness of their search and production process. As Judge Johnston explains, sampling the null set is a process to determine “the known unknown,” which “is the number of the documents that will be missed and not produced.” Judge Johnston disagreed with the defendants’ argument “that searching the null set would be costly and burdensome.” The Order requires Defendants to sample their null set at a 95% +/-2% margin of error (which, even for a very large set of documents, would be about 2,400 documents to review). By taking these measures (either with TAR or with search terms), counsel can more appropriately represent that they have undertaken a “reasonable inquiry” for relevant information within the meaning of FRCP 26(g)(1).

Small Discovery Dispute in an Ocean of Cooperation

Judge Johnston was not asked to solve the deep mysteries of knowing and not knowing in City of Rockford. The parties came to him instead with an interesting, esoteric discovery dispute. They had agreed on a great number of things, for which the court profusely congratulated them.

The attorneys are commended for this cooperation, and their clients should appreciate their efforts in this regard. The Court certainly does. The litigation so far is a solid example that zealous advocacy is not necessarily incompatible with cooperation. The current issue before the Court is an example of that advocacy and cooperation. The parties have worked to develop a protocol for the production of ESI in this case, but have now reached an impasse as to one aspect of the protocol.

The parties disagreed on whether to include a document review quality assurance test in the protocol. The Plaintiffs wanted one and the Defendants did not. Too burdensome, they said.

To be specific, the Plaintiffs wanted a test where the efficacy of any party’s production would be tested by use of an Elusion type of Random Sample of the documents not produced. The Defendants opposed any specific test. Instead, they wanted the discovery protocol to say that if the receiving party had concerns about the adequacy of the producing party’s efforts, then they would have a conference to address the concerns.

Judge Johnston ruled for the Plaintiffs in this dispute and ordered a random Elusion sample to be taken after the Defendants stopped work and completed production. In this case that was a good decision, but such a test should not be routinely required in all matters.

The Stop Decision and Elusion Sample

One of the fundamental problems in any investigation is to know when you should stop the investigation because it is no longer worth the effort to carry on. When has a reasonable effort been completed? Ideally this happens after all of the important documents have already been found. At that point you should stop the effort and move on to a new project. Alternatively, perhaps you should keep on going and look for more? Should you stop or not?

In Legal Search we call this the “Stop Decision”: should you conclude the investigation, or continue with further AI training rounds and other searches? As explained in the e-Discovery Team TAR Course:

The all important stop decision is a legal, statistical decision requiring a holistic approach, including metrics, sampling and over-all project assessment. You decide to stop the review after weighing a multitude of considerations. Then you test your decision with a random sample in Step Seven.

See: TAR Course: 15th Class – Step Seven – ZEN Quality Assurance Tests.

If you want to go deeper into this, then listen in on this TAR Course lecture on the Stop decision.

____________

Once a decision is made to Stop, a well managed document review project will use different tools and metrics to verify that the Stop decision was correct. Judge Johnston in City of Rockford used one of my favorite tools, the Elusion random sample of the null set that I teach in the e-Discovery Team TAR Course.

Judge Johnston ordered an Elusion type random sample of the null set in City of Rockford. The sample would determine the range of relevant documents that likely eluded you. These are called False Negatives. Documents presumed Irrelevant and withheld that were in fact Relevant and should have been produced. The Elusion sample is designed to give you information on the total number of Relevant documents that were likely missed, unretrieved, unreviewed and not produced or logged. The fewer the number of False Negatives the better the Recall of True Positives. The goal is to find, to retrieve, all of the Relevant ESI in the collection.

Another way to say the same thing is to say that the goal is Zero False Negatives. You do not miss a single relevant file. Every file designated Irrelevant is in fact not relevant. They are all True Negatives. That would be Total Recall: “the Truth, the Whole Truth …” But that is very rare and some error, some False Negatives, are expected in every large information retrieval project. Some relevant documents will almost always be missed, so the goal is to make the False Negatives inconsequential and keep the Elusion rate low.

Here is how Judge Iain Johnston explained the random sample:

Plaintiffs propose a random sample of the null set. (The “null set” is the set of documents that are not returned as responsive by a search process, or that are identified as not relevant by a review process. See Maura R. Grossman & Gordon V. Cormack, The Grossman-Cormack Glossary of Technology-Assisted Review, 7 Fed. Cts. L. Rev. 1, 25 (2013). The null set can be used to determine “elusion,” which is the fraction of documents identified as non-relevant by a search or review effort that are, in fact, relevant. Elusion is estimated by taking a random sample of the null set and determining how many or what portion of documents are actually relevant. Id. at 15.) FN 2

Judge Johnston’s Footnote Two is interesting for two reasons. One, it attempts to calm lawyers who freak out when hearing anything having to do with math or statistics, much less information science and technology. Two, it does so with a reference to Fizbo the clown.

The Court pauses here for a moment to calm down litigators less familiar with ESI. (You know who you are.) In life, there are many things to be scared of, including, but not limited to, spiders, sharks, and clowns – definitely clowns, even Fizbo. ESI is not something to be scared of. The same is true for all the terms and jargon related to ESI. … So don’t freak out.

Accept on Zero Error for Hot Documents

Although this is not addressed in the court order, in my personal view, no False Negatives, i.e., overlooked documents, are acceptable when it comes to Highly Relevant documents. If even one document like that is found in the sample, one Highly Relevant document, then the Elusion test has failed in my view. You must conclude that the Stop decision was wrong and that training and document review must recommence. That is called an Accept on Zero Error test for any hot documents found. Of course, my personal views on best practice here assume the use of AI ranking, and the parties in City of Rockford only used keyword search. Apparently they were not doing machine training at all.

The odds of finding False Negatives in a modest sized random sample are very low, assuming that only a few exist (very low prevalence) and the database is large. With very low prevalence of relevant ESI the test can be of limited effectiveness. That is an inherent problem with low prevalence and random sampling. That is why statistics have only limited effectiveness and should be considered part of a total quality control program. See Zero Error Numerics: ZEN. Math matters, but so too does good project management and communications.
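
The rarity problem can be quantified. As an illustration (all numbers hypothetical, not drawn from any case), suppose only 10 relevant documents are hiding in a null set of 1,000,000, and you review a random sample of 2,400. The chance the sample catches even one of those documents is small:

```python
# Hypothetical illustration of the low-prevalence problem with Elusion samples.
# Probability that a random sample of n documents from a null set of N documents
# contains at least one of R rare relevant documents (sampling without replacement).

N = 1_000_000  # size of the null set
R = 10         # rare relevant documents hiding in it (very low prevalence)
n = 2_400      # size of the random Elusion sample

# P(sample misses all R relevant docs) = C(N - n, R) / C(N, R),
# computed as a short running product to avoid huge factorials.
p_miss_all = 1.0
for j in range(R):
    p_miss_all *= (N - n - j) / (N - j)

p_find_at_least_one = 1 - p_miss_all
print(f"Chance the sample finds any of the {R} missed documents: {p_find_at_least_one:.1%}")
```

With these numbers the sample would surface a missed document only a little over 2% of the time, so a clean sample is weak evidence that nothing was missed when prevalence is very low. That is exactly why sampling should be one part of a broader quality program rather than the whole of it.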

The inherent problem with random sampling is that the only way to narrow the confidence interval is to increase the size of the sample. For instance, to decrease the margin of error to only 2% either way, a total interval of 4%, a random sample of around 2,400 documents is needed. Even with the margin of error narrowed to that range, there is still the error factor of the Confidence Level, here set at 95%. It is usually not worth the effort to review the much larger sample needed to raise that to a 99% Level.
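
The roughly 2,400-document figure comes from the standard sample size formula for estimating a proportion. Here is a minimal sketch, using the conservative worst-case assumption p = 0.5 (the function name and parameters are mine, for illustration):

```python
import math

# Sample size for estimating a proportion at a given confidence level and
# margin of error, using the conservative worst case p = 0.5.
def sample_size(z_score: float, margin_of_error: float, p: float = 0.5) -> int:
    return math.ceil(z_score**2 * p * (1 - p) / margin_of_error**2)

# 95% confidence (z ~ 1.96) with a +/-2% margin of error:
n_95 = sample_size(1.96, 0.02)
# 99% confidence (z ~ 2.576) with the same +/-2% margin:
n_99 = sample_size(2.576, 0.02)

print(n_95)  # 2401 -- the "about 2,400 documents" figure
print(n_99)
```

This formula assumes an effectively infinite population; for smaller collections a finite population correction shrinks the number somewhat. Note how the 99% Confidence Level pushes the sample well past 4,000 documents, which is why stopping at 95% is usually the sensible trade-off.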

Random sampling has limitations in low prevalence datasets, which are typical in e-discovery, but sampling can still be very useful. Due to this rarity issue, and the care that producing parties take to attain high Recall, any documents found in an Elusion random sample should be carefully studied to see if they are of any significance. We look very carefully at any new documents found that are of a kind not seen before. That is unusual. Typically any relevant documents found by random sample of the elusion set are of a type that have been seen before, often many, many times before. These “same old, same old” documents are of no importance to the investigation at this point.

Most email related datasets are filled with duplicative, low value data. It is not exactly irrelevant noise, but it is not a helpful signal either. We do not care if we  get all of that kind of merely relevant data. What we really want are the Hot Docs, the high value Highly Relevant ESI, or at least Relevant and of a kind not seen before. That is why the Accept On Zero Error test is so important for Highly Relevant documents.

The Elusion Test in City of Rockford 

In City of Rockford Judge Johnston considered a discovery stipulation where the parties had agreed to use a typical keyword search protocol, but disagreed on a quality assurance protocol. Judge Johnston held:

With key word searching (as with any retrieval process), without doubt, relevant documents will be produced, and without doubt, some relevant documents will be missed and not produced. That is a known known. The known unknown is the number of the documents that will be missed and not produced.

Back to the False Negatives again, the known unknown. Judge Johnston continues his analysis:

But there is a process by which to determine that answer, thereby making the known unknown a known known. That process is to randomly sample the null set. Karl Schieneman & Thomas C. Gricks III, The Implications of Rule 26(g) on the Use of Technology-Assisted Review, 2013 Fed. Cts. L. Rev. 239, 273 (2013) (“[S]ampling the null set will establish the number of relevant documents that are not being produced.”). Consequently, the question becomes whether sampling the null set is a reasonable inquiry under Rule 26(g) and proportional to the needs of this case under Rule 26(b)(1).

Rule 26(g) Certification

Judge Johnston takes an expansive view of the duties placed on counsel of record by Rule 26(g), but concedes that perfection is not required:

Federal Rule of Civil Procedure 26(g) requires all discovery requests be signed by at least one attorney (or party, if proceeding pro se). Fed. R. Civ. P. 26(g)(1). By signing the response, the attorney is certifying that to the best of counsel’s knowledge, information, and belief formed after a reasonable inquiry, the disclosure is complete and correct at the time it was made. Fed. R. Civ. P. 26(g)(1)(A). But disclosure of documents need not be perfect. … If the Federal Rules of Civil Procedure were previously only translucent on this point, it should now be clear with the renewed emphasis on proportionality.

Judge Johnston concludes that Rule 26(g) on certification applies to require the Elusion sample in this case.

Just as it is used in TAR, a random sample of the null set provides validation and quality assurance of the document production when performing key word searches.  Magistrate Judge Andrew Peck made this point nearly a decade ago. See William A. Gross Constr. Assocs., 256 F.R.D. at 135-6 (citing Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 262 (D. Md. 2008)); In re Seroquel Products Liability Litig., 244 F.R.D. 650, 662 (M.D. Fla. 2007) (requiring quality assurance).

Accordingly, because a random sample of the null set will help validate the document production in this case, the process is reasonable under Rule 26(g).

Rule 26(b)(1) Proportionality

Judge Johnston considered as a separate issue whether it was proportionate under Rule 26(b)(1) to require the elusion test requested. Again, the court found that it was in this large case on the pricing of prescription medication and held that it was proportional:

The Court’s experience and understanding is that a random sample of the null set will not be unreasonably expensive or burdensome. Moreover and critically, Defendants have failed to provide any evidence to support their contention. Mckinney/Pearl Rest. Partners, L.P. v. Metro. Life Ins. Co., 322 F.R.D. 235, 242 (N.D. Tex. 2016) (party required to submit affidavits or offer evidence revealing the nature of the burden).

Once again we see a party seeking protection from having to do something because it is supposedly so burdensome, and then failing to present actual evidence of that burden. We see this a lot lately. See Responding Party’s Complaints of Financial Burden of Document Review Were Unsupported by the Evidence, Any Evidence (e-Discovery Team, 8/5/18).

Judge Johnston concludes his “Order Establishing Production Protocol for Electronically Stored Information” with the following:

The Court adopts the parties’ proposed order establishing the production protocol for ESI with the inclusion of Plaintiffs’ proposal that a random sample of the null set will occur after the production and that any responsive documents found as a result of that process will be produced. Moreover, following that production, the parties should discuss what additional actions, if any, should occur. If the parties cannot agree at that point, they can raise the issue with the Court.

Conclusion

City of Rockford is important because it is the first case to hold that a quality control procedure should be used to meet the reasonable efforts certification requirements of Rule 26(g). The procedure here required was a random sample Elusion test with related, limited data sharing. If this interpretation of Rule 26(g) is followed by other courts, then it could have a big impact on legal search jurisprudence. Tara Emory in her article, Court Holds that Math Matters for eDiscovery Keyword Search goes so far as to conclude that City of Rockford stands for the proposition that “the testing and sampling process associated with search terms is essential for establishing the reasonableness of a search under FRCP 26(g).”

The City of Rockford holding could persuade other judges and encourage courts to be more active and impose specific document review procedures on all parties, including requiring the use of sampling and artificial intelligence. The producing party cannot always have a free pass under Sedona Principle Six. Testing and sampling may well be routinely ordered in all “large” document review cases in the future.

It will be very interesting to watch how other attorneys argue City of Rockford. It will continue a line of cases examining methodology and procedures in document review. See, e.g., William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009) (“wake-up call” for lawyers on keyword search); Winfield v. City of New York (S.D.N.Y. Nov. 27, 2017), where the court considers methodologies and quality controls of the active machine learning process. Also see Special Master Maura Grossman’s Order Regarding Search Methodology for ESI, a validation protocol for the Broiler Chicken antitrust cases.

The validation procedure of an Elusion sample in City of Rockford is just one of many possible review protocols that a court could impose under Rule 26(g). There are dozens more, including whether predictive coding should be required. So far, courts have been reluctant to order that, as Judge Peck explained in Hyles:

There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet.

Hyles v. New York City, No. 10 Civ. 3119 (AT)(AJP), 2016 WL 4077114 (S.D.N.Y. Aug. 1, 2016).

Like a kid in the backseat of the car, I cannot help but ask, are we there yet? Hyles was published over two years ago now. Maybe some court, somewhere in the world, has already ordered a party to do predictive coding against their will, but not to our knowledge. That is a known unknown. Still, we are closer to “There” with the City of Rockford’s requirement of an Elusion test.

When we get “there,” and TAR is finally ordered in a case, it will probably arise in a situation like City of Rockford where a joint protocol applicable to all parties is involved. That is easier to sell than a one-sided protocol. The court is likely to justify the order by Rule 26(g), and hold that it requires all parties in the case to use predictive coding. Otherwise, they will not meet the reasonable effort burdens of Rule 26(g). Other rules will be cited too, of course, including Rule 1, but Rule 26(g) is likely to be key.

____________

“Save Everything” and Eventually You Will Not Be Able to Find Anything: The Sedona Conference Principles and Commentary on Defensible Disposition

August 13, 2018

If you are a data hoarder, an information pack-rat that saves everything, you will eventually drown in your own data and die. Maybe not literally killed, mind you, but figuratively. Maybe not you personally, but your enterprise, your group, your project, your network. Too much information can render you and your enterprise intellectually paralyzed, cut off and seriously misinformed or uninformed. Saving it all is physically and logistically difficult, if not impossible. Even if you could, keeping it all would impede your search, making it hard to find the information you need, when you need it. I address these issues this week in my review of a new commentary by The Sedona Conference, Principles and Commentary on Defensible Disposition (August 2018).

Information overload is better than physical death I know, but still very bad in today’s Google world. You end up not being able to find the information you need, when you need it. That makes it hard to determine what really happened. It allows lies and liars to fester and grow. We are now seeing firsthand in the U.S. where this can lead. It is not good. It has put the whole world into a precarious situation. We need the truth to thrive as a culture; not smoke and mirrors, not conman games. A culture built on lies is a cancer. It is a deadly disease, especially for the Law, which depends on truth, on evidence, on real facts, to attain the goal of Justice.

Saving Too Much

Over-retention is the enemy of effective, efficient search. The more ESI there is to search, the more difficult the search. There can be exceptions to this rule, but for the most part it is true. That makes a “save everything” ESI policy an enemy of search. It interferes with the ability to find the information needed, which in my case is electronic evidence in legal proceedings, when it is needed. It is important for these information needs to be filled quickly and completely.

Search is powerful. That is my field. The more data the better, is often true, but not always. It depends on the data and its effective life, how long a particular type of data is of any use to anyone. Big data allows for detection of patterns that would otherwise not be seen. This analysis takes CPU power. The advances in this area have been fantastic. We have the processing power, as well as the cheap storage, but our search and retrieval software has not otherwise kept up with the data explosion in volume and complexity. Predictive coding software and other AI applications have come a long way, but are still sometimes confused by the volume, variety and complexity of useless data that plagues most company IT systems.

Retrieval of specific documents and metadata takes time and specialized human skills. The more worthless data in a collection, such as spam, the greater the number of false positives in a search, no matter how powerful the algorithms or how skilled the searcher. Vast volumes of data make searches take longer to execute and make them less precise. The more noise in the data, the more difficult it is to hear the signal. That is a fundamental law of information.

With high data volumes you can often still find the signal, the relevant documents that you need in large chaotic data collections, but it takes time and special tools and skills. There are often too many false positives in searches of data collections containing too much spam-like, useless data. Although search is strong, search alone is inadequate to meet the needs of most organizations. They also need data destruction and retention policies that govern all information. That is one reason why the success of information governance depends on data disposition.

An organization should save as much as it needs, but not too much, and also not too little. It is a Goldilocks situation. If you do not save data, you can never find it. If you save too little, then what you later need might not be there to be found. But if you save too much, you may never be able to find what you need. The signal may be in the collection to be found, in plain view, but hidden in the vast numbers, the noise of spam and other irrelevancies.

Search v. Destroy

I have debated Information Governance leaders for years on the importance of search versus file destruction. I was pretty much the only advocate for search over disposition. I favored retention over destruction in most close cases, but with a cost and proportionality overlay. I am reminded, for instance, of my debate with Jason Baron on the subject at the IQPC 10th Anniversary of Information Governance and eDiscovery, where he managed to quote Churchill at the end and won the debate hands-down. e-Disco News, Knowledge and Humor: What’s Happening Today and Likely to Happen Tomorrow (e-Discovery Team, June 7, 2015); Information Governance v Search: The Battle Lines Are Redrawn (e-Discovery Team, Feb. 8, 2015).

I did not consider it a fair debate because of Jason’s very successful pandering to the jury during his closing argument with a quote by Churchill from his speech, We Shall Fight on the Beaches. That’s the one about never surrendering in the fight against “the odious apparatus of Nazi rule” (sadly, this exhortation still has legs today in the US).

The debate was “unfair” primarily because this was an IG conference. Everybody in IG is pro-destruction and values disposition over search. I think most IG leaders go too far, that they are trigger happy to kill data. I pointed out in my debates that once a file is deleted, it cannot be found, no matter how good your filing, no matter how good your search (forensic recovery issues aside).

I am pro-search and think that the importance of management of ESI by filing and disposition is somewhat overblown. I think search is king, not data deletion. Still, even in my most strident of debates and pro-search arguments, I never advocated for the retention of all data. I always assumed that some file disposition was required and accepted that as a given. I was not a save everything and search advocate. I advocated for both, search and destroy. I advocated for more retention than most, but have never argued to retain everything.

There is a common core of agreement that some ESI should be deleted, that all data should not be saved. The disagreement is on how much data to save. How does a person or company know what is the “just right” data destruction policy for that company? There is agreement among experts that there is no one-size-fits-all solution, so custom work is required. Different retention and destruction policies should apply depending on the company and the particularities of their data universe. Many IG specialists advise clients on the custom fit they need. It involves careful investigation of the company, its data and activities, including law suits and other investigations.

The Sedona Conference Principles and Commentary on Defensible Disposition

These IG specialists, and the companies they serve, now have an excellent new resource tool to analyze and custom-fit data destruction policies: The Sedona Conference Principles and Commentary on Defensible Disposition (August 2018 Public Comment Version) (Editors-in-Chief, Kevin F. Brady and Dean Kuckelman). I highly recommend this new and excellent work by The Sedona Conference. My commendations to the Drafting Team: Lauren A. Allen, Jesse Murray, Ross Gotler, Ken Prine, Logan J. Herlinger, David C. Shonka, Mark Kindy; the Drafting Team Leaders: Tara Emory and Becca Rausch; the Staff Editor: Susan McClaim; and the Editors-in-Chief, Kevin F. Brady and Dean Kuckelman. Please send them any comments you may have.

The Commentary begins in usual Sedona fashion by articulating basic principles, with comments tied to each principle. The cases and legal authorities cited in all Commentaries by The Sedona Conference are excellent. This commentary on data disposition is no exception. I commend it to your detailed study and reference. Free download here from The Sedona Conference.

The Principles are:

PRINCIPLE 1.    Absent a legal retention or preservation obligation, organizations may dispose of their information.

Comment 1.a.   An organization should, in the ordinary course of business, properly dispose of information that it does not need.

Comment 1.b.   When designing and implementing an information disposition program, organizations should consider the obligation to preserve information that is relevant to the claims and defenses and proportional to the needs of any pending or anticipated litigation.

Comment 1.c. When designing and implementing an information disposition program, organizations should consider the obligation to preserve information that is relevant to the subject matter of government inquiries or investigations that are pending or threatened against the organization.

Comment 1.d.   When designing and implementing an information disposition program, organizations should consider applicable statutory and regulatory obligations to retain information.

PRINCIPLE 2.    When designing and implementing an information disposition program, organizations should identify and manage the risks of over-retention.

Comment 2.a.   Information has a lifecycle, including a time when disposal is beneficial.

Comment 2.b. To determine the “right” time for disposal, risks and costs of retention and disposal should be evaluated.

PRINCIPLE 3.    Disposition should be based on Information Governance policies that reflect and harmonize with an organization’s information, technological capabilities, and objectives.

Comment 3.a.   To create effective information disposition policies, organizations should establish core components of an Information Governance program, which should reflect what information it has, when it can be disposed of, how it is stored, and who owns it.

Comment 3.b. An organization should understand its technological capabilities and define its information objectives in the context of those capabilities.

Document Disposition and Information Governance

The Sedona Conference Principles and Commentary on Defensible Disposition builds upon Sedona’s earlier work, the Sedona Conference Commentary on Information Governance (Oct. 2014). Principle 6 of the Commentary on Information Governance provides the following guidance to organizations:

The effective, timely, and consistent disposal of physical and electronic information that no longer needs to be retained should be a core component of any Information Governance program. The Sedona Conference, Commentary on Information Governance, 15 SEDONA CONF. J. 125, 146 (2014) (“Information Governance” is “an organization’s coordinated, interdisciplinary approach to satisfying information compliance requirements and managing information risks while optimizing information value.” Id. at 126).

The Comment to Principle 6 goes on to explain:

It is a sound strategic objective of a corporate organization to dispose of information no longer required for compliance, legal hold purposes, or in the ordinary course of business. If there is no legal retention obligation, information should be disposed as soon as the cost and risk of retaining the information is outweighed by the likely business value of retaining the information. . . . Typically, the business value decreases and the cost and risk increase as information ages. Id. at 147.

The Sedona Conference concluded in 2018 that this 2014 advice, and similar advice from other sources, has not been followed by most organizations. Instead, they continue to struggle to make “effective disposition decisions.” The group concluded in the Introduction to the Principles and Commentary on Defensible Disposition that this struggle was caused by many factors, and identified the three main problems:

[T]he incorrect belief that organizations will be forced to “defend” their disposition actions if they later become involved in litigation. Indeed, the phrase “defensible disposition” suggests that organizations have a duty to defend their information disposition actions. While it is true that organizations must make “reasonable and good faith efforts to retain information that is relevant to claims or defenses,” that duty to preserve information is not triggered until there is a “reasonably anticipated or pending litigation” or other legal demands for records. The Sedona Principles, Third Edition: Best Practices, Recommendations & Principles for Addressing Electronic Document Production, 19 SEDONA CONF. J. 1, 51, Principle 5, 93 (2018).

Another factor in the struggle toward effective disposition of information is the difficulty in appreciating how such disposition reduces costs and risks.

Lastly, many organizations struggle with how to design and implement effective disposition as part of their overall Information Governance program.

The Principles and Commentary on Defensible Disposition attempt to address these three factors and provide guidance to organizations, and the professionals who counsel organizations, on developing and implementing an effective disposition program.

Disposition Challenges

The Sedona Conference Principles and Commentary on Defensible Disposition (August 2018) concludes by identifying the main challenges to data deletion.

  1. Unstructured Information.
  2. Mergers and Acquisitions.
  3. Departed, Separated, or Former Employees.
  4. Shared File Sites.
  5. Personally Identifiable Information (“PII”).
  6. Law Firms, eDiscovery Vendors, and Adversaries.
  7. In-House Legal Departments.
  8. Hoarders (my personal favorite).
  9. Regulations.
  10. Cultural Change and Training.

There are more, I am sure, but this is a good top ten list to start. I only wish they had included more discussion of these top ten.

Conclusion

Search is still more important for me than destroy. I prefer Where’s Waldo over Kill Waldo! I have not changed my position on that. But neither has mainstream Information Governance. They still disagree with my emphasis on Search. But everyone agrees that we should do both: Search and Destroy. Even I do not want companies to save all of their data. Some data should be destroyed.

I agree with mainstream IG that saving everything forever is not a viable information governance policy, no matter how many resources you also put into ESI search and retrieval. I have never said that you should rely solely on search, just that you should give Search more importance and, when in doubt, save more documents rather than less. The Search and Destroy argument has always been a matter of degree and balance, not whether there should be any destruction at all. The difficult questions involve what should be saved and for how long, which are traditional information management problems.

Where to draw the line on destruction is the big question for everyone. The answer is always company specific, even project specific. It involves questions of varying retention times, file types and custodian analysis. When it comes down to specific decisions, and close questions, I generally favor retention. What may appear to be useless today may prove to be relevant evidence tomorrow. I hate not being able to prove my case because all of the documents have already been deleted. Then it is just one person’s word against another. IG experts, who usually no longer litigate, or never litigated, do not like my complaints. They are eager to kill, to purge and destroy data. I am more inclined to save and search, but not save too much. It is a question of balance.

Data destruction – the killing of data – can, if done properly, make the search for relevant content much easier. Some disposition of obviously irrelevant, spam and otherwise useless information makes sense on every level. It helps all users of the IT system. It also helps with legal compliance. Destroy too much data, too aggressively, and you may end up deleting information that you were required by law to keep. You could lose a lawsuit because of one mistake in a data disposition decision. Where do you draw the line between save and delete? What is the scope of a preservation duty? What file types should be retained? What retention times should apply? How much is too much? Not enough?

The questions go on and on and there is no one right answer. It all depends on the facts and circumstances of the organization and its data. The new Sedona Conference Principles and Commentary on Defensible Disposition is an important new guide to help IT lawyers and technologists to craft custom answers to these questions.

 


Judge Goes Where Angels Fear To Tread: Tells the Parties What Keyword Searches to Use

June 24, 2018

John Facciola was one of the first e-discovery expert judges to consider the adequacy of a producing party’s keyword search efforts in United States v. O’Keefe, 537 F. Supp. 2d 14 (D.D.C. 2008). He first observed that keyword search and other computer assisted legal search techniques require special expertise to do properly. Everyone agrees with that. He then reached an interesting, but still somewhat controversial conclusion: because he lacked such special legal search expertise, and knew full well that most of the lawyers appearing before him did too, he could not properly analyze and compel the use of specific keywords without the help of expert testimony. To help make his point he paraphrased Alexander Pope‘s famous line from An Essay on Criticism: “For fools rush in where angels fear to tread.”

Here are the well-known words of Judge Facciola in O’Keefe (emphasis added):

As noted above, defendants protest the search terms the government used.[6] Whether search terms or “keywords” will yield the information sought is a complicated question involving the interplay, at least, of the sciences of computer technology, statistics and linguistics. See George L. Paul & Jason R. Baron, Information Inflation: Can the Legal System Adapt?, 13 RICH. J.L. & TECH. 10 (2007). Indeed, a special project team of the Working Group on Electronic Discovery of the Sedona Conference is studying that subject and their work indicates how difficult this question is. See The Sedona Conference, Best Practices Commentary on the Use of Search and Information Retrieval, 8 THE SEDONA CONF. J. 189 (2008).

Given this complexity, for lawyers and judges to dare opine that a certain search term or terms would be more likely to produce information than the terms that were used is truly to go where angels fear to tread. This topic is clearly beyond the ken of a layman and requires that any such conclusion be based on evidence that, for example, meets the criteria of Rule 702 of the Federal Rules of Evidence. Accordingly, if defendants are going to contend that the search terms used by the government were insufficient, they will have to specifically so contend in a motion to compel and their contention must be based on evidence that meets the requirements of Rule 702 of the Federal Rules of Evidence.

Many courts have followed O’Keefe, even though it is a criminal case, and declined to step in and order specific searches without expert input. See, e.g., the well-known patent case, Vasudevan Software, Inc. v. Microstrategy Inc., No. 11-cv-06637-RS-PSG, 2012 U.S. Dist. LEXIS 163654 (N.D. Cal. Nov. 15, 2012). The opinion was by U.S. Magistrate Judge Paul S. Grewal, who later became the V.P. and Deputy General Counsel of Facebook. Judge Grewal wrote:

But as this case makes clear, making those determinations often is no easy task. “There is no magic to the science of search and retrieval: only mathematics, linguistics, and hard work.”[9]

Unfortunately, despite being a topic fraught with traps for the unwary, the parties invite the court to enter this morass of search terms and discovery requests with little more than their arguments.

More recently, e-discovery expert Judge James Francis addressed this issue in Greater New York Taxi Association v. City of New York, No. 13 Civ. 3089 (VSB) (JCF) (S.D.N.Y. Sept. 11, 2017) and held:

The defendants have not provided the necessary expert opinions for me to assess their motion to compel search terms. The application is therefore denied. This leaves the defendants with three options: “They can cooperate [with the plaintiffs] (along with their technical consultants) and attempt to agree on an appropriate set of search criteria. They can refile a motion to compel, supported by expert testimony. Or, they can request the appointment of a neutral consultant who will design a search strategy.”[10] Assured Guaranty Municipal Corp. v. UBS Real Estate Securities Inc., No. 12 Civ. 1579, 2012 WL 5927379, at *4 (S.D.N.Y. Nov. 21, 2012).

I am inclined to agree with Judge Francis. I know from daily experience that legal search, even keyword search, can be very tricky and depends on many factors, including the documents searched. I have spent over a decade working hard to develop expertise in this area. I know that the appropriate searches to run depend on experience and on scientific, technical knowledge of information retrieval and statistics. It also depends on tests of proposed keywords; it depends on sampling and document reviews; it depends on getting your hands dirty in the digital mud of the actual ESI. It cannot be done effectively in the blind, no matter what your level of expertise. It is an iterative process of trial and error, false positives and negatives alike.
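The testing and sampling process described above can be sketched in a few lines. This is only an illustration with an invented toy corpus, hypothetical hit counts, and an arbitrary sample size; it is not any party's actual protocol:

```python
import random

def estimate_precision(hits, is_relevant, sample_size=100, seed=42):
    """Estimate a keyword search's precision by reviewing a random
    sample of the documents the search returned."""
    random.seed(seed)
    sample = random.sample(hits, min(sample_size, len(hits)))
    relevant = sum(1 for doc in sample if is_relevant(doc))
    return relevant / len(sample)

# Toy corpus: 10,000 documents, 2,000 of which hit a proposed keyword,
# but only 300 of those hits are what a human reviewer would call relevant.
docs = [f"doc-{i}" for i in range(10_000)]
hits = docs[:2_000]
truly_relevant = set(docs[:300])

precision = estimate_precision(hits, lambda d: d in truly_relevant)
print(f"Estimated precision of the proposed keyword: {precision:.0%}")
```

A party that runs an estimate like this before the meet-and-confer can negotiate from measured numbers instead of guesses, which is the whole point of testing keywords rather than merely proposing them.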

Enter a Judge Braver Than Angels

Recently appointed U.S. Magistrate Judge Laura Fashing in Albuquerque, New Mexico, heard a case involving a dispute over keywords. United States v. New Mexico State University, No. 1:16-cv-00911-JAP-LF, 2017 WL 4386358 (D.N.M. Sept. 29, 2017). It looks like the attorneys in the case neglected to inform Judge Fashing of United States v. O’Keefe. It is a landmark case in this field, yet was not cited in Judge Fashing’s order. More importantly, Judge Fashing did not take the advice of O’Keefe, nor the many cases that follow it. Unlike Judge Facciola and his angels, she told the parties what keywords to use, even without input from experts.

The New Mexico State University opinion did, however, cite to two other landmark cases in legal search, William A. Gross Const. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. 134, 135 (S.D.N.Y. 2009) by Judge Andrew Peck and Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 260, 262 (D. Md. May 29, 2008) by Judge Paul Grimm. Judge Fashing held in New Mexico State University:

This case presents the question of how parties should search and produce electronically stored information (“ESI”) in response to discovery requests. “[T]he best solution in the entire area of electronic discovery is cooperation among counsel.” William A. Gross Const. Assocs., Inc. v. Am. Mfrs. Mut. Ins. Co., 256 F.R.D. 134, 135 (S.D.N.Y. 2009). Cooperation prevents lawyers designing keyword searches “in the dark, by the seat of the pants,” without adequate discussion with each other to determine which words would yield the most responsive results. Id.

While keyword searches have long been recognized as appropriate and helpful for ESI search and retrieval, there are well-known limitations and risks associated with them, and proper selection and implementation obviously involves technical, if not scientific knowledge.

* * *

Selection of the appropriate search and information retrieval technique requires careful advance planning by persons qualified to design effective search methodology. The implementation of the methodology selected should be tested for quality assurance; and the party selecting the methodology must be prepared to explain the rationale for the method chosen to the court, demonstrate that it is appropriate for the task, and show that it was properly implemented.

Id. (quoting Victor Stanley, Inc. v. Creative Pipe, Inc., 250 F.R.D. 251, 260, 262 (D. Md. May 29, 2008)).

Although NMSU has performed several searches and produced thousands of documents, counsel for NMSU did not adequately confer with the United States before performing the searches, which resulted in searches that were inadequate to reveal all responsive documents. As the government points out, “NMSU alone is responsible for its illogical choices in constructing searches.” Doc. 117-1 at 8. Consequently, which searches will be conducted is left to the Court.

Judges Francis, Peck and Facciola

Judge Laura Fashing had me in the quote above until the final sentence. Up till then she had been wisely following the four great judges in this area, Facciola, Peck, Francis and Grimm. Then in the next several paragraphs she rushes in to specify what search terms should be used for what categories of ESI requested. Why should the Court go ahead and do that without expert advice? Why not wait? Especially since Judge Fashing starts her opinion by recognizing the difficulty of the task, that “there are well-known limitations and risks associated with them [keyword searches], and proper selection and implementation obviously involves technical, if not scientific knowledge.” Knowing that, why was she fearless? Why did she ignore Judge Facciola’s advice? Why did she make multiple detailed, technical decisions on legal search, including specific keywords to be used, without the benefit of expert testimony? Was that foolish as several judges have suggested, or was she just doing her job by making the decisions that the parties asked her to make?

Judge Fashing recognized that she did not have enough facts to make a decision, much less expert opinions based on technical, scientific knowledge, but she went ahead and ruled anyway.

Although NMSU argues that the search terms proposed by the government will return a greater number of non-responsive documents than responsive documents, this is not a particular and specific demonstration of fact, but is, instead, a conclusory argument by counsel. See Velasquez, 229 F.R.D. at 200. NMSU’s motion for a protective order with regard to RFP No. 8 is DENIED.

NMSU will perform a search of the email addresses of all individuals involved in salary-setting for Ms. Harkins and her comparators, including Kathy Agnew and Dorothy Anderson, to include the search terms “Meaghan,” “Harkins,” “Gregory,” or “Fister” for the time period of 2007-2012. If this search results in voluminous documents that are non-responsive, NMSU may further search the results by including terms such as “cross-country,” “track,” “coach,” “salary,” “pay,” “contract,” or “applicants,” or other appropriate terms such as “compensation,” which may reduce the results to those communications most likely relevant to this case, and which would not encompass every “Meaghan” or “Gregory” in the system. However, the Court will require NMSU to work with the USA to design an appropriate search if it seeks to narrow the search beyond the four search terms requested by the United States.

Judge Fashing goes on to make several specific orders on what to do to make a reasonable effort to find relevant evidence:

NMSU will conduct searches of the OIE databases, OIE employee’s email accounts, and the email accounts of all head coaches, sport administrators, HR liaisons working within the Athletics Department, assistant or associate Athletic Directors, and/or Athletic Directors employed by NMSU between 2007 and the present. The USA suggests that NMSU conduct a search for terms that are functionally equivalent to a search for (pay or compensate! or salary) and (discriminat! or fair! or unfair!). Doc. 117-1 at 13. If NMSU cannot search with “Boolean” connectors as suggested, it must search for the terms “pay” or “compensate” or “salary” and “discriminate” or “fair” or “unfair” and the various derivatives of these terms (for example the search would include “compensate” and “compensation”). The parties are to work together to determine what terms will be used to search these databases and email accounts.
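The Boolean logic described in the quoted order, (pay OR compensate! OR salary) AND (discriminat! OR fair! OR unfair!), where “!” is a stem wildcard, can be approximated with regular expressions. A minimal sketch, using invented sample sentences and not any review platform's actual query syntax:

```python
import re

# Stem wildcards like "discriminat!" match any word beginning with the stem.
GROUP_A = re.compile(r"\b(pay|compensat\w*|salary)\b", re.IGNORECASE)
GROUP_B = re.compile(r"\b(discriminat\w*|fair\w*|unfair\w*)\b", re.IGNORECASE)

def matches(document: str) -> bool:
    """True if the document hits at least one term from each OR-group,
    i.e. the AND of the two groups."""
    return bool(GROUP_A.search(document)) and bool(GROUP_B.search(document))

print(matches("Her compensation was unfairly reduced."))  # True
print(matches("Salary schedules are attached."))          # False: no group-B term
```

Real platforms implement stemming, noise words, and proximity differently, so hit counts from a sketch like this only illustrate the logic, not what any given tool would return.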

Judge Laura Fashing hangs her hat on cooperation, but not on experts. She concludes her order with the following admonishment:

The parties are reminded that:

Electronic discovery requires cooperation between opposing counsel and transparency in all aspects of preservation and production of ESI. Moreover, where counsel are using keyword searches for retrieval of ESI, they at a minimum must carefully craft the appropriate keywords, with input from the ESI’s custodians as to the words and abbreviations they use, and the proposed methodology must be quality control tested to assure accuracy in retrieval and elimination of “false positives.” It is time that the Bar—even those lawyers who did not come of age in the computer era—understand this.

William A. Gross Const. Assocs., Inc., 256 F.R.D. at 136.

Conclusion

Of course I agree with Judge Fashing’s concluding reminder to the parties. Cooperation is key, but so is expertise. There is a good reason for the fear felt by Facciola’s angels. They wisely knew that they lacked the necessary technical, scientific knowledge for the proper selection and implementation of keyword searches. I only wish that Judge Fashing’s order had reminded the parties of this need for experts too. It would have made her job much easier and also helped the parties. Sometimes the wisest thing to do is nothing, at least not until you have more information.

There is widespread agreement among legal search experts on simplistic methods like keyword search, and their input would have helped here. The same holds true for advanced search methods, such as active machine learning (predictive coding), at least among the elite experts. See TARcourse.com. There is still some disagreement on TAR methods, especially when you include the many pseudo-experts out there. But even they can usually agree on keyword search methods.

I urge the judges and litigants faced with a situation like Judge Fashing had to deal with in New Mexico State University, to consider the three choices set out by Judge Francis in Greater New York Taxi Association:

  1. Cooperation with the other side and their technical consultants to attempt to agree on an appropriate set of search criteria.
  2. Motions supported by expert testimony and facts regarding the search.
  3. Appointment of a neutral consultant who will design a search strategy.

Going it alone with legal search in a complex case is a fool’s errand. Bring in an expert. Spend a little to save a lot. It is not only the smart thing to do, it is also required by ethics. Rule 1.1: Competence, Model Rules of Professional Conduct. Comment Two to ABA Model Rule 1.1 states that “Competent representation can also be provided through the association of a lawyer of established competence in the field in question.” Yet, in my experience, this is seldom done and is not something that clients are clamoring for. That should change, and quickly, if we are ever to stop wasting so much time and money on simplistic e-discovery arguments. I am again reminded of the great Alexander Pope (1688–1744) and another of his famous lines from An Essay on Criticism.

_______________

 

After I wrote this blog I did a webinar for ACEDS about this topic. Here is a one-hour talk to add to your personal Pierian spring.

 

_________

Disproportionate Keyword Search Demands Defeated by Metric Evidence of Burden

June 10, 2018

The defendant in a complex commercial dispute demanded that plaintiff search its ESI for all files that had the names of four construction projects. Am. Mun. Power, Inc. v. Voith Hydro, Inc. (S.D. Ohio, 6/4/18) (copy of full opinion below). These were the four projects underlying the lawsuit. Defense counsel, like many attorneys today, thought that they had magical powers when it comes to finding electronic evidence. They thought that all, or most all, of the ESI with these fairly common project names would be relevant or, at the very least, worth examining for relevance. As it turns out, defense counsel was very wrong: most of the docs with keyword hits were not relevant and the demand was unreasonable.

The Municipal Power opinion was written by Chief Magistrate Judge Elizabeth A. Preston Deavers of the Southern District of Ohio. She reached this conclusion based on evidence of burden, what we like to call the project metrics. We do not know the total evidence presented, but we do know that Judge Deavers was impressed by the estimate that the privilege review alone would cost the plaintiff between $100,000 and $125,000. I assume that estimate was based on a linear review of all relevant documents. That is very expensive to do right, especially in large, diverse data sets with high privilege and relevance prevalence. Triple and quadruple checks are common and are built into standard protocols.

Judge Deavers ruled against the defense on the four project names keywords request, and granted a protective order for the plaintiff because, in her words:

The burden and expense of applying the search terms of each Project’s name without additional qualifiers outweighs the benefits of this discovery for Voith and is disproportionate to the needs of even this extremely complicated case.

The plaintiff made its own excessive demand upon defendant to search its ESI using a long list of keywords, including Boolean logic. The plaintiff’s keyword list was much more sophisticated than the defendant’s four-name search demand. The plaintiff’s proposal was rejected by the defendant and the judge for the same proportionality reason. It kind of looks like tit for tat with excessive demands on both sides. But, it is hard to say, because the negotiations were apparently focused on mere guessed keywords, instead of keywords evolved through a process of testing and refining.

Defense counsel responded to the plaintiff’s keyword demands by presenting their own metrics of burden, including the projected costs of redaction of confidential customer information. These confidentiality concerns can be difficult, especially where you are required to redact. Better to agree upon an alternative procedure where you withhold the entire document and log them with a description. This can be a less expensive alternative to redaction.

When reading the opinion below, note how the plaintiff’s opposition to the demand to review all ESI with the four project names gave specific examples of types of documents (ESI) that would have the names on them and still have nothing whatsoever to do with the parties’ claims or defenses, the so-called “false positives.” This is a very important exercise that should not be overlooked in any argument. I have seen some pretty terrible precision percentages, sometimes as low as two percent.

Get your hands in the digital mud. Go deep into TAR if you need to. It is where the time warps happen and we bend space and time to attain maximum efficiency. Our goal is to attain: (1) the highest possible review speeds (files per hour), both hybrid and human; (2) the highest precision (% of retrieved docs that are relevant); and, (3) the countervailing goal of total recall (% of all relevant docs found). The recall goal is typically given the greatest weight, with emphasis on highly relevant documents. The question is how much greater weight to give recall, and that depends on the total facts and circumstances of the doc review project.
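Those three review metrics can be computed directly from project counts. A minimal sketch, using made-up figures that are not from any real project:

```python
def review_metrics(relevant_found, retrieved, total_relevant,
                   docs_reviewed, hours):
    """Compute the three doc-review goals: speed, precision, and recall."""
    return {
        "speed_per_hour": docs_reviewed / hours,    # files reviewed per hour
        "precision": relevant_found / retrieved,    # share of retrieved docs that are relevant
        "recall": relevant_found / total_relevant,  # share of all relevant docs found
    }

# Hypothetical project: 10,000 docs retrieved and reviewed in 40 hours;
# 8,000 were relevant, out of an estimated 9,000 relevant in the collection.
m = review_metrics(relevant_found=8_000, retrieved=10_000,
                   total_relevant=9_000, docs_reviewed=10_000, hours=40)
print(m)  # speed 250/hr, precision 0.80, recall ≈ 0.89
```

Note the tension the text describes: pushing recall toward 100% usually means retrieving more marginal documents, which drags precision down, so the weighting between the two is a judgment call for each project.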

Keywords are the Model T of legal search, but we all start there. It is still a very important skill for everyone to learn and then move on to other techniques, especially to active machine learning.

In some simple projects it can still be effective, especially if the user is highly skilled and the data is simple. It also helps if the data is well known to the searcher from earlier projects. See TAR Course: 8th Class (Keyword and Linear Review).

________________________

Below is the unedited full opinion (very short). We look forward to more good opinions by Judge Deavers on e-discovery.

__________

UNITED STATES DISTRICT COURT FOR THE SOUTHERN DISTRICT OF OHIO, EASTERN DIVISION. No. 2:17-cv-708

June 4, 2018

AMERICAN MUNICIPAL POWER, INC., Plaintiff, vs. VOITH HYDRO, INC., Defendant.

ELIZABETH A. PRESTON DEAVERS, UNITED STATES MAGISTRATE JUDGE. Judge Algenon L. Marbley.

MEMORANDUM OF DECISION

This matter came before the Court for a discovery conference on May 24, 2018. Counsel for both parties appeared and participated in the conference.

The parties provided extensive letter briefing regarding certain discovery disputes relating to the production of Electronically Stored Information (“ESI”) and other documents. Specifically, the parties’ dispute centers around two ESI-related issues: (1) the propriety of a single-word search by Project name proposed by Defendant Voith Hydro, Inc. (“Voith”) which it seeks to have applied to American Municipal Power, Inc.’s (“AMP”) ESI; 1 and (2) the propriety of AMP’s request that Voith run crafted search terms which AMP has proposed that are not limited to the Project’s name. 2 After careful consideration of the parties’ letter briefing and their arguments during the discovery conference, the Court concluded as follows:

  • Voith’s single-word Project name search terms are over-inclusive. AMP’s position as the owner of the power-plant Projects puts it in a different situation than Voith in terms of how many ESI “hits” searching by Project name would return. As owner, AMP has stored millions of documents for more than a decade that contain the name of the Projects which refer to all kinds of matters unrelated to this case. Searching by Project name, therefore, would yield a significant amount of discovery that has no bearing on the construction of the power plants or Voith’s involvement in it, including but not limited to documents related to real property acquisitions, licensing, employee benefits, facility tours, parking lot signage, etc. While searching by the individual Project’s name would yield extensive information related to the name of the Project, it would not necessarily bear on or be relevant to the construction of the four hydroelectric power plants, which are the subject of this litigation. AMP has demonstrated that using a single-word search by Project name would significantly increase the cost of discovery in this case, including a privilege review that would add $100,000 – $125,000 to its cost of production. The burden and expense of applying the search terms of each Project’s name without additional qualifiers outweighs the benefits of this discovery for Voith and is disproportionate to the needs of even this extremely complicated case.
  • AMP’s request that Voith search its ESI collection without reference to the Project names by using as search terms including various employee and contractor names together with a list of common construction terms and the names of hydroelectric parts is overly inclusive and would yield confidential communications about other projects Voith performed for other customers. Voith employees work on and communicate regarding many customers at any one time. AMPs proposal to search terms limited to certain date ranges does not remedy the issue because those employees still would have sent and received communications about other projects during the times in which they were engaged in work related to AMP’s Projects. Similarly, AMP’s proposal to exclude the names of other customers’ project names with “AND NOT” phrases is unworkable because Voith cannot reasonably identify all the projects from around the world with which its employees were involved during the decade they were engaged in work for AMP on the Projects. Voith has demonstrated that using the terms proposed by AMP without connecting them to the names of the Projects would return thousands of documents that are not related to this litigation. The burden on Voith of running AMP’s proposed search terms connected to the names of individual employees and general construction terms outweighs the possibility that the searches would generate hits that are relevant to this case. Moreover, running the searches AMP proposes would impose on Voith the substantial and expensive burden of manually reviewing the ESI page by page to ensure that it does not disclose confidential and sensitive information of other customers. The request is therefore overly burdensome and not proportional to the needs of the case.

1 Voith seeks to have AMP use the names of the four hydroelectric projects at issue in this case (Cannelton, Smithland, Willow and Meldahl) as standalone search terms without qualifiers across all of AMP’s ESI. AMP proposed and has begun collecting from searches with numerous multiple-word search terms using Boolean connectors. AMP did not include the name of each Project as a standalone term.

2 AMP contends that if Voith connects all its searches together with the Project name, it will not capture relevant internal-Voith ESI relating to the construction claims and defenses in the case. AMP asserts Voith may have some internal documents that relate to the construction projects that do not refer to the Project by name, and included three (3) emails with these criteria it had discovered as exemplars. AMP proposes that Voith search its ESI collection without reference to the Project names by using as search terms including various employee and contractor names together with a list of generic construction terms and the names of hydroelectric parts.

IT IS SO ORDERED.

DATED: June 4, 2018

/s/ Elizabeth A. Preston Deavers

ELIZABETH A. PRESTON DEAVERS

UNITED STATES MAGISTRATE JUDGE

 

 

