Day Two of a Predictive Coding Narrative: More Than A Random Stroll Down Memory Lane

July 8, 2012

Day One of the search project ended when I completed review of the initial 1,507 machine-selected documents and initiated the machine learning. I mentioned in the Day One narrative that I would explain why the sample size was that high. I will begin with that explanation and then, with the help of William Webber, go deeper into math and statistical sampling than ever before. I will also give you the big picture of my review plan and search philosophy: it is hybrid and multimodal. Some search experts disagree with my philosophy. They think I do not go far enough to fully embrace machine coding. They are wrong. I will explain why and rant on in defense of humanity. Only then will I conclude with the Day Two narrative.

Why the 1,507 Random Sample Size to Start Inview’s Predictive Coding

A pure random sample using 95% +/-3% and a 50% prevalence (the most conservative prevalence estimate) would require a sample of 1,065 documents. But Inview generates a larger sample of 1,507. This is because it uses what KO calls a conservative approach to sampling that has been reviewed and approved by several experts, including KO’s outside consulting expert on predictive coding, David Lewis (an authority on information science and a co-founder of TREC Legal Track). In fact, this particular feature is under constant review and revisions are expected in future software releases.

Inview uses a simple random sample method in which each member of the population has an equal chance of being observed and sampled. But KO uses a larger than minimally required sample size because it uses a kind of continuous stream sampling where data is sampled at the time of input. That and other technical reasons explain the approximately 40% over-sampling in Inview, i.e., the use of 1,507 samples, instead of 1,065 samples, for a 95% +/-3% probability calculation.

This is typical of KO’s conservative approach to predictive coding in general. The over-sampling adds slightly to the cost of review of the random samples (you must review 1,507 documents, instead of 1,065 documents). But this does not add that much to the cost. That is because the review of these sample sets goes fast, since almost all of them in most cases will be irrelevant. Review of irrelevant documents takes far less time on average than review of relevant documents. So I am convinced that this extra cost is really negligible, as compared to the increased defensibility of the sampling.

Since this approximately 40% larger than normal sample size is standard in Inview, even though the stated margin of error is only 3%, you can argue that in most datasets it represents an even smaller margin of error. A random sample of 1,507 documents in a dataset of this size would normally represent a 95% confidence level with a margin of error (confidence interval) of only 2.52%, not 3%. See my prior blog on random sample calculations: Random Sample Calculations And My Prediction That 300,000 Lawyers Will Be Using Random Sampling By 2022.
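For readers who want to check these figures, here is a minimal sketch of the standard arithmetic, using the usual normal-approximation sample-size formula at worst-case 50% prevalence. This is my own illustration, not KO's code; any given online calculator may report a document or two more or less depending on rounding and finite-population corrections.

```python
import math

def sample_size(z=1.96, margin=0.03, prevalence=0.5):
    """Minimum simple random sample size for a given margin of error,
    using the normal approximation at worst-case 50% prevalence."""
    return math.ceil(z**2 * prevalence * (1 - prevalence) / margin**2)

def margin_of_error(n, z=1.96, prevalence=0.5):
    """Worst-case margin of error actually achieved by a sample of size n."""
    return z * math.sqrt(prevalence * (1 - prevalence) / n)

n_required = sample_size()          # about 1,065-1,068, depending on rounding
oversample = 1507 / n_required - 1  # roughly 40% over-sampling
achieved = margin_of_error(1507)    # about 2.52%, not 3%
print(n_required, round(oversample * 100), round(achieved * 100, 2))
```

Running this shows why a sample of 1,507 effectively buys a tighter interval (about 2.52%) than the nominal 3%.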

Baseline Quality Control Sample Calculation

At the beginning of every predictive coding project I like to have an idea as to how many relevant documents there may be. For that reason I use the random sample that Inview generates for predictive coding training purposes for an entirely different purpose: quality control. I use the random sample to calculate the probable number of relevant documents in the whole dataset. Only simple math is required for this standard baseline calculation. For this particular search, where I found 2 relevant documents in the sample of 1,507 documents, it is: 2/1507=.00132714. We’ll call that 0.13%. That is the percentage of relevant documents found in the whole, which is called the prevalence, a/k/a density rate or yield.

Based on this random sample percentage, my projection of the likely total number of relevant documents in the total database (aka yield) is 928 (.13%*699,082=928). So my general goal was to find 928 documents. That is called the spot projection or point projection. It represents a loose target or general goal for the search, a bullseye of sorts. It is not meant to be a recall calculation, or F1 measure, or anything like that. It is just a standard baseline for quality control purposes that many legal searchers use, not just me. It is, however, not part of the standard KO software or predictive coding design. I just use the random sample they generate for that secondary purpose.
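The baseline calculation really is just simple math, as this short sketch shows (my own illustration of the arithmetic, nothing vendor-specific):

```python
# Baseline quality-control calculation: prevalence and point projection.
relevant_found = 2        # relevant documents found in the random sample
sample_n = 1507           # size of the random sample
collection = 699_082      # total documents in the dataset

prevalence = relevant_found / sample_n      # 0.00132714..., about 0.13%
point_projection = prevalence * collection  # about 928 documents
print(f"{prevalence:.4%} prevalence -> {point_projection:.0f} projected relevant documents")
```

The point projection of roughly 928 documents is the bullseye; the confidence interval, discussed next, supplies the surrounding target circles.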

The KO random sampling is for an entirely different purpose of creating a machine training set for the predictive coding type algorithms to work. This is an important distinction to understand that many people miss. David Lewis had to explain that basic distinction to me many times before I finally got it. This distinction in the use of random samples is basic to all information science search, and is not at all unique to KO’s Inview.

You need to be aware that there may well be more or fewer than the spot projection number of relevant documents in the collection (here 928). This is because of the limitations inherent in all random sampling statistics: the confidence intervals and levels. Here we used a confidence level of 95% and a confidence interval of +/- 3%. With a 3% confidence interval, there could (or so I thought, see the important correction below by William Webber) be as many as 21,881 relevant documents (699,082*3.13%), or there could be no more relevant documents at all (just the 2 already found, since you can’t have a negative percentage, i.e., you cannot have -2.87%). Those extreme numbers are, however, highly unlikely, especially considering the prevalence factors.

I presented the above to William Webber, an information scientist whose excellent work in the field of legal search and statistics I have described before. I asked Webber to evaluate my math and analysis. He was kind enough to provide the following comments he allowed me to include here:

On the width of the actual confidence interval, you can’t directly apply the +/- 3%, as it refers to the worst case width, that is, when estimated prevalence is 50%. For an estimate prevalence of 0.13% on a sample of 1,507, the exact 95% confidence interval is [0.016%, 0.479%]. Note that this is not simply a +/- interval; it is wider on the high side than on the low side. (Essentially, in a sample of 1,507, the chance that a true prevalence of 0.479% would produce a sample yield of 2 or fewer is 2.5%, and the chance that a true prevalence of 0.016% would produce a sample yield of 2 or more is 2.5%; thus, we have a (100 – 2.5 – 2.5) = 95% interval.) So the interval is between 112 and 3,345 relevant documents in the collection. (bold added)

I clarified with William that he is saying that with the .13% prevalence we have here (a/k/a density of relevant documents), and the 95% confidence level we are using, the range of probable relevant documents is not from 2 to 21,881, as I had thought, but rather from 112 to 3,345 relevant documents (.479%*699,082=3,345 (Webber is using exact numbers, not rounded off as shown here, which explains the small divergence, i.e., 3,345, and not 3,349)).

The spot or point projection bullseye I made of 928 relevant documents remains unchanged (.13%*699,082=928). I had gotten that right. I just had not understood the variable target circles under the bell curve of probability, which Webber calculated for me as shown below, assuming a sample size of 1,507.

[Figure: end-to-end width of a 95% Wald approximate confidence interval for different sample prevalences]

The mistake I was making, and also made in my Random Sample Calculations essay (which I’m proud to say Webber complimented), was to simply add or subtract 3% from the spot target projection. I had assumed that the +/- 3% interval meant that you simply added or subtracted 3% from the prevalence rate. Thus, in this example, I added 3% to the .13% prevalence we have here to calculate the high end, and subtracted 3% to determine the low end. That was a mistake.

Fortunately for us searchers, it does not work that way. You do not simply add or subtract 3% from the .13% prevalence rate to come up with a range. The target range is actually much tighter than that, providing us with more guidance on whether we are meeting our search goals. The actual range is from 0.016% to 0.479%, creating a full target of between 112 and 3,345 relevant documents. Again, this is a tempering down from .13% to .016% on the low end, and a tempering up from .13% to .479% on the high end. This is required because of the 95% confidence level and the sharply dropping bell curve that cuts off these extreme numbers from the 95% probability. As Webber puts it:

Note that there’s a difference here between the absolute width of the interval, and the width of the interval as a proportion of the point (spot) estimate. The former decreases as sample prevalence falls below 50%; the latter increases.  …  I also attach a graph of the interval width as a proportion of the point estimate (shown above).  Which of these is the correct way of looking at things depends on whether you want to say “Your Honour, we found 928 relevant documents, but another fifth of a percent of the collection might be relevant”, or “Your Honour, we found 928 relevant documents, but there could be three times that in the collection”.

William Webber responded to my questions and further explained that:

0.016% is the lower end of the confidence interval; 0.13% is the point (spot) estimate of precision. The interval in percentages is [ 0.016%, 0.479% ]; multiply this by the collection size of 699,082, and we get the interval in documents (rounded to the nearest document) of [ 112 – 3,349 ].

Now in fact I’ve been slightly sloppy here; as you point out, we’ve already seen 1,507 documents, and found that 2 of them are relevant, so thinking that way, we should say the interval is: 2 + (699,082 – 1,507) * [ 0.016%, 0.479% ] = [ 114 – 3,343 ]

(In fact, even this is not entirely exact, because the finite population means the sampling probability changes very very slightly every time we remove a document from the collection — but let’s ignore that and not give ourselves a headache.)

Hopefully you can follow that, and understand that his final parenthetical adjustment represents very minuscule numerical adjustments of no significance to our world of whole documents, and not sub-parts thereof. For an online calculator that Webber told me about, wherein you can calculate the range for yourself, please see the Binomial Confidence Intervals calculator.
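If you prefer to verify the interval yourself rather than use the online calculator, Webber's exact interval can be reproduced with a short script. This is my own illustrative sketch of the Clopper-Pearson (exact) interval, found by bisection on the binomial tail probabilities; it is not part of Inview or any vendor tool.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05, tol=1e-10):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    # Lower bound: the p at which P(X >= k) = alpha/2.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 1 - binom_cdf(k - 1, n, mid) < alpha / 2:
            lo = mid   # p too small; the chance of seeing k or more is still tiny
        else:
            hi = mid
    lower = lo
    # Upper bound: the p at which P(X <= k) = alpha/2.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > alpha / 2:
            lo = mid   # p too small; the chance of seeing k or fewer is still large
        else:
            hi = mid
    upper = lo
    return lower, upper

lower, upper = clopper_pearson(2, 1507)                  # roughly 0.016% to 0.479%
docs = (round(lower * 699_082), round(upper * 699_082))  # roughly 112 to 3,349
print(f"interval: [{lower:.3%}, {upper:.3%}] -> {docs} documents")
```

The result matches Webber's [0.016%, 0.479%] interval, and note, as he says, that it is wider on the high side than on the low side.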

I also asked William for further clarification on the low end: why was it 112 documents, and not just the 2 documents already found? Again, it is the tempering effect of the 95% confidence interval. Here is Webber’s interesting response to that question:

As to why the lower bound is not exactly 2 (that is, the 2 we’ve already seen). Well, if there were only 2 relevant documents in the entire collection of 699,082, then the chance we’d happen to sample both of them in a sample of 1,507 is (again ignoring the finite population):

[ (2 / 699,082) ^ 2 ] * [ (1 – (2 / 699,082)) ^ 1,505 ] * [ (1505!) / (1503! * 2!) ]

A slightly scary looking expression: the first line calculates the probability of sampling 2 relevant and 1,505 irrelevant documents in any particular permutation, and the second calculates the number of different permutations this can be done.

[Put another way:] that whole expression simplifies to 1505 * 1504 / 2.  The number of different ways you can choose 2 elements from 1505.  In this case, the 2 locations in the sequence of samples at which the relevant documents are found.

[Either way] [The] expression equates to a chance of 1 in 108,420. That’s not impossible (few things are impossible), but it’s so unlikely that we rule it out. And very small numbers of relevant documents are also implausible (by a related, but slightly elaborated formula). In fact, it is not until we get to 112 (or 114, if you prefer) relevant documents in the collection that the chance of a sample with 2 _or more_ relevant finally reaches 2.5%. We also rule out 2.5% at the upper end, and get a 95% confidence interval as a result.

So, it was possible that when I found 2 relevant documents in the random sample of 1,507 documents I had in fact found all of the relevant documents. But the odds against that were 108,420 to 1. That is essentially why it is very reasonable to round out, or as I have said here, temper, the improbable range I had assumed before of 2 to 21,881, down to between 112 and 3,345.
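Webber's long-odds figure is easy to sanity-check with the binomial formula. A caveat on this sketch of mine: counting the positions of the 2 relevant documents among all 1,507 draws (as below) gives a slightly different number than Webber's 1 in 108,420, which counts positions among the remaining 1,505 draws, but the order of magnitude is the same either way.

```python
import math

sample_n, collection, found = 1507, 699_082, 2
p = found / collection  # chance any single draw is one of the only 2 relevant docs

# Probability that a random sample of 1,507 catches both of the only 2
# relevant documents, ignoring the finite-population correction as Webber does.
prob_all_found = math.comb(sample_n, found) * p**found * (1 - p)**(sample_n - found)
odds_against = 1 / prob_all_found  # on the order of 1 in 108,000
print(f"about 1 in {odds_against:,.0f}")
```

Those odds are why the low end of the interval is tempered up from 2 to 112 documents.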

Generating the Seed Set for Next Predictive Coding Session Using a Hybrid Multimodal Approach

I began day two with a plan to use any reviewer’s most powerful tool, their brain, to find and identify additional documents to train Inview. My standard Search and Review plan is multimodal. By this I mean my standard is to use all kinds of search methods, in addition to predictive coding. The other methods include expert human review, the wetware of our own brains, and our unique knowledge of the case as lawyers who understand the legal issues, understand relevancy, and the parties, witnesses, custodian language, timeline, opposing counsel, deciding judge, appeals court, and all the rest of the many complexities that go into legal search.

I also include Parametric Boolean Keyword search, which is a standard type of search built into Inview and most other modern review software. This allows keyword search with Boolean logic, plus searches delimited to certain document fields and metadata.

I also include Similarity type searches using near duplication technology. For instance, if you find a relevant document, you can then search for documents similar to it. In Inview this is called Find Similar. You can even dial in a percentage of similarity. You can also do Add Associated type search methods which finds and includes all associated documents, like email family members and email threads. Again, these Similarity type search features are found in most modern review software today, not just Inview, and can be very powerful tools.

Finally, I used Concept search methods to locate good training documents. Concept searches used to be the most advanced feature of software review tools, and are present in many good review platforms by now. This is a great way to harness the ability of the computer to know about linguistic patterns in documents and related keywords that you would never think of on your own.

Under a multimodal approach all of the search methods are used between rounds to improve the seed set, and predictive coding is not used as a stand-alone feature.

My plan for this review project is to limit the input of each seed set, of course, but to be flexible on the numbers and search time required between rounds, depending upon what conditions I actually encounter. In the first few rounds I plan to use keyword searches, concept searches, and searches on high-probability and mid-probability rankings (the software’s grey area). I may use other methods depending again on how the search develops. My reviews will focus on the documents found by these searches. The data itself will dictate the exact methods and tools employed.

This multimodal, multi-search-methods approach to search is shown in the diagram below. Note IR stands for Intelligent Review, which is the KO language for predictive coding, a/k/a probabilistic coding. It stands at the top, but incorporates and includes all of the rest.

Some Vendors and Experts Disagree with Hybrid Multimodal

The multimodal approach is also encouraged by KO, which is one reason we selected KO as our preferred vendor. But not all software vendors and experts agree with the multimodal approach. Some advocate use of pure predictive coding methods alone, and do not espouse the need or desirability of using other search methods to generate seed sets. In fact, some experts and vendors even oppose the Hybrid approach, which means equal collaboration between Man and Machine. They do so because they favor the Machine! (Unlike some lawyers who go to the other extreme, distrust the machine entirely, and want to manually review everything.)

The anti-hybrid, anti-multimodal type experts would, in this search scenario and others like it, proceed directly to another machine selected set of documents. They would rely entirely on the computer judgment and computer selection of documents. The human reviewers would only be used to decide on the coding of the documents that the computer finds and instructs them to review.

That is a mere random stroll down memory lane. It is not a bona fide Hybrid approach, any more than is linear review where the humans do not rely on the computers to do anything but serve as a display vehicle. That is the style of old-fashioned e-discovery where lawyers and paralegals simply do a manual linear review on a computer, but without any real computer assistance.

Hybrid for me means use of both the natural intelligence of humans, namely skilled attorneys with knowledge of the law, and the artificial intelligence of computers, namely advanced software with ability to learn from and leverage the human instructions and review tirelessly and consistently.

Fighting for the Rights of Human Lawyers

I was frankly surprised to find in my due diligence investigation of predictive coding type software that there are several experts who have, in my view at least, a distinct anti-human, anti-lawyer bent. They have an anti-hybrid prejudice in favor of the computer. As a result, they have designed software that minimizes the input of lawyers. By doing so they have, in their opinion, created a pure system with better quality controls and less likelihood of human error and prejudice. Humans are weak-minded and tire easily. They are inconsistent and make mistakes. They go on and on about how their software prevents a lawyer from gaming the system, either intentionally or unintentionally. Usually they are careful in how they say that, but I have become sensitized after many such conversations and learned to read between the lines and call them on it.

These software designers want to take lawyers and other mere humans out of the picture as much as possible. They think in that way they will insulate their predictive model from bias. For instance, they want to prevent untrustworthy humans, especially tricky lawyer types, from causing the system to focus on one aspect of the relevancy topic to the detriment of another. They claim their software has no bias and will look for all aspects of relevancy in this manner. (They try to sweep under the carpet the fact, which they dislike, that it is the human lawyers who train the system to begin with in what is or is not relevant.) These software designers put a new spin on an old phrase, and say trust me, I’m a computer.

You usually run into this kind of attitude when talking to software designers, asking them questions about the software, and pressing for a real answer instead of the bs they often throw out. They are pretty careful about what they put into writing, as they realize lawyers are their customers, and it is never a good idea to directly insult your customers, their competence, and especially not their honesty. I happened upon an example of this in an otherwise good, collaborative publication by the EDRM on search (so we do not know who wrote this particular paragraph among the thousands in the publication): EDRM Search Guide, May 7, 2009, DRAFT v. 1.17, at page 80 of 83:

In the realm of e-discovery, measurement bias could occur if the content of the sample is known before the sampling is done. As an example, if one were to sample for responsive documents and during the sampling stage, content is reviewed, there is potential for higher-level litigation strategy to impact the responsive documents. If a project manager has communicated the cost of reviewing responsive documents, and it is understood that responsive documents should somehow be as small as possible, that could impact your sample selection. To overcome this, the person implementing the sample selection should not be provided access to the content.

See what I am talking about? Yes, it is true lawyers could lie and cheat. But it is also true that the vast majority do not. They are honest. They are careful. They do not allow higher-level litigation strategy to impact the responsive documents. They do their best to find the evidence, not hide the evidence. Any software design built on the premise of the inherent dishonesty and frailty of mind of the users is inherently flawed. It takes human intelligence out of the picture based on an excessive disdain for human competence and honesty. It also ignores the undeniable fact that the few dishonest persons in any population, be it lawyers, scientists, techs, or software designers, will always find a way to lie, cheat, and steal. Barriers in software will not stop them.

In my experience with a few information scientists, and many technology experts, many of them distrust the abilities of all human reviewers, but especially lawyers, to contribute much to the search process. (David and William, are, however, not among them.) I speculate they are like this because: (a) so many of the lawyers and lit-support people they interact with tend to be relatively unsophisticated when it comes to legal search and technology; or, (b) they are just crazy in love with computers and their own software and don’t particularly like people, especially lawyer people. I suppose they think the Borg Queen is quite attractive too. Whatever the reason, several of the predictive coding software programs on the market today that they have designed rely too much on computer guidance and random sampling to the neglect of lawyer expertise. (Yes. That is what I really think. And no, I will not name names.)

I will not be assimilated. Resistance is not futile. I am a free man!


After enduring many such experts and their pitches, I find their anti-lawyer, anti-human intelligence attitude offensive. I for one will not be assimilated into the Borg hive-mind. I will fight for the rights of human lawyers. I will oppose the borg-like software. Resistance is not futile!

The Borg-like experts design fully automated software for drones. Their software trivializes user expertise and judgment. The single-modal software search systems they promote underestimate the abilities (and honesty) of trained attorneys. They also underestimate the abilities of other kinds of search methods to find evidence, i.e., concept, similarity, and keyword searches.

I promote diversity of search methods and intelligence, but they do not. They rely too much on the computer, on random sampling, and on this one style of search. As a result, they do not properly leverage the skills of a trained attorney, nor take advantage of all types of programming.

In spite of their essentially hostile attitude to lawyers, I will try to keep an open mind. It is possible that a pure computer, pure probabilistic coding method may someday surpass my multimodal hybrid approach that still keeps humans in charge. Someday a random stroll down memory lane may be the way to go. But I doubt it.

In my opinion, legal search is different from other kinds of search. The goal of relevant evidence is inherently fuzzy. The 7±2 Rule reigns supreme in the court room, a place where most such computer geeks have never even been, much less understand. Legal search for possible evidence to use at trial will, in my opinion, always require trained attorneys to do correctly. It is a mistake to try to replace them entirely with machines. Hybrid is the only way to go.

So, after this long random introduction, and rant in favor of humanity, I finally come to the narrative itself about Day Two.

Second Day of Review (3.5 Hours)

I was disappointed at the end of the first day that I had not found more relevant documents in the first random sample. I knew this would make the search more difficult. But I wanted to stick with this hypothetical of involuntary terminations and run through multiple seed sets to see what happens. Still, when I do this again with this same data slice, and that is the current plan for the next set of trainees, I will use another hypothetical, one where I know I will find more hits (higher prevalence), namely a search for privileged documents. 

I started my second day by reviewing all of the 711 documents containing the term “firing.” I had high hopes I would find emails about firing employees. I did find a couple of relevant emails, but not many. Turns out an energy company like Enron often used the term firing to refer to starting up coal furnaces and the like. Who knew? That was a good example of the flexibility of language and the limitations of keyword search.

I had better luck with “terminat*” within 10 words of “employment.” I sped through the search results by ignoring most of the irrelevant documents, and not taking time to mark them (although I did mark a few for training purposes). I found several relevant documents, and even found one I considered Highly Relevant. I marked them all and included them for training.

Next I used the “find similar” searches to expand upon the documents already located and marked as relevant documents. This proved to be a successful strategy, but I still had only found 26 relevant documents. It was late, so I called it a night. (It is never good to do this kind of work without rest, unless absolutely required.)  I estimate my time on this second day of the project at three and a half hours.

To be continued . . . .


Good, Better, Best: a Tale of Three Proportionality Cases – Part Two

April 15, 2012

Continuation of Part One of Good, Better, Best: a Tale of Three Proportionality Cases.

The Best Case: DCG Systems

Compared to I-Med Pharma and U.S. ex rel McBride, DCG Systems is the best of the lot. DCG Sys., Inc. v. Checkpoint Techs, LLC, 2011 WL 5244356 (N.D. Cal. Nov. 2, 2011). It is better than the rest because of timing. The issue of proportionality of discovery was raised in DCG Systems at the beginning of the case. It was raised at the 26(f) conference and 16(b) hearing as part of discovery plan discussions. That is what the rules intended. Proportionality protection requires prompt, diligent action.

In I-Med Pharma the party responding to discovery waited to take action until after a stipulation and order to review 64,382,929 hits covering 95 Million pages. In U.S. ex rel McBride the party responding to discovery waited until after the email of 230 custodians had been produced, and, in the words of Judge Facciola, a king’s ransom had already been paid.

The lesson is clear. Be a good little lawyer hacker. Be fast, be bold, and be open to impacting discovery in a proportional way. Impactful, Fast, Bold, Open, Values: Guidance of the “Hacker Way.” Timing is everything, in law and in life. Are we not all trapped in an hour-glass? There is no getting out!

Timing and Rule 26(g)

A key lesson of these three cases is that timing is everything. Consider proportionality from the get go, and remember that it is not only based on the protective order rule, 26(b)(2)(C), it is based on the rule governing a requesting party’s signing a discovery request. I am talking about the Rule 11 of discovery, Rule 26(g)(1)(B)(iii):

(g) Signing Disclosures and Discovery Requests, Responses, and Objections.

(1) Signature Required; Effect of Signature. Every disclosure under Rule 26(a)(1) or (a)(3) and every discovery request, response, or objection must be signed by at least one attorney of record in the attorney’s own name—or by the party personally, if unrepresented—and must state the signer’s address, e-mail address, and telephone number. By signing, an attorney or party certifies that to the best of the person’s knowledge, information, and belief formed after a reasonable inquiry: …

(B) with respect to a discovery request, response, or objection, it is: …

(iii) neither unreasonable nor unduly burdensome or expensive, considering the needs of the case, prior discovery in the case, the amount in controversy, and the importance of the issues at stake in the action.

Judge Paul Grimm calls Rule 26(g) the most overlooked and misunderstood of all of the rules of civil procedure. That is the fault of us lawyers, and it is also the fault of our judges. Rule 26(g) in subsection (3) requires a court, “on its own,” to sanction anyone who signs a discovery request in violation of the rule. This means that judges must impose sanctions, on their own initiative, whenever they see a disproportionate discovery request. There is no discretion given to judges about this. The rule does not say “may” impose sanctions. It says the court “on motion or on its own, must impose an appropriate sanction on the signer.” Yet, in my thirty-two years of legal practice in federal court, I have never once seen this done by a judge. Have you?

Here is the language of subsection (3) of Rule 26(g). It should be in all of your discovery briefs.

(3) Sanction for Improper Certification. If a certification violates this rule without substantial justification, the court, on motion or on its own, must impose an appropriate sanction on the signer, the party on whose behalf the signer was acting, or both. The sanction may include an order to pay the reasonable expenses, including attorney’s fees, caused by the violation.

Lawyers need to start including this rule in their initial analysis of any discovery request. If one side refuses to engage in cooperative discussions to narrow discovery requests, if, for instance, they refuse to limit discovery to the actual factual issues in the case, then Rule 26(g) must be squarely brought to the attention of the supervising judge. There is no time to wait. We are all trapped in an hour-glass, and a billable one at that!

As Judge Waxse has pointed out, there is a clear path in the rules to deal with non-cooperators, and Rule 26(g) is one of the road signs on that path. See: Judge David Waxse on Cooperation and Lawyers Who Act Like Spoiled Children. But you have to time your motions. You have to seek protection before you pay the piper, but after you make a good faith effort to cooperate. Timing is everything.

Model Patent Order

The Patent Bar is trying an experiment to control runaway e-discovery costs in patent litigation. They have a committee composed of a handful of patent lawyers and a few key judges who are well-known in patent law: Chief Judge James Ware (ND Cal), Judge Virginia Kendall (ND Ill), Magistrate Judge Chad Everingham (ED Tex), and Chief Judge Randall Rader (Fed. Cir.). They have come up with what they call a Model Order Limiting E-Discovery in Patent Cases. They explain that the Model Order:

… is intended to be a helpful starting point for district courts to use in requiring the responsible, targeted use of e-discovery in patent cases. The goal of this Model Order is to promote economic and judicial efficiency by streamlining ediscovery, particularly email production, and requiring litigants to focus on the proper purpose of discovery—the gathering of material information—rather than permitting unlimited fishing expeditions. It is further intended to encourage discussion and public commentary by judges, litigants, and other interested parties regarding e-discovery problems and potential solutions.

The Model Order is inspired by Rule 30, which presumptively limits cases to ten depositions and seven hours per deposition. The Committee notes that email is the biggest time-waster in patent litigation (well, except for Qualcomm of course), and so the Model Order uses this same limiting approach for email discovery. It limits initial e-discovery to email from five custodians and five keywords per custodian. The Committee is careful to note that “the parties may jointly agree to modify these limits or request court modification for good cause.” Even if they do not agree, or there is no order permitting more email discovery, a requesting party is still entitled to more if they pay for it. This is their approach to proportionality:

This is not to say a discovering party should be precluded from obtaining more e-discovery than agreed upon by the parties or allowed by the court. Rather, the discovering party shall bear all reasonable costs of discovery that exceeds these limits. This will help ensure that discovery requests are being made with a true eye on the balance between the value of the discovery and its cost.

The Model Order also addresses concerns regarding waiver of attorney-client privilege and work product protection in order to minimize human pre-production review. It does so by including Rule 502(d) non-waiver language into the standard order. The Order itself is pretty short and simple, which is one of its virtues, so I reproduce it here in its entirety:


Plaintiff,
v.
Defendant.

[MODEL] ORDER REGARDING E-DISCOVERY IN PATENT CASES

The Court ORDERS as follows:

1. This Order supplements all other discovery rules and orders. It streamlines Electronically Stored Information (“ESI”) production to promote a “just, speedy, and inexpensive determination” of this action, as required by Federal Rule of Civil Procedure 1.

2. This Order may be modified for good cause. The parties shall jointly submit any proposed modifications within 30 days after the Federal Rule of Civil Procedure 16 conference. If the parties cannot resolve their disagreements regarding these modifications, the parties shall submit their competing proposals and a summary of their dispute.

3. Costs will be shifted for disproportionate ESI production requests pursuant to Federal Rule of Civil Procedure 26. Likewise, a party’s nonresponsive or dilatory discovery tactics will be cost-shifting considerations.

4. A party’s meaningful compliance with this Order and efforts to promote efficiency and reduce costs will be considered in cost-shifting determinations.

5. General ESI production requests under Federal Rules of Civil Procedure 34 and 45 shall not include metadata absent a showing of good cause. However, fields showing the date and time that the document was sent and received, as well as the complete distribution list, shall generally be included in the production.

6. General ESI production requests under Federal Rules of Civil Procedure 34 and 45 shall not include email or other forms of electronic correspondence (collectively “email”). To obtain email parties must propound specific email production requests.

7. Email production requests shall only be propounded for specific issues, rather than general discovery of a product or business.

8. Email production requests shall be phased to occur after the parties have exchanged initial disclosures and basic documentation about the patents, the prior art, the accused instrumentalities, and the relevant finances. While this provision does not require the production of such information, the Court encourages prompt and early production of this information to promote efficient and economical streamlining of the case.

9. Email production requests shall identify the custodian, search terms, and time frame. The parties shall cooperate to identify the proper custodians, proper search terms and proper timeframe.

10. Each requesting party shall limit its email production requests to a total of five custodians per producing party for all such requests. The parties may jointly agree to modify this limit without the Court’s leave. The Court shall consider contested requests for up to five additional custodians per producing party, upon showing a distinct need based on the size, complexity, and issues of this specific case. Should a party serve email production requests for additional custodians beyond the limits agreed to by the parties or granted by the Court pursuant to this paragraph, the requesting party shall bear all reasonable costs caused by such additional discovery.

11. Each requesting party shall limit its email production requests to a total of five search terms per custodian per party. The parties may jointly agree to modify this limit without the Court’s leave. The Court shall consider contested requests for up to five additional search terms per custodian, upon showing a distinct need based on the size, complexity, and issues of this specific case. The search terms shall be narrowly tailored to particular issues. Indiscriminate terms, such as the producing company’s name or its product name, are inappropriate unless combined with narrowing search criteria that sufficiently reduce the risk of overproduction. A conjunctive combination of multiple words or phrases (e.g., “computer” and “system”) narrows the search and shall count as a single search term. A disjunctive combination of multiple words or phrases (e.g., “computer” or “system”) broadens the search, and thus each word or phrase shall count as a separate search term unless they are variants of the same word. Use of narrowing search criteria (e.g., “and,” “but not,” “w/x”) is encouraged to limit the production and shall be considered when determining whether to shift costs for disproportionate discovery. Should a party serve email production requests with search terms beyond the limits agreed to by the parties or granted by the Court pursuant to this paragraph, the requesting party shall bear all reasonable costs caused by such additional discovery.

12. The receiving party shall not use ESI that the producing party asserts is attorney-client privileged or work product protected to challenge the privilege or protection.

13. Pursuant to Federal Rule of Evidence 502(d), the inadvertent production of a privileged or work product protected ESI is not a waiver in the pending case or in any other federal or state proceeding.

14. The mere production of ESI in a litigation as part of a mass production shall not itself constitute a waiver for any purpose.

This Model Order is a terrific first experiment to rein in disproportionate e-discovery expenses and stop wasting everybody’s time. Still, the plaintiff in DCG Systems did not like it and tried to avoid its application in its case. I have my own criticisms of the Model Order, including the obvious one of reliance on five blind keywords, and that puzzling paragraph five on metadata, but I will save those for the conclusion.
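The term-counting arithmetic in paragraph 11 of the Order is mechanical enough to sketch in code. The following is purely illustrative and not part of the Order itself; the function name and the crude stem-based check for “variants of the same word” are my own simplifications. It counts how many of the five allotted search terms a single query consumes under the conjunctive/disjunctive rules:

```python
def count_search_terms(query_words, connector):
    """Count how many of the five allotted search terms a query consumes.

    connector: "AND" (conjunctive, narrows the search) or
               "OR"  (disjunctive, broadens the search).
    """
    if connector == "AND":
        # A conjunctive combination ("computer" and "system") counts as one term.
        return 1
    # A disjunctive combination counts each word separately, unless the words
    # are variants of the same word -- crudely modeled here as sharing a stem
    # after stripping a trailing "s".
    stems = {w.lower().rstrip("s") for w in query_words}
    return len(stems)

print(count_search_terms(["computer", "system"], "AND"))    # 1
print(count_search_terms(["computer", "system"], "OR"))     # 2
print(count_search_terms(["infringe", "infringes"], "OR"))  # 1 (variants)
```

The asymmetry is the point of the rule: narrowing connectors are free, while broadening connectors eat into the five-term budget.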

DCG Sys., Inc. v. Checkpoint Techs, LLC

DCG Systems is a garden variety patent case between two companies with competing patent rights. It is not the other very common type of patent case, where a small patent troll with only a little ESI sues a big company with lots of ESI; those are called NPE (non-practicing entity) cases. This means that in DCG Systems both companies could find e-discovery equally troubling. The plaintiff, DCG Systems, Inc., argued that the Model Order should not be applied to its case because the Order was primarily designed for the David and Goliath, troll-versus-big-company NPE type of patent case.

United States Magistrate Judge Paul S. Grewal did not agree:

The court is not persuaded by DCG’s argument for at least two reasons. First, although the undersigned will not presume to know whether Chief Judge Rader or any of the esteemed members of the subcommittee were focused exclusively on reducing discovery costs in so-called “NPE” cases, there is nothing in the language of the Chief Judge’s speech or the text of the model order so limiting its application. Second, and more fundamentally, there is no reason to believe that competitor cases present less compelling circumstances in which to impose reasonable restrictions on the timing and scope of email discovery. To the extent DCG faces unique or particularly undue constraints as a result of the limitations, it remains free, under the Model Order, to seek relief from the court. But in general copying and the availability of an injunction are issues that are impacted by such restrictions no more than the myriad of other issues (e.g., inducement, state of the art, willfulness) that are present in just about all patent cases. And if competitor cases such as this lack the asymmetrical production burden often found in NPE cases, so that two parties might benefit from production restrictions, the Model Order would seem more appropriate, not less.

I know nothing about patent cases, but I do know e-discovery, and Judge Grewal’s argument sounds compelling. Judge Grewal ends his opinion with the following cautionary comment, words that I again completely agree with:

Perhaps the restrictions of the Model Order will prove undue. In that case, the court is more than willing to entertain a request to modify the limits. But only through experimentation of at least the modest sort urged by the Chief Judge will courts and parties come to better understand what steps might be taken to address what has to date been a largely unchecked problem.

We have to take new steps to control e-discovery costs, to make them proportionate. That is why I came up with my Bottom Line Driven Proportional Review approach. But the Patent Committee approach has the advantage of far greater simplicity. Moreover, little or no skill in e-discovery is required to implement this proportionality reform. Still, I am troubled by the reliance on Go Fish keyword search methods. See Child’s Game of “Go Fish” is a Poor Model for e-Discovery Search. The lack of precision and recall in blind keyword search makes this method both expensive and ineffective.

Methods aside, the Model Order Limiting E-Discovery in Patent Cases is an important first step in litigation reform. The DCG Systems case shows timely application of the Model Order. The opinion also includes good language explaining the order and why courts should try using it to attain proportionality. (Its use is at the discretion of the presiding judge.) You may want to use Judge Grewal’s language in DCG Systems in your case memos, patent or otherwise, to show the need to control e-discovery:

Critically, the email production requests must focus on particular issues for which that type of discovery is warranted. The requesting party must further limit each request to a total of five search terms and the responsive documents must come from only a defined set of five custodians. These restrictions are designed to address the imbalance of benefit and burden resulting from email production in most cases. As Chief Judge Rader noted in his recent address in Texas on the “The State of Patent Litigation” in which he unveiled the Model Order, “[g]enerally, the production burden of expansive e-requests outweighs their benefits. I saw one analysis that concluded that .0074% of the documents produced actually made their way onto the trial exhibit list – less than one document in ten thousand. And for all the thousands of appeals I’ve evaluated, email appears more rarely as relevant evidence.”

Remember that statistic and use it. Only .0074% of e-docs discovered ever make it onto a trial exhibit list, much less ever get used to make a difference in a case. That is why in my Secrets of Search article, Part Three, I say Relevant Is Irrelevant and point out the old trial psychology rule of 7±2, to argue for higher culling rates in e-discovery search.
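As a quick sanity check (my arithmetic, not anything from the opinion or the speech), the statistic does work out to less than one document in ten thousand:

```python
# Chief Judge Rader's figure: 0.0074% of produced documents reach the
# trial exhibit list. Express the percentage as a fraction.
rate = 0.0074 / 100

# Documents reaching the exhibit list per ten thousand produced.
print(round(rate * 10_000, 2))   # 0.74 -- i.e., less than one in ten thousand

# Implied exhibit-list yield from a hypothetical million-document production.
print(round(rate * 1_000_000))   # 74
```

In other words, even a million-document production implies only a few dozen documents that will ever matter at trial, which is the whole argument for aggressive culling.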

More Authorities on Proportionality

Want to learn more about proportionality? Don’t rely on a keyword search to find the cases. As seen, they often do not even use the word proportionality. Try these additional articles, cases, and Mr. Shepard instead. Mr. Google will help you find still more.

  • The Sedona Conference® Commentary on Proportionality in Electronic Discovery.
  • Bottom Line Driven Proportional Review.
  • Discovery As Abuse.
  • An Old Case With a New Opinion Demonstrating Perfect Proportionality.
  • Rimkus Consulting Group v. Cammarata, 688 F. Supp. 2d 598, 613 (S.D. Tex. 2010) (the Rules require that the parties engage in “reasonable efforts” and what is reasonable “depends on whether what was done – or not done – was proportional to that case…”)
  • Moody v. Turner Corp., Case No. 1:07-cv-692 (S.D. Ohio 2010) (“…the mere availability of such vast amounts of electronic information can lead to a situation of the ESI-discovery-tail wagging the poor old merits-of-the-dispute dog.”)
  • Dilley v. Metro. Life Ins. Co., 256 F.R.D. 643, 644 (N.D. Cal. 2009) (“The court must limit discovery if it determines that ‘the burden or expense of the proposed discovery outweighs its likely benefit,’ considering certain factors including ‘the importance of the issues at stake in the action, and the importance of the discovery in resolving the issues.’” ) (quoting FED. R. CIV. P. 26(b)(2)(C)(iii))
  • Averett v. Honda of Am. Mfg., Inc., No. 2:07-cv-1167, 2009 WL 799638, at *2 (“the court always has a duty to limit discovery under Rule 26(b)(2)(C)(i)-(iii)”)
  • Wood v. Capital One Services, LLC, No. 5:09-CV-1445 (NPM/DEP), 2011 WL 2154279, at *1-3, *7 (N.D.N.Y. 2011) (the “rule of proportionality” dictated that the plaintiff’s motion be denied “without prejudice to his right to renew the motion to compel in the event he is willing to underwrite the expense associated with any such search.”)
  • Thermal Design, Inc. v. Guardian Building Products, Inc., No. 08-C-828 (E.D. Wis. 2011) (Judge refused to approve plaintiff’s electronic fishing expedition simply because the defendant had the financial resources to pay for the searches. The financial resources of the defendant are not tantamount to good cause under FRCP 26(b)(2)(C).)
  • General Steel Domestic Sales, LLC v. Chumley, No. 10-cv-01398 (D. Colo. 2011) (Judge rejected defendant’s request for the production of every recorded sales call on plaintiff’s database for a two-year period because it would take four years to listen to the calls to identify potentially responsive information.)
  • Daugherty v. Murphy, No. 1:06-cv-0878-SEB-DML, 2010 WL 4877720, at *5 (S.D. Ind. 2010) (The cost and burden of the additional production outweighed the benefit. The defendant’s sworn testimony on burden and cost was credible.)
  • Willnerd v. Sybase, 2010 U.S. Dist. LEXIS 121658 (D. Idaho 2010) (“… a search of the employees’ e-mails would amount to the proverbial fishing expedition — an exploration of a sea of information with scarcely more than a hope that it will yield evidence to support a plausible claim of defamation. … In employing the proportionality standard of Rule 26(b)(2)(C), as suggested by Willnerd, the Court balances Willnerd’s interest in the documents requested, against the not-inconsequential burden of searching for and producing documents.”)
  • Rodriguez-Torres v. Gov. Dev. Bank of P.R., 265 F.R.D. 40 (D.P.R. 2010) (“… the Court determines that the ESI requested is not reasonably accessible because of the undue burden and cost. The Court finds that $35,000 is too high of a cost for the production of the requested ESI in this type of action. Moreover, the Court is very concerned over the increase in costs that will result from the privilege and confidentiality review that Defendant GDB will have to undertake on what could turn out to be hundreds or thousands of documents.”)
  • Madere v. Compass Bank, 2011 U.S. Dist. LEXIS 124758 (W.D. Tex. 2011) (“As the cost to restore Compass Bank’s backup tapes “outweighs its likely benefit,” especially in light of the amount in controversy, the Court DENIES Madere’s request for production.”)
  • Convolve, Inc. v. Compaq Computer Corp., 223 F.R.D. 162 (S.D.N.Y. 2004) (The production request “would require an expenditure of time and resources far out of proportion to the marginal value of the materials to this litigation.”)
  • United Central Bank v. Kanan Fashions, Inc., 2010 U.S. Dist. LEXIS 83700 (N.D. Ill. 2010) (Restrictive date range required, but further protection from excessive burden denied due to failure to support the contentions of high cost to comply with specific facts.)
  • High Voltage Beverages, LLC v. Coca-Cola Co., 2009 U.S. Dist. LEXIS 88259 (W.D.N.C. 2009) (“Under Rule 26(b)(2)(C)(i), the court finds that requiring defendant to sift sand for documents it has already produced would be unreasonably duplicative of earlier efforts and that the material contained therein is likely available from other sources, to wit, an earlier production of documents. … Under Rule 26(b)(2)(C)(iii), defendant has made an unrebutted showing that the man-hours and expense of reviewing the collection would be extraordinary, and it appears to the court that the burden or expense of the proposed discovery outweighs its likely benefit. Thus, the court find that it would be disproportional to require defendant to review such information prior to producing it to plaintiff and deny plaintiff’s request.”)
  • Bassi Bellotti S.p.A. v. Transcon. Granite, Inc., 2010 U.S. Dist. LEXIS 93055 (D. Md. 2010) (“… Federal Rules do impose an obligation upon courts to limit the frequency or extent of discovery sought in certain circumstances, such as when the discovery requested is unreasonably duplicative or cumulative, or the burden or expense of the proposed discovery outweighs the likely benefit, considering the needs of the case, the importance of the issues at stake in the action, and the importance of the discovery in resolving those issues.”)
  • Call of the Wild Movie, LLC v. Does 1-1062, No. 10-455 (BAH), --- F. Supp. 2d ----, 2011 WL 996786, at *18-20 (D.D.C. 2011) (granting motion to compel because the request was narrow and the ESI requested was important, compared with an insufficient showing of undue burden.)
  • Hock Foods, Inc. v. William Blair & Co., LLC, No. 09-2588-KHV, 2011 WL 884446, at *9 (D. Kan. 2011) (Sebelius, Mag. J.) (denying in part a motion to compel in light of costs estimated between $1.2 and $3.6 million to search 12,000 gigabytes of data in order to answer an overbroad interrogatory.)
  • Diesel Mach., Inc. v. Manitowoc Crane, Inc., No. CIV 09-cv-4087-RAL, 2011 WL 677458, at *2-3 (D.S.D., 2011) (motion to compel the production of documents in native format was denied because no explanation provided on why information contained in native format was necessary to facts of case when those same documents had already been produced as PDFs).
  • Tucker v. American Intern. Group, Inc., 2012 WL 902930 (D. Conn. Mar. 15, 2012) (Plaintiff’s non-party Rule 45 subpoena to inspect hard drives asked the Court “to allow plaintiff “essentially carte blanche access to rummage through Marsh’s electronically stored information, purportedly in the hope that the needle she is looking for lurks somewhere in that haystack. … [T]he burdens of plaintiff’s proposed inspection upon Marsh outweigh the benefits plaintiff might obtain were she to obtain the emails through a Datatrack inspection. Plaintiff seeks to search, inter alia, the mirror images of eighty-three laptops — in effect, to dredge an ocean of Marsh’s electronically stored information and records in an effort to capture a few elusive, perhaps non-existent, fish. … Courts are obliged to recognize that non-parties should be protected with respect to significant expense and burden of compelled inspections under Fed. R. Civ. P. 45(c)(2)(B)(ii). … Moreover, courts have focused on the importance of the Rule 26(b)(2)(C) proportionality limit to implement fair and efficient operation of discovery. … Balancing the prospective burden to Marsh against the likely benefit to plaintiff from the proposed inspection, the Court concludes that the circumstances do not warrant compelling Marsh to endure inspection of its computer records by Datatrack.”)

Conclusion

DCG Systems, Inc. v. Checkpoint Techs, LLC is, by far, the best of the three cases, but it is still far from perfect. It embraces proportionality, and will no doubt save the parties lots of money in e-discovery, but at what cost? Litigation is about finding justice. If you lose that, you lose everything.

Rule 1 says, among other things, that litigation should be speedy and inexpensive. Limiting discovery to five keywords and five custodians will get you that. But Rule 1 also says litigation should be just. That is, after all, the whole point of litigation. In America, like most of the civilized world, we don’t just go through the motions of legal process in a fast and cursory manner. Court systems are not just an empty charade. The heart of law as we know it is due process. We decide cases on the merits, on the facts, on the evidence; not just on the whim of judges or juries. That is what justice means to us. I am concerned about arbitrary limits on e-discovery to save money, and speed things along, that do so at the price of justice.

Judge Paul S. Grewal, who decided DCG Systems, shares these concerns, I am sure. So too does the Patent Bar who adopted this Model Order, and Chief Judge Randall Rader who promotes it. They are, like all bona fide professionals in the Law, trying hard to find a proportional balance between benefit and burden, to know when enough is enough in the search for evidence. They don’t want too much, like some unscrupulous attorneys for whom e-discovery is little more than a legal tool of extortion. They don’t want too little, like some equally unscrupulous attorneys who play hide the ball. Good attorneys are like Goldilocks; they are looking for the just-right amount of e-discovery. They are looking for proportionality.

The patent judges show this concern in the pains they take to say that the five/five rule is just a starting point. They make clear that more e-discovery outside of these limits may be appropriate, that parties can always move the court for additional discovery. For instance, Judge Grewal in DCG Systems says: “Perhaps the restrictions of the Model Order will prove undue. In that case, the court is more than willing to entertain a request to modify the limits.” The Model Order shows the same concern that justice not be sacrificed at the altar of efficiency: “The Court shall consider contested requests for up to five additional custodians per producing party, upon showing a distinct need based on the size, complexity, and issues of this specific case.”

My main criticism of the case and Model Order, aside again from the bizarre comment in paragraph five against metadata, pertains to the reliance on Go Fish type keyword search. It is not so much the arbitrary limit to five keywords that bothers me, much less the limit to five custodians, which I think is fine. What bothers me about the Model Order, and bothers every other expert I have talked to, is the reliance on keyword search alone, and blind-pick keyword search at that. It should bother anyone who has read the scientific studies. The Model Order is promoting the worst kind of search: the blind keyword guessing kind. That is inadvertent, I’m sure. The lawyers and judges behind the Model Order were simply not aware of the limits of blind-guessing-based keywords. When they become aware, I assume they will consider appropriate revisions to the Model Order.

The Model Order should be reformed to require that basic metrics be shared on proposed keywords. It should require enough disclosure so that the keyword picks are not blind. A requesting party should be permitted some keyword testing before five terms are settled upon. The Order is a good start, but it needs tweaking so that the keyword searches can be more effective. I am sure there are many search experts who would help the Committee if asked. I hope they do ask, because the Patent Bar’s heart is in the right place, a proportionality place.

Now please, would someone get me out of this damn time bottle?

_________________________________________

_________________________________

________________________

________________

_______

__

Thank You!


Evidentiary Objections to Email are Key to BP Oil Spill Case

February 19, 2012

The Deepwater Horizon oil spill case is scheduled for non-jury trial in New Orleans on February 27, 2012. In re: Oil Spill by the Oil Rig “Deepwater Horizon” in the Gulf of Mexico, on April 20, 2010 (E.D. La., MDL No. 2179). This mammoth case is a consolidation of 300 lawsuits involving 120,000 people and businesses. Click here to see the full docket on Justia. The biggest case in the country proves, once again, that email is powerful evidence. You may recall news concerning email and the world’s largest oil spill back in 2010, when Congress publicized an email from a BP drilling engineer, Brian Morel. It warned that the Deepwater Horizon oil rig was a “nightmare well” that had caused the company problems in the past. Of course, there were more emails like this, but they did not all get into evidence, as this blog will explain.

Here is how the presiding Judge Carl Barbier describes the In re: Oil Spill by the Oil Rig “Deepwater Horizon” case in a recent Order:

This Multi-district Litigation (“MDL”) arises from the April 20, 2010 explosion and fire on the DEEPWATER HORIZON mobile offshore drilling unit (“MODU”), and the subsequent discharge of millions of gallons of oil into the Gulf of Mexico. The consolidated cases include claims for the death of eleven individuals, numerous claims for personal injury, and various claims for environmental and economic damages.

Order dated January 26, 2012, Granting in Part and Denying in Part Transocean’s and BP’s Cross-Motions for Partial Summary Judgment Regarding Contractual Indemnity

The purpose of the upcoming trial is to assign and apportion blame among the many defendants sued in these cases. The main corporate defendants include BP, rig owner Transocean, and Halliburton, which provided cementing services. As a side note, BP recently accused Halliburton of spoliation by intentional destruction of computer records and has, of course, moved for sanctions. Anadarko Petroleum, one of BP’s partners in the well, is also involved in the upcoming trial. Plaintiffs include individuals and businesses, represented by a plaintiffs’ steering committee, as well as many states and the U.S. government.

Smoking Gun Emails

Emails will certainly be part of the 7±2 documents that the trial lawyers of all parties will build their arguments around. See: Secrets of Search, Part III.  In addition to the “nightmare well” email that will be the centerpiece of every attorney’s opening statement, except for BP, many other emails were found that will be featured as evidence. Three smoking gun type emails were subject to a motion in limine to try to have them excluded.

One of the emails subject to the motion to exclude was a pre-accident 2009 email where an Anadarko employee expressed disappointment about BP. He complained that BP had not disclosed some information related to tropical storm damage caused to a different Transocean rig. Another Anadarko employee responded with an email saying: “Bummer. I’m amazed that they did not tell us about this.” Bummer and amazing make great touchstones for attorney arguments about cover-up and fault. This is just the kind of email you need to build a persuasive pitch to pass all blame to BP. Mix in the nightmare well, and you have a real bummer for BP’s attempt to share the blame with other defendants.

But wait, there’s more. They also found a June 2010 email from a Halliburton employee, Ryan Haire, which questioned the company’s reported findings regarding some tests on the well.

But wait, there’s still more. They also found a February 2010 email from a BP geologist to a friend referring to the Deepwater Horizon rig and saying: “thanks for the shitty cement job.” Oh, this is a particularly good one for lawyers because of the colorful language.

These emails could be used to argue cover-up and negligence, despite what the witnesses later say under oath. Trial lawyers could now say it was a shitty nightmare well that BP knew was an amazing bummer. Powerful stuff, especially with a jury who might later hear damage claims. BP knew that it had to try to keep out these three emails, so it made an all-out effort with a motion in limine (one of dozens).

Just because you discover email, and it’s hot, and would be part of anyone’s 7±2, does not mean that the email will actually be considered. Never forget that the whole purpose of e-discovery is not just to find evidence, it is to get it admitted at trial. If it does not get into the record, it cannot be part of the 7±2 based argument. All three of the emails quoted above have been excluded from evidence by a February 8, 2012, Order of U.S. Magistrate Judge Sally Shushan.

Judge Shushan excluded the first two emails on the basis of hearsay. The author of one of the emails, Ryan Haire, testified that he really had no firsthand knowledge of the test findings that his email criticized. It was just what someone else had told him. Hearsay objection sustained.

As to the “thanks for the shitty cement job” email, it was excluded on even more interesting grounds. According to news reports, Halliburton argued that the email was no more than a casual, tasteless joke made by one friend to another. Judge Shushan agreed. She concluded that there was no showing that the email was a “business record” of the cement work that could be used as a basis to introduce the email into evidence. Judge Shushan explained:

It must be demonstrated that the e-mail at issue was not sent or received casually, nor was its creation a mere isolated incident.

Hmm. You have to prove that the email was not casual? I guess this shows the “just kidding” objection sometimes works and can be used in a last-ditch attempt to exclude email. Usually that kind of “didn’t really mean it” argument does not work: the email is allowed into evidence, and the opponent may offer other testimony that it was just a joke, leaving the trier of fact to determine the truth. The problem is, most juries lack a sense of humor, especially when people are killed and the lives and businesses of thousands of people are ruined. So I can see why BP did not want to go that route.

Defense counsel here must have made a very compelling argument, probably concerning unfair prejudice. I suspect their argument also relied upon contextual email and other emails between these friends showing that is how these boys actually talked to each other. Real jokers, and tasteless ones at that, as BP smartly admitted.

Yes, it is amazing what people say in electronic communications like email, not to mention text messages, private Facebook posts, and the like. Email remains king, as the Deepwater case shows, but so too do evidentiary objections. Also see the LTN article on Google’s recent attempts to exclude emails on the basis of privilege in its billion-dollar patent battle with Oracle. Here is the Federal Circuit Court of Appeals Order denying Google’s Petition for Writ of Mandamus.

Lorraine v. Markel

Everybody should know Judge Paul Grimm’s Lorraine opinion, and should study it again before they go to trial. Lorraine v. Markel American Insurance Company, 241 F.R.D. 534 (D.Md. 2007). It is the best treatise on rules of evidence governing ESI. Who knows, you just might be able to devise an argument to keep an email out of evidence that would otherwise sink your ship.

Consider Judge Grimm’s summary at page nine of the one-hundred-and-one-page decision in Lorraine of the kinds of evidentiary issues that you should consider:

Whenever ESI is offered as evidence, either at trial or in summary judgment, the following evidence rules must be considered: (1) is the ESI relevant as determined by Rule 401 (does it have any tendency to make some fact that is of consequence to the litigation more or less probable than it otherwise would be); (2) if relevant under 401, is it authentic as required by Rule 901(a) (can the proponent show that the ESI is what it purports to be); (3) if the ESI is offered for its substantive truth, is it hearsay as defined by Rule 801, and if so, is it covered by an applicable exception (Rules 803, 804 and 807); (4) is the form of the ESI that is being offered as evidence an original or duplicate under the original writing rule, or if not, is there admissible secondary evidence to prove the content of the ESI (Rules 1001-1008); and (5) is the probative value of the ESI substantially outweighed by the danger of unfair prejudice or one of the other factors identified by Rule 403, such that it should be excluded despite its relevance.

Conclusion

Email and other electronic evidence, including video, are powerful forces in courtrooms today. But just because you discovered relevant ESI does not mean you will be able to use it or show it to the jury. It might, for instance, not be authentic, as some claim about this genuinely hilarious video.

The ninth step in the EDRM model, Presentation, is the home of complex, sometimes arcane evidence rules and unexpected rulings. The recent order by Judge Shushan in the largest case in the country shows that these evidentiary considerations and arguments are an essential part of e-discovery practice.

Objections to admissibility can come at you from many directions. For instance, in another order in the Deepwater case, Judge Shushan rejected an objection to another email based on spousal privilege. She held that the email was not covered by this privilege because a husband had no reasonable expectation of privacy in emails sent to his wife from his work computer. See EvidenceProf Blog.

You neglect evidentiary analysis at your peril. Be prepared and do not be surprised when you hear some new outbursts at trial when you move to admit email into evidence, such as:

Defense Counsel: Objection your honor. Counsel has not proven that this e-mail was not sent or received casually, nor that its creation was just a mere isolated incident.

Plaintiff’s Counsel: But your honor, the witness testified that he was wearing a tuxedo when he sent this email. That proves it was not sent casually. Further, it could not have been isolated because our deduplication software found five copies of this email.

Get ready for some interesting appeals too.


“The Hacker Way” – What e-Discovery Can Learn From Facebook’s Culture and Management

February 5, 2012

Facebook’s regulatory filing for its initial public stock offering included a letter to potential investors by 27-year-old billionaire Mark Zuckerberg. The letter describes the culture and approach to management that he follows as CEO of Facebook. Zuckerberg calls it the Hacker Way. Mark did not invent this culture. In a way, it invented him. It molded him and made him and Facebook what they are today. This letter reveals the secrets of Mark’s success and establishes him as the current child prodigy of the Hacker Way.

Too bad most of the CEOs in the e-discovery industry have not read the letter, much less understand how Facebook operates. They are clueless about the management ethic it takes to run a high-tech company.

An editorial in Law Technology News explains why I think most of the CEOs in the e-discovery software industry are just empty suits. They do not understand modern software culture. They think the Hacker Way is a security threat. They are incapable of creating insanely great software. They cannot lead with the kind of inspired genius that the legal profession now desperately needs from its software vendors to survive the data deluge. From what I have seen, most of the pointy-haired management types that now run e-discovery software companies should be thrown out. They should be replaced with Hacker-savvy management before their once proud companies go the way of the Blackberry. The LTN article has more details on the slackers in silk suits. Vendor CEOs: Stop Being Empty Suits & Embrace the Hacker Way. This essay, a partial rerun from a prior blog, gives you the background on Facebook’s Hacker Way.

Hacker History

The Hacker Way tradition and way of thinking has been around since at least the sixties. It has little or nothing to do with illegal computer intrusions. Moreover, to be clear, NSA leaker Edward Snowden is no hacker. All he did was steal classified information, put it on a thumb drive, meet the press, and then flee the country, to communist dictatorships no less. That has nothing to do with the Hacker Way and everything to do with politics.

The Hacker Way – often called the hacker ethic – has nothing to do with politics. It did not develop in government like the Internet did, but in the hobby of model railroad building and MIT computer labs. This philosophy is well-known and has influenced many in the tech world, including the great Steve Jobs (who never fully embraced its openness doctrines), and Steve’s hacker friend, Steve Wozniak, the laughing Yoda of the Hacker Way. The Hacker approach is primarily known to software coders, but can apply to all kinds of work. Even a few lawyers know about the hacker work ethic and have been influenced by it.

Who is Mark Zuckerberg?

We have all seen a movie version of Mark Zuckerberg in The Social Network. The real Zuckerberg, by the way, will still own 56.9% voting control of Facebook after the public offering later this year. But who is Mark Zuckerberg really? His Facebook page may reveal some of his personal life and ideas, but how did he create a Hundred Billion Dollar company so fast?

How did he change the world at such a young age? There are now over 850 million people on Facebook with over 100 billion connections. On any one day there are over 500 million people using Facebook. These are astonishing numbers. How did this kind of creative innovation and success come about? What drove Mark and his hacker friends to labor so long, and so well? The letter to investors that Mark published gives us a glimpse into the answer, and into the real Mark Zuckerberg. Do I have your full attention yet?

The Hacker Way philosophy described in the investor letter explains the methods used by Mark Zuckerberg and his team to change the world. Regardless of who Mark really is, greedy guy or saint (or like Steve Jobs, perhaps a strange combination of both), Mark’s stated philosophy is very interesting. It has applications to anyone who wants to change the world, including those of us trying to change the law and e-discovery.

Hacker Culture and Management

Mark’s letter to investors explains the unique culture and approach to management inherent in the Hacker Way that he and Facebook have adopted.

As part of building a strong company, we work hard at making Facebook the best place for great people to have a big impact on the world and learn from other great people. We have cultivated a unique culture and management approach that we call the Hacker Way.

The word “hacker” has an unfairly negative connotation from being portrayed in the media as people who break into computers. In reality, hacking just means building something quickly or testing the boundaries of what can be done. Like most things, it can be used for good or bad, but the vast majority of hackers I’ve met tend to be idealistic people who want to have a positive impact on the world.

The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it — often in the face of people who say it’s impossible or are content with the status quo.

Hackers try to build the best services over the long term by quickly releasing and learning from smaller iterations rather than trying to get everything right all at once. To support this, we have built a testing framework that at any given time can try out thousands of versions of Facebook. We have the words “Done is better than perfect” painted on our walls to remind ourselves to always keep shipping.

Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”

Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win — not the person who is best at lobbying for an idea or the person who manages the most people.

To encourage this approach, every few months we have a hackathon, where everyone builds prototypes for new ideas they have. At the end, the whole team gets together and looks at everything that has been built. Many of our most successful products came out of hackathons, including Timeline, chat, video, our mobile development framework and some of our most important infrastructure like the HipHop compiler.

To make sure all our engineers share this approach, we require all new engineers — even managers whose primary job will not be to write code — to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.

So sayeth Zuckerberg. Hands-on is the way.

Application of the Hacker Way to e-Discovery

E-discovery needs that same hands-on approach. E-discovery lawyers need to go through bootcamp too, even if they primarily just supervise others. Even senior partners should go, at least if they purport to manage and direct e-discovery work. Partners should, for example, know how to use the search and review software themselves, and from time to time, do it, not just direct junior partners, associates, and contract lawyers. You cannot manage others at a job unless you can actually do the job yourself. That is the hacker key to successful management.

Also, as I often say, to be a good e-discovery lawyer, you have to get your hands dirty in the digital mud. Look at the documents, don’t just theorize about them or what might be relevant. Bring it all down to earth. Test your keywords, don’t just negotiate them. Prove your search concept by the metrics of the search results. See what works. When it doesn’t, change the approach and try again. Plus, in the new paradigm of predictive coding, where keywords are just a start, the SMEs must get their hands dirty. They must use the software to train the machine. That is how the artificial intelligence aspects of predictive coding work. The days of hands-off theorists are over. Predictive coding work is the ultimate example of code wins arguments.
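What does it mean to prove a search concept by the metrics of its results? In plain terms: run the keyword query, review a set of the documents, and measure precision (what fraction of the hits were actually relevant) and recall (what fraction of the relevant documents the query found). Here is a minimal illustrative sketch in Python; the function name and the document numbers are hypothetical, not taken from any real review tool or matter.

```python
def search_metrics(hits, relevant):
    """Score a keyword search against human review judgments.

    hits:     documents returned by the search
    relevant: documents judged relevant in manual review
    Returns (precision, recall, f1).
    """
    hits, relevant = set(hits), set(relevant)
    true_positives = hits & relevant  # hits the reviewers confirmed as relevant

    precision = len(true_positives) / len(hits) if hits else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Hypothetical example: a query returned docs 1-8; review found docs 5-12 relevant.
# Only docs 5-8 overlap, so precision = 4/8 and recall = 4/8.
p, r, f1 = search_metrics(hits=range(1, 9), relevant=range(5, 13))
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Low precision means wasted review dollars on false hits; low recall means missed evidence. Iterating the query until both numbers are acceptable is the “see what works, change the approach and try again” loop in miniature.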

Iteration is king of ESI search and production. Phased production is the only way to do e-discovery productions. There is no one final, perfect production of ESI. As Voltaire said, the perfect is the enemy of the good. For e-discovery to work properly it must be hacked. It needs lawyer hackers. It needs SMEs that can train the machine on what is relevant, on what evidence must be found to do justice. Are you up to the challenge?

Mark’s Explanation to Investors of the Hacker Way of Management

Mark goes on to explain in his letter to investors how the Hacker Way translates into the core values for Facebook management.

The examples above all relate to engineering, but we have distilled these principles into five core values for how we run Facebook:

Focus on Impact

If we want to have the biggest impact, the best way to do this is to make sure we always focus on solving the most important problems. It sounds simple, but we think most companies do this poorly and waste a lot of time. We expect everyone at Facebook to be good at finding the biggest problems to work on.

Move Fast

Moving fast enables us to build more things and learn faster. However, as most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly. We have a saying: “Move fast and break things.” The idea is that if you never break anything, you’re probably not moving fast enough.

Be Bold

Building great things means taking risks. This can be scary and prevents most companies from doing the bold things they should. However, in a world that’s changing so quickly, you’re guaranteed to fail if you don’t take any risks. We have another saying: “The riskiest thing is to take no risks.” We encourage everyone to make bold decisions, even if that means being wrong some of the time.

Be Open

We believe that a more open world is a better world because people with more information can make better decisions and have a greater impact. That goes for running our company as well. We work hard to make sure everyone at Facebook has access to as much information as possible about every part of the company so they can make the best decisions and have the greatest impact.

Build Social Value

Once again, Facebook exists to make the world more open and connected, and not just to build a company. We expect everyone at Facebook to focus every day on how to build real value for the world in everything they do.

________

Applying the Hacker Way of Management to e-Discovery

Hacker_pentagram

Focus on Impact

Law firms, corporate law departments, and vendors need to focus on solving the most important problems, the high costs of e-discovery and the lack of skills. The cost problem primarily arises from review expenses, so focus on that. The way to have the biggest impact here is to solve the needle in the haystack problem. Costs can be dramatically reduced by improving search. In that way we can focus and limit our review to the most important documents. This incorporates the search principles of Relevant Is Irrelevant and 7±2 that I addressed in Secrets of Search, Part III. My own work has been driven by this hacker focus on impact and led to my development of Bottom Line Driven Proportional Review and multimodal predictive coding search methods. Other hacker oriented lawyers and technologists have developed their own methods to give clients the most bang for their buck.

The other big problem in e-discovery is that most lawyers do not know how to do it, and so they avoid it altogether. This in turn drives up the costs for everyone because it means the vendors cannot yet realize large economies of scale. Again, many lawyers and vendors understand that lack of education and skill sets is a key problem and are focusing on it.

Move Fast

This is an especially challenging dictate for lawyers and law firms because they are overly fearful of making mistakes, of breaking things as Facebook puts it. They are afraid of looking bad and malpractice suits. But the truth is, professional malpractice suits are very rare in litigation. Such suits happen much more often in other areas of the law, like estates and trusts, property, and tax. As far as looking bad goes, they should be more afraid of the bad publicity from not moving fast enough, which is a much more common problem, one that we see daily in sanctions cases. Society is changing fast; if you aren’t too, you’re falling behind.

The problem of slow adoption also afflicts the bigger e-discovery vendors, who often drown in bureaucracy and are afraid to make big decisions. That is why you see individuals like me starting an online education program, while the big boys keep on debating. I have already changed my e-Discovery Team Training program six times since it went public almost two years ago. “Code wins arguments.” Lawyers must be especially careful of the thinking man’s disease, paralysis by analysis, if they want to remain competitive.

A few lawyers and e-discovery vendors understand this hacker maxim and do move fast. A few vendors appreciate the value of getting there first, but fewer law firms do. It seems hard for most of law firm management to understand that the risks of lost opportunities are far more dangerous and certain than the risks of making a few mistakes along the way. The slower, too conservative law firms are already starting to see their clients move business to the innovators, the few law firms who are moving fast. These firms have more than just puffed-up websites claiming e-discovery expertise, they have dedicated specialists and, in e-discovery at least, they are now far ahead of the rest of the crowd. Will the slow and timid ever catch up, or will they simply dissolve like Heller Ehrman, LLP?

Be Bold

This is all about taking risks and believing in your visions. It is directly related to moving fast and embracing change; not for its own sake, but to benefit your clients. Good lawyers are experts in risk analysis. There is no such thing as zero-risk, but there is certainly a point of diminishing returns for every litigation activity that is designed to control risks. Good lawyers know when enough is enough and constantly consult with their clients on cost benefit analysis. Should we take more depositions? Should we do another round of document checks for privilege? Often lawyers err on the side of caution, without consulting with their clients on the costs involved. They follow an overly cautious approach from which the lawyers themselves profit through more fees. Who are they really serving when they do that?

The adoption of predictive coding provides a perfect example of how some firms and vendors understand technology and are bold, and others do not and are timid. The legal profession is like any other industry, it rewards the bold, the innovators who create new legal methods and law for the benefit of their clients. What client wants a wimpy lawyer who is over-cautious and just runs up bills? They want a bold lawyer, who at the same time remains reasonable, and involves them in the key risk-reward decisions inherent in any e-discovery project.

Be Open

In the world of e-discovery this is all about transparency and strategic lowering of the wall of work product. Transparency is a proven method for building trust in discovery. Selective disclosure is what cooperation looks like. It is what is supposed to happen at Rule 26(f) conferences, but seldom does. The attorneys that use openness as a tool are saving their clients needless expense and disputes. They are protecting them from dreaded redos, where a judge finds that you did a review wrong and requires you to do it again, usually under very short timelines. There are limits to openness of course, and lawyers have an inviolate duty to preserve their clients’ secrets. But that still leaves room for disclosure of information on your own methods of search and review when doing so will serve your client’s interests.

Build Social Value 

The law is not a business. It is a profession. Lawyers and law firms exist to do justice. That is their social value. We should never lose sight of that in our day-to-day work. Vendors who serve the legal profession must also support these lofty goals in order to provide value. In e-discovery we should serve the prime directive, the dictates of Rule 1, for just, speedy, and inexpensive litigation. We should focus on legal services that provide that kind of social value. Profits to the firm should be secondary. As Zuckerberg said in the letter to potential investors:

Simply put: we don’t build services to make money; we make money to build better services.

This social value model is not naive, it works. It eventually creates huge financial rewards, as a number of e-discovery vendors and law firms are starting to realize. But that should never be the main point.

Conclusion

Facebook and Mark Zuckerberg should serve as an example to everyone, including e-discovery lawyers and vendors. I admit it is odd that we should have to turn to our youth for management guidance, but you cannot argue with success. We should study Zuckerberg’s 21st Century management style and Hacker Way philosophy. We can learn from its tremendous success. Zuckerberg and Facebook have proven that these management principles work in the digital age. It is true if it works. That is the pragmatic tradition of American philosophy. We live in fast changing times. Embrace change that works. As the face of Facebook says: “The riskiest thing is to take no risks.”
