Two-Filter Document Culling – Part Two

February 1, 2015

Please read Part One of this article first.

Second Filter – Predictive Culling and Coding

The second filter begins where the first leaves off. The ESI has already been purged of unwanted custodians, date ranges, spam, and other obviously irrelevant files and file types. Think of the First Filter as rough and coarse, and the Second Filter as fine-grained. The Second Filter requires a much deeper dive into file contents to cull out irrelevance. The most effective way to do that is to use predictive coding, by which I mean active machine learning, supplemented by a variety of methods to find good training documents. That is what I call a multimodal approach, one that places primary reliance on the Artificial Intelligence at the top of the search pyramid. If you do not have active machine learning type predictive coding with ranking abilities, you can still do fine-grained second level filtering, but it will be harder, and probably less effective and more expensive.

Multimodal Search Pyramid

All kinds of Second Filter search methods should be used to find highly relevant and relevant documents for AI training. Stay away from any process that uses just one search method, even if that one method is predictive ranking. Stay far away if the one method is rolling dice. Relying on random chance alone has been proven to be an inefficient and ineffective way to select training documents. Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One, Two, Three and Four. No one should be surprised by that.

The first round of training begins with the documents reviewed and coded relevant incidental to the First Filter coding. You may also want to defer the first round until you have done more active searches for relevant and highly relevant documents from the pool remaining after First Filter culling. In that case you also include irrelevant documents in the first training round, which is also important. Note that even though the first round of training is the only round with a special name – seed set – there is nothing all that important or special about it. All rounds of training are important.

There is so much misunderstanding about that, and seed sets, that I no longer like to even use the term. The only thing special in my mind about the first round of training is that it is often a very large training set. That happens when the First Filter turns up a large amount of relevant files, or they are otherwise known and coded before the Second Filter training begins. The sheer volume of training documents in many first rounds thus makes it special, not the fact that it came first.

No good predictive coding software is going to give special significance to a training document just because it came first in time. The software I use has no trouble at all disregarding any early training if it later finds that it is inconsistent with the total training input. It is, admittedly, somewhat aggravating to have a machine tell you that your earlier coding was wrong. But I would rather have an emotionless machine tell me that than another gloating attorney (or judge), especially when the computer is correct, which is often (not always) the case.

That is, after all, the whole point of using good software with artificial intelligence. You do that to enhance your own abilities. There is no way I could attain the level of recall I have been able to manage lately in large document review projects by reliance on my own, limited intelligence alone. That is another one of my search and review secrets. Get help from a higher intelligence, even if you have to create it yourself by following proper training protocols.

Maybe someday the AI will come prepackaged, and not require training, as I imagine in PreSuit. I know it can be done. I can do it with existing commercial software. But judging from the lack of demand I have seen in reaction to my offer of PreSuit as a legal service, the world is not ready to go there yet. I for one do not intend to push for PreSuit, at least not until the privacy aspects of information governance are worked out. Should Lawyers Be Big Data Cops?

Information governance in general is something that concerns me, and is another reason I hold back on PreSuit. Hadoop, Data Lakes, Predictive Analytics and the Ultimate Demise of Information Governance – Part One and Part Two. Also see: e-Discovery Industry Reaction to Microsoft’s Offer to Purchase Equivio for $200 Million – Part Two. I do not want my information governed, even assuming that’s possible. I want it secured, protected, and findable, but only by me, unless I give my express written assent (no contracts of adhesion permitted). By the way, even though I am cautious, I see no problem in requiring that consent as a condition of employment, so long as it is reasonable in scope and limited to only business communications.

I am wary of Big Brother emerging from Big Data. You should be too. I want AIs under our own individual control, where they each have a real big off switch. That is the way it is now with legal search, and I want it to stay that way. I want the AIs to remain under my control, not vice versa. Not only that, like all Europeans, I want a right to be forgotten, by AIs and humans alike.

But wait, there’s still more to my vision of a free future, one where the ideals of America triumph. I want AIs smart enough to protect individuals from out-of-control governments, for instance, from any government, including the Obama administration, that ignores the Constitutional prohibition against General Warrants. See: Fourth Amendment to the U.S. Constitution. Now that Judge Facciola has retired, who on the DC bench is brave enough to protect us? See: Judge John Facciola Exposes Justice Department’s Unconstitutional Search and Seizure of Personal Email.

Perhaps quantum entanglement encryption is the ultimate solution? See, e.g.: Entangled Photons on Silicon Chip: Secure Communications & Ultrafast Computers, The Hacker News, 1/27/15. Truth is far stranger than fiction. Quantum Physics may seem irrational, but it has been repeatedly proven true. The fact that it may seem irrational for two entangled particles to be correlated instantly over any distance just means that our sense of reason is not keeping up. There may soon be spooky ways for private communications to be forever private.


At the same time that I want unentangled freedom and privacy, I want a government that can protect us from crooks, crazies, foreign governments, and black hats. I just do not want to give up my Constitutional rights to receive that protection. We should not have to trade privacy for security. Once we lay down our Constitutional rights in the name of security, the terrorists have already won. Why do we not have people in the Justice Department clear-headed enough to see that?

Getting back to legal search, and how to find out what you need to know inside the law by using the latest AI-enhanced search methods, there are now three kinds of probability-ranked search engines in use for predictive coding.

Three Kinds of Second Filter Probability Based Search Engines

After the first round of training, you can begin to harness the AI features in your software. You can begin to use its probability ranking to find relevant documents. There are currently three kinds of ranking search and review strategies in use: uncertainty, high probability, and random. The uncertainty search, sometimes called SAL for Simple Active Learning, looks at middle-ranking documents where the computer is unsure of relevance, typically the 40%-60% range. The high probability search looks at documents where the AI thinks it already knows whether they are relevant or irrelevant. You can also use some random searches, if you want, both simple and judgmental; just be careful not to rely too much on chance.
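
The two active strategies can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual implementation; the document names and probability scores below are hypothetical stand-ins for the rankings your predictive coding software would produce.

```python
# Hypothetical probability-of-relevance rankings from the software.
doc_scores = {
    "doc_001": 0.98, "doc_002": 0.95, "doc_003": 0.55,
    "doc_004": 0.48, "doc_005": 0.42, "doc_006": 0.07,
}

def uncertainty_batch(scores, low=0.40, high=0.60):
    """SAL-style selection: mid-ranked documents the machine is unsure about."""
    return sorted(name for name, p in scores.items() if low <= p <= high)

def high_probability_batch(scores, top_n=2):
    """CAL-style selection: the highest ranked documents for review."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]

sal = uncertainty_batch(doc_scores)       # the 40%-60% uncertainty band
cal = high_probability_batch(doc_scores)  # the top-ranked documents
```

In a multimodal review both batches would feed the next training round, alongside documents found by keyword, concept, and similarity searches.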

The 2014 Cormack Grossman comparative study of various methods has shown that the high probability search, which they call CAL, for Continuous Active Learning using high ranking documents, is very effective. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014. Also see: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Two.

My own experience also confirms their experiments. High probability searches usually involve SME training and review of the upper strata, the documents with a 90% or higher probability of relevance. I will, however, also check out the low strata, though I will not spend as much time on that end. I like to use both uncertainty and high probability searches, but typically with a strong emphasis on the high probability searches. And again, I supplement these ranking searches with other multimodal methods, especially when I encounter strong, new, or highly relevant types of documents.

Sometimes I will even use a little random sampling, but the mentioned Cormack Grossman study shows that it is not effective, especially on its own. They call such chance-based search Simple Passive Learning, or SPL. Ever since reading the Cormack Grossman study I have cut back on my reliance on random searches. You should too. My reliance was small before; it is even smaller now.

Irrelevant Training Documents Are Important Too

In the Second Filter you are on a search for the gold: the highly relevant, and, to a lesser extent, the strong and merely relevant. As part of this Second Filter search you will naturally come upon many irrelevant documents too. Some of these documents should also be added to the training. In fact, it is not uncommon to have more irrelevant documents in training than relevant, especially with low prevalence collections. If you judge a document, then go ahead and code it and let the computer know your judgment. That is how it learns. There are some documents that you judge that you may not want to train on – such as the very large, or the very odd – but they are few and far between.

Of course, if you have culled out a document altogether in the First Filter, you do not need to code it, because these documents will not be part of the documents included in the Second Filter. In other words, they will not be among the documents ranked in predictive coding. They will either be excluded from possible production altogether as irrelevant, or will be diverted to a non-predictive coding track for final determinations. The latter is the case for non-text file types, like graphics and audio, in cases where they might have relevant information.

How To Do Second Filter Culling Without Predictive Ranking

When you have software with active machine learning features that allow you to do predictive ranking, then you find documents for training, and from that point forward you incorporate ranking searches into your review. If you do not have such features, you still sort out documents in the Second Filter for manual review, you just do not use ranking with SAL and CAL to do so. Instead, you rely on keyword selections, enhanced with concept searches and similarity searches.

When you find an effective parametric Boolean keyword combination, arrived at by a process of party negotiation, testing, educated guessing, trial and error, and judgmental sampling, you submit the documents containing proven hits to full manual review. Ranking by keywords can also be tried for document batching, but be careful of large files having many keyword hits just on the basis of file size, not relevance. Some software compensates for that, but most does not. So ranking by keywords can be a risky process.
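
The file-size bias just mentioned can be offset by ranking on keyword-hit density rather than raw hit counts. Here is a minimal sketch of that compensation; the document names, hit counts, and token counts are hypothetical.

```python
# Rank by hits per token so large files do not float to the top
# merely because of their size. All numbers are hypothetical.
docs = [
    {"name": "short_memo", "hits": 12, "tokens": 400},
    {"name": "huge_archive", "hits": 30, "tokens": 90_000},
    {"name": "mid_report", "hits": 20, "tokens": 2_000},
]

for d in docs:
    d["density"] = d["hits"] / d["tokens"]  # keyword hits per token

by_raw_hits = sorted(docs, key=lambda d: d["hits"], reverse=True)
by_density = sorted(docs, key=lambda d: d["density"], reverse=True)
```

Under the raw-hit ranking the huge archive file comes first; under the density ranking the short memo does, which is usually a better proxy for relevance.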

I am not going to go into detail on the old-fashioned ways of batching out documents for manual review. Most e-discovery lawyers already have a good idea of how to do that. So too do most vendors. Just one word of advice. When you start the manual review based on keyword or other non-predictive coding processes, check in daily on the contract reviewers' work and calculate what kind of precision the various keyword and other assignment folders are creating. If it is terrible, which I would say is less than 50% precision, then I suggest you try to improve the selection matrix. Change the Boolean, or keywords, or something. Do not just keep plodding ahead and wasting client money.
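
That daily precision check is simple arithmetic: relevant codings divided by documents reviewed, per assignment folder. A sketch, with hypothetical folder names and counts:

```python
# Daily precision check on review assignment folders.
# Folder names and counts are hypothetical.
folders = {
    "boolean_set_A": {"reviewed": 500, "coded_relevant": 310},
    "boolean_set_B": {"reviewed": 400, "coded_relevant": 60},
}

def precision(stats):
    """Fraction of reviewed documents the reviewers coded relevant."""
    return stats["coded_relevant"] / stats["reviewed"]

# Flag any folder below the 50% precision floor for selection-matrix rework.
needs_rework = [name for name, s in folders.items() if precision(s) < 0.50]
```

Here folder A runs at 62% precision and passes, while folder B runs at 15% and gets flagged, the signal to change the Boolean or keywords rather than plod ahead.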

I once took over a review project that was using negotiated, then tested and modified, keywords. After two days of manual review we realized that only 2% of the documents selected for review by this method were relevant. After I came in and spent three days of training to add predictive ranking, we were able to increase that to 80% precision. If you use these multimodal methods, you can expect similar results.

Basic Idea of Two Filter Search and Review

Whether you use predictive ranking or not, the basic idea behind the two-filter method is to start with a very large pool of documents, reduce the size by a coarse First Filter, then reduce it again by a much finer Second Filter. The result should be a much, much smaller pool that is human reviewed, and an even smaller pool that is actually produced or logged. Of course, some of the documents subject to the final human review may be overturned, that is, found to be irrelevant, False Positives. That means they will not make it to the very bottom production pool shown in the diagram.

In multimodal projects where predictive coding is used, the precision rates can often be very high. Lately I have been seeing that the second pool of documents, the one subject to manual review, has precision rates of at least 80%, sometimes even as high as 95% near the end of a CAL project. That means the final pool of documents produced is almost as large as the pool after the Second Filter.

Please remember that almost every document that is manually reviewed and coded after the Second Filter gets recycled back into the machine training process. This is known as Continuous Active Learning, or CAL, and in my version of it at least, is multimodal and not limited to only high probability ranking searches. See: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Two. In some projects you may just train for multiple iterations and then stop training and transition to pure manual review, but in most you will want to continue training as you do manual review. Thus you set up a CAL constant feedback loop until you are done, or nearly done, with manual review.


As mentioned, active machine learning trains on both relevance and irrelevance. In my opinion, though, the documents found that are Highly Relevant, the hot documents, are the most important of all for training purposes. The idea is to use predictive coding to segregate your data into two separate camps, relevant and irrelevant. You not only separate them, but you also rank them according to probable relevance. The software I use has a percentage system from 0.01% to 99.9% probable relevant, and vice versa. A near-perfect segregation-ranking project should end up looking like an upside-down champagne glass.

After you have segregated the document collection into two groups, and gone as far as you can, or as far as your budget allows, then you cull out the probable irrelevant. The most logical place for the Second Filter cut-off point in most projects is at 49.9% and less probable relevant. Those are the documents that are more likely than not to be irrelevant. But do not take the 50% plus dividing line as an absolute rule in every case. There are no hard and fast rules in predictive culling. In some cases you may have to cut off at 90% probable relevant. Much depends on the overall distribution of the rankings and the proportionality constraints of the case. Like I said before, if you are looking for Gilbert’s black-letter law solutions to legal search, you are in the wrong type of law.

[Diagram: upside-down champagne glass split into probable relevant (top half) and probable irrelevant (bottom half)]

Almost all of the documents in the production set (the red top half of the diagram) will be reviewed by a lawyer or paralegal. Of course, there are shortcuts to that too, like duplicate and near-duplicate syncing. Some of the low-ranked, probable irrelevant documents will have been reviewed as well. That is all part of the CAL process, where both relevant and irrelevant documents are used in training. But only a very low percentage of the probable irrelevant documents need to be reviewed.
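
The Second Filter cut-off itself is a one-line partition of the ranked collection. A minimal sketch, using hypothetical probable-relevance percentages and the default more-likely-than-not threshold:

```python
# Hypothetical probable-relevance rankings, as percentages.
rankings = [99.9, 98.0, 75.5, 50.0, 49.9, 30.0, 10.0, 0.01]

# The 50% line is the default cut-off; adjust it per the ranking
# distribution and the proportionality constraints of the case.
CUTOFF = 50.0

probable_relevant = [r for r in rankings if r >= CUTOFF]    # top half
probable_irrelevant = [r for r in rankings if r < CUTOFF]   # culled out
```

The top half goes on to manual review (with de-duplication shortcuts); only a small sample of the bottom half is reviewed, as part of CAL training and the final elusion test.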

Limiting Final Manual Review

In some cases you can, with client permission (often insistence), dispense with attorney review of all or nearly all of the documents in the upper half. You might, for instance, stop after the manual review has attained a well-defined and stable ranking structure. You might have reviewed only 10% of the probable relevant documents (top half of the diagram), but decide to produce the other 90% without attorney eyes ever looking at them. There are, of course, obvious problems with privilege and confidentiality in such a strategy. Still, in some cases, where appropriate clawback and other confidentiality orders are in place, the client may want to risk disclosure of secrets to save the costs of final manual review.

In such productions there are also dangers of imprecision, where a significant percentage of irrelevant documents are included. This in turn raises concerns that an adversary's review of those documents could engender other suits, even if there is some agreement for the return of irrelevant documents. Once the bell has been rung, privileged or hot, it cannot be un-rung.

Case Example of Production With No Final Manual Review

In spite of the dangers of the unringable bell, the allure of extreme cost savings can be strong for some clients in some cases. For instance, I did one experiment using multimodal CAL with no final review at all, where I still attained fairly high recall, and the cost per document was only seven cents. I did all of the review myself, acting as the sole SME. The visualization of this project would look like the below figure.

[Figure: two-filter culling with SME-only review]

Note that if the SME review pool were drawn to scale according to the number of documents read, then, in most cases, it would be much smaller than shown. In the review where I brought the cost down to $0.07 per document, I started with a document pool of about 1.7 Million, and ended with a production of about 400,000. The SME review pool in the middle was only 3,400 documents.

[Figure: example culling metrics – 1.7 Million document pool, 3,400-document SME review pool, 400,000 produced]

As far as legal search projects go, it had an unusually high prevalence, and thus the production of 400,000 documents was very large. Four hundred thousand was the number of documents ranked with a 50% or higher probable relevance when I stopped the training. I only personally reviewed about 3,400 documents during the SME review, plus another 1,745 documents in a quality assurance sample after I decided to stop training. To be clear, I worked alone, and no one other than me reviewed any documents. This was an Army of One type project.

Although I only personally reviewed 3,400 documents for training, I actually instructed the machine to train on many more documents than that. I just selected them for training without actually reviewing them first. I did so on the basis of ranking and judgmental sampling of the ranked categories. It was somewhat risky, but it sped up the process considerably, and in the end worked out very well. I later found out that information scientists often use this technique as well.

My goal in this project was recall, not precision, nor even F1, and I was careful not to overtrain on irrelevance. The requesting party was much more concerned with recall than precision, especially since the relevancy standard here was so loose. (Precision was still important, and was attained too. Indeed, there were no complaints about that.) In situations like that the slight over-inclusion of relevant training documents is not terribly risky, especially if you check your decisions with careful judgmental sampling and quasi-random sampling.

I accomplished this review in two weeks, spending 65 hours on the project. Interestingly, my time broke down into 46 hours of actual document review time, plus another 19 hours of analysis. Yes, about one hour of thinking and measuring for every two and a half hours of review. If you want the secret of my success, that is it.

I stopped after 65 hours, and two weeks of calendar time, primarily because I ran out of time. I had a deadline to meet and I met it. I am not sure how much longer I would have had to continue the training before the training fully stabilized in the traditional sense. I doubt it would have been more than another two or three rounds; four or five more rounds at most.

Typically I have the luxury to keep training in a large project like this until I no longer find any significant new relevant document types, and do not see any significant changes in document rankings. I did not think at the time that my culling out of irrelevant documents had been ideal, but I was confident it was good, and certainly reasonable. (I had not yet uncovered my ideal upside down champagne glass shape visualization.) I saw a slowdown in probability shifts, and thought I was close to the end.

I had completed a total of sixteen rounds of training by that time. I think I could have improved the recall somewhat had I done a few more rounds of training, and spent more time looking at the mid-ranked documents (40%-60% probable relevant). The precision would have improved somewhat too, but I did not have the time. I am also sure I could have improved the identification of privileged documents, as I had only trained for that in the last three rounds. (It would have been a partial waste of time to do that training from the beginning.)

The sampling I did after the decision to stop suggested that I had exceeded my recall goals, but still, the project was much more rushed than I would have liked. I was also comforted by the fact that the elusion sample test at the end passed my accept on zero error quality assurance test. I did not find any hot documents. For those reasons (plus great weariness with the whole project), I decided not to pull some all-nighters to run a few more rounds of training. Instead, I went ahead and completed my report, added graphics and more analysis, and made my production with a few hours to spare.

A scientist hired after the production did some post-hoc testing that confirmed, at an approximate 95% confidence level, a recall achievement of between 83% and 94%. My work also withstood all subsequent challenges. I am not at liberty to disclose further details.

In post hoc analysis I found that the probability distribution was close to the ideal shape that I now know to look for. The below diagram represents an approximate depiction of the ranking distribution of the 1.7 Million documents at the end of the project. The 400,000 documents produced (obviously I am rounding off all these numbers) were ranked 50% or higher, and the 1,300,000 not produced were ranked less than 50%. Of the 1,300,000 Negatives, 480,000 documents were ranked with only 1% or less probable relevance. On the other end, the high side, 245,000 documents had a probable relevance ranking of 99% or more. There were another 155,000 documents with a ranking between 99% and 50% probable relevant. Finally, there were 820,000 documents ranked between 49% and 1% probable relevant.

[Diagram: probability ranking distribution of the 1.7 Million documents]
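
The rounded counts above can be sanity-checked with simple arithmetic: the four strata should sum to the two halves, and the two halves to the whole collection.

```python
# The rounded counts reported for this example project.
total = 1_700_000
produced_50_plus = 400_000   # ranked 50% or higher probable relevant
negatives = 1_300_000        # ranked below 50%

strata = {
    "99%+":    245_000,  # high end of the produced half
    "50%-99%": 155_000,
    "1%-49%":  820_000,
    "<=1%":    480_000,  # low end of the negatives
}

# The strata sum to the halves, and the halves to the total.
assert produced_50_plus + negatives == total
assert strata["99%+"] + strata["50%-99%"] == produced_50_plus
assert strata["1%-49%"] + strata["<=1%"] == negatives
```

That the numbers close like this is part of what makes the upside-down champagne glass shape visible: mass piled at both extremes, with a thin stem in the uncertain middle.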

The file review speed realized here, about 35,000 files per hour, and the extremely low cost of about $0.07 per document, would not have been possible without the client’s agreement to forgo full document review of the 400,000 documents produced. A group of contract lawyers could have been brought in for second-pass review, but that would have greatly increased the cost, even assuming a billing rate for them of only $50 per hour, which was 1/10th my rate at the time (it is now much higher).

The client here was comfortable with reliance on confidentiality agreements for reasons that I cannot disclose. In most cases litigants are not, and insist on eyes on review of every document produced. I well understand this, and in today’s harsh world of hard ball litigation it is usually prudent to do so, clawback or no.

Another reason the review was so cheap and fast in this project is because there were very few opposing counsel transactional costs involved, and everyone was hands off. I just did my thing, on my own, and with no interference. I did not have to talk to anybody; I just read a few guidance memorandums. My task was to find the relevant documents, make the production, and prepare a detailed report – 41 pages, including diagrams – that described my review. Someone else prepared a privilege log for the 2,500 documents withheld on the basis of privilege.

I am proud of what I was able to accomplish with the two-filter multimodal methods, especially as it was subject to the mentioned post-review analysis and recall validation. But, as mentioned, I would not want to do it again. Working alone like that was very challenging and demanding. Further, it was only possible at all because I happened to be a subject matter expert on the type of legal dispute involved. There are only a few fields where I am competent to act alone as an SME. Moreover, virtually no legal SMEs are also experienced ESI searchers and software power users. In fact, most legal SMEs are technophobes. I have even had to print out key documents to paper to work with some of them.

Even if I have adequate SME abilities in a legal dispute, I now prefer a small team approach, rather than a solo approach. I now prefer to have one or two attorneys assisting me on the document reading, and a couple more assisting me as SMEs. In fact, I can act as the conductor of a predictive coding project where I have very little or no subject matter expertise at all. That is not uncommon. I just work as the software and methodology expert; the Experienced Searcher.

Right now I am working on a project where I do not even speak the language used in most of the documents. I could not read most of them, even if I tried. I just work on procedure and numbers alone, where others get their hands in the digital mud and report to me and the SMEs. I am confident this will work fine. I have good bilingual SMEs and contract reviewers doing most of the hands-on work.

Conclusion

There is much more to efficient, effective review than just using software with predictive coding features. The methodology of how you do the review is critical. The two-filter method described here has been used for years to cull away irrelevant documents before manual review, but it has typically just been used with keywords. I have tried to show here how this method can be employed in a multimodal way that includes predictive coding in the Second Filter.

Keywords can be an effective method to both cull out presumptively irrelevant files, and cull in presumptively relevant, but keywords are only one method, among many. In most projects it is not even the most effective method. AI-enhanced review with predictive coding is usually a much more powerful method to cull out the irrelevant and cull in the relevant and highly relevant.

If you are using a one-filter method, where you just do a rough cut and filter out by keywords, date, and custodians, and then manually review the rest, you are reviewing too much. It is especially ineffective when you collect based on keywords. As shown in Biomet, that can doom you to low recall, no matter how good your later predictive coding may be.

If you are using a two-filter method, but are not using predictive coding in the Second Filter, you are still reviewing too much. The two-filter method is far more effective when you use relevance probability ranking to cull out documents from final manual review.

Try the two filter method described here in your next review. Drop me a line to let me know how it works out.



Introducing “ei-Recall” – A New Gold Standard for Recall Calculations in Legal Search – Part Three

January 18, 2015

Please read Part One and Part Two of this article before reading this third and final segment.

First Example of How to Calculate Recall Using the ei-Recall Method

Let us begin with the same simple hypothetical used in In Legal Search Exact Recall Can Never Be Known. Here we assume a review project of 100,000 documents. By the end of the search and review, when we could no longer find any more relevant documents, we decided to stop and run our ei-Recall quality assurance test. We had by then found and verified 8,000 relevant documents, the True Positives. That left 92,000 documents presumed irrelevant that would not be produced, the Negatives.

As a side note, the decision to stop may be somewhat informed by running estimates of the possible recall range attained, based on prevalence assumptions drawn from a sample of all documents taken at or near the beginning of the project. The prevalence based recall range estimate would not, however, be the sole driver of the decision to stop and test. The prevalence based recall estimates alone can be very unreliable, as shown in In Legal Search Exact Recall Can Never Be Known. That is one of the main reasons for developing the ei-Recall alternative. I explained the thinking behind the decision to stop in Visualizing Data in a Predictive Coding Project – Part Three.

I would not have stopped the review in most projects (proportionality constraints aside) unless I was confident that I had already found all types of strong relevant documents, and all highly relevant documents, even if cumulative. I want to find each and every instance of all hot (highly relevant) documents that exist in the entire collection. I will only stop (proportionality constraints aside) when I think the only relevant documents I have not recalled are of an unimportant, cumulative type: the merely relevant. The truth is, most documents found in e-discovery are of this type; they are merely relevant, and of little to no use to anybody except to find the strong relevant, new types of relevant evidence, or highly relevant evidence.

Back to our hypothetical. We take a sample of 1,534 documents (95% confidence level, +/-2.5% confidence interval) from the 92,000 Negatives. This allows us to estimate how many relevant documents had been missed, the False Negatives.

Assume we found only 5 False Negatives. Conversely, we found that 1,529 of the documents picked at random from the Negatives were in fact irrelevant as expected. They were True Negatives.

The percentage of False Negatives in this sample was thus a low 0.33% (5/1534). Using the normal, but wrong, Gaussian confidence interval, the projected total number of False Negatives in the entire 92,000 Negatives would thus be between 5 and 2,604 documents (0.33% + 2.5% = 2.83% * 92,000). Using the binomial interval calculation, the range would be from 0.11% to 0.76%. The more accurate binomial calculation eliminates the absurd result of a negative interval on the low end of the recall range (0.33% - 2.5% = -2.17%). The fact that a negative count arises from using the Gaussian normal distribution demonstrates why the binomial interval calculation should always be used instead, especially in low prevalence collections. From this point forward, in accordance with the ei-Recall method, we will only use the more accurate binomial range calculations. Here the correct range generated by the binomial interval is from 101 (92,000 * 0.11%) to 699 (92,000 * 0.76%) False Negatives. Thus the FNh value is 699, and FNl is 101.

The calculation of the lowest end of the recall range is based on the high end of the False Negatives projection: Rl = TP / (TP + FNh) = 8,000 / (8,000 + 699) = 91.96%.

The calculation of the highest end of the recall range is based on the low end of the False Negatives projection: Rh = TP / (TP+FNl) = 8,000 / (8,000 + 101) = 98.75%.

Our final recall range for this first hypothetical is thus 92% – 99% recall. It is an unusually good result.
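In code, the two formulas just applied are a one-liner each. A sketch (my own illustration; it assumes the FNl and FNh projections have already been computed from a binomial interval calculator):

```python
def ei_recall_range(tp, fn_low, fn_high):
    """ei-Recall: bound recall using the projected False Negative range.
    tp      -- verified True Positives found by the end of the project
    fn_low  -- low end of the projected False Negatives (FNl)
    fn_high -- high end of the projected False Negatives (FNh)
    """
    r_low = tp / (tp + fn_high)    # Rl = TP / (TP + FNh)
    r_high = tp / (tp + fn_low)    # Rh = TP / (TP + FNl)
    return r_low, r_high

# First hypothetical: 8,000 True Positives, projected FN range 101-699.
r_low, r_high = ei_recall_range(8000, 101, 699)
# r_low is roughly 0.9196 (91.96%), r_high roughly 0.9875 (98.75%)
```

Note that the low end of the recall range pairs with the high end of the False Negative projection, and vice versa; swapping them is a common implementation mistake.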


Ex. 1 – 92% – 99%

It is important to note that we could still have failed this quality assurance test, in spite of the high recall range shown, if any of the five False Negatives found was a highly relevant, or unique, strong relevant document. That is in accord with the accept-on-zero-error standard that I always apply to the final elusion sample, a standard having nothing directly to do with ei-Recall. Still, I recommend that the e-discovery community also accept this standard as a corollary when implementing ei-Recall. I have previously explained this zero error quality assurance protocol on this blog several times, most recently in Visualizing Data in a Predictive Coding Project – Part Three, where I explained:

I always use what is called an accept on zero error protocol for the elusion test when it comes to highly relevant documents. If any are highly relevant, then the quality assurance test automatically fails. In that case you must go back and search for more documents like the one that eluded you and must train the system some more. I have only had that happen once, and it was easy to see from the document found why it happened. It was a black swan type document. It used odd language. It qualified as highly relevant under the rules we had developed, but just barely, and it was cumulative. Still, we tried to find more like it and ran another round of training. No more were found, but still we did a third sample of the null set just to be sure. The second time it passed.

Variations of First Example with Higher False Negatives Ranges

I want to provide two variations of this hypothetical where the sample of the null set, Negatives, finds more mistakes, more False Negatives. Variations like this will provide a better idea of the impact of the False Negatives range on the recall calculations. Further, the first example wherein I assumed that only five mistakes were found in a sample of 1,534 is somewhat unusual. A point projection ratio of 0.33% for elusion is on the low side for a typical legal search project. In my experience in most projects a higher rate of False Negatives will be found, say in the 0.5% to 2% range.

Let us assume for the first variation that instead of finding 5 False Negatives, we find 20. That is a quadrupling of the False Negatives. It means that we found 1,514 True Negatives and 20 False Negatives in the sample of 1,534 documents from the 92,000 document discard pile. This creates a point projection of 1.30% (20 / 1534), and a binomial range of 0.8% to 2.01%. This generates a projected range of total False Negatives of from 736 (92,000 * .8%) to 1,849 (92,000 * 2.01%).

Now let’s see how this quadrupling of errors found in the sample impacts the recall range calculation.

The calculation of the low end of the recall range is based on the high end of the False Negatives projection: Rl = TP / (TP + FNh) = 8,000 / (8,000 + 1,849) = 81.23%.

The calculation of the high end of the recall range is based on the low end of the False Negatives projection: Rh = TP / (TP+FNl) = 8,000 / (8,000 + 736) = 91.58%.

Our final recall range for this variation of the first hypothetical is thus 81% – 92%.

In this first variation the quadrupling of the number of False Negatives found at the end of the project, from 5 to 20, caused an approximate 10% decrease in recall values from the first hypothetical where we attained a recall range of 92% to 99%.


Ex. 2 – 81% – 92%

Let us assume a second variation that instead of finding 5 False Negatives, finds 40. That is eight times the number of False Negatives found in the first hypothetical. It means that we found 1,494 True Negatives and 40 False Negatives in the sample of 1,534 documents from the 92,000 document discard pile. This creates a point projection of 2.61% (40/1534), and a binomial range of 1.87% to 3.53%. This generates a projected range of total False Negatives of from 1,720 (92,000*1.87%) to 3,248 (92,000*3.53%).

The calculation of the low end of the recall range is based on the high end of the False Negatives projection: Rl = TP / (TP + FNh) = 8,000 / (8,000 + 3,248) = 71.12%.

The calculation of the high end of the recall range is based on the low end of the False Negatives projection: Rh = TP / (TP + FNl) = 8,000 / (8,000 + 1,720) = 82.30%.

Our recall range for this variation of the first hypothetical is thus 71% – 82%.

In this second variation the eightfold increase in the number of False Negatives found at the end of the project, from 5 to 40, caused an approximate 20% decrease in recall values from the first hypothetical, where we attained a recall range of 92% to 99%.


Ex. 3 – 71% – 82%

Second Example of How to Calculate Recall Using the ei-Recall Method

We will again go back to the second example used in In Legal Search Exact Recall Can Never Be Known. The second hypothetical assumes a total collection of 1,000,000 documents, and that 210,000 relevant documents were found and verified.

In the random sample of 1,534 documents (95%+/-2.5%) from the 790,000 documents withheld as irrelevant (1,000,000 – 210,000) we assume that only ten mistakes were uncovered, in other words, 10 False Negatives. Conversely, we found that 1,524 of the documents picked at random from the discard pile (another name for the Negatives) were in fact irrelevant as expected; they were True Negatives.

The percentage of False Negatives in this sample was thus 0.65% (10/1534). Using the binomial interval calculation, the range would be from 0.31% to 1.2%. The range generated by the binomial interval is from 2,449 (790,000 * 0.31%) to 9,480 (790,000 * 1.2%) False Negatives.

The calculation of the lowest end of the recall range is based on the high end of the False Negatives projection: Rl = TP / (TP + FNh) = 210,000 / (210,000 + 9,480) = 95.68%.

The calculation of the highest end of the recall range is based on the low end of the False Negatives projection: Rh = TP / (TP + FNl) = 210,000 / (210,000 + 2,449) = 98.85%.

Our recall range for this second hypothetical is thus 96% – 99% recall. This is a highly unusual, truly outstanding result. It is, of course, still subject to the outlier uncertainty inherent in the confidence level. In that sense my labels of "worst" and "best" case scenario on the diagram below are not strictly correct. In accord with the 95% confidence level, the true value could fall outside the range in five out of every one hundred samples drawn. See the discussion near the end of my article In Legal Search Exact Recall Can Never Be Known regarding the role that luck necessarily plays in any random sample. This could have been a lucky draw, but it is nevertheless just one quality assurance factor among many, and still an extremely good recall range achievement.
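The five-out-of-one-hundred point can be checked by simulation. The toy sketch below (my own illustration; the true rate, seed, and trial count are arbitrary assumptions, and it uses the crude normal-approximation interval rather than the binomial for brevity) draws many elusion samples from a population with a known False Negative rate and counts how often the 95% interval misses the truth:

```python
import random

random.seed(7)
TRUE_RATE = 0.0065           # assume a true elusion rate of 0.65%
N, TRIALS = 1534, 2000       # sample size per draw, number of draws

misses = 0
for _ in range(TRIALS):
    # Draw one elusion sample and count the False Negatives it finds.
    hits = sum(random.random() < TRUE_RATE for _ in range(N))
    p = hits / N
    # Crude Wald (normal-approximation) 95% interval half-width.
    half = 1.96 * (p * (1 - p) / N) ** 0.5
    if not (p - half <= TRUE_RATE <= p + half):
        misses += 1

miss_rate = misses / TRIALS
# miss_rate lands in the rough vicinity of 0.05, though somewhat higher,
# because the Wald interval under-covers at low prevalence -- another
# argument for the exact binomial interval.
```

The simulation makes the abstract confidence level concrete: even a perfectly executed sample will, by design, produce a misleading interval a few percent of the time.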


Ex.4 – 96% – 99%

Variations of Second Example with Higher False Negatives Ranges

I now offer three variations of the second hypothetical where each has a higher False Negative rate. These examples should better illustrate the impact of the elusion sample on the overall recall calculation.

Let us first assume that instead of finding 10 False Negatives, we find 20, a doubling of the rate. This means that we found 1,514 True Negatives and 20 False Negatives in the sample of 1,534 documents in the 790,000 document discard pile. This creates a point projection of 1.30% (20/1534), and a binomial range of 0.8% to 2.01%. This generates a projected range of total False Negatives of from 6,320 (790,000*.8%) to 15,879 (790,000*2.01%).


Now let us see how this doubling of errors in the second sample impacts the recall range calculation.

The calculation of the low end of the recall range is: Rl = TP / (TP+FNh) = 210,000 / (210,000 + 15,879) = 92.97% 

The calculation of the high end of the recall range is: Rh = TP / (TP+FNl) = 210,000 / (210,000 + 6,320) = 97.08%.

Our recall range for this first variation of the second hypothetical is thus 93% – 97%.

The doubling of the number of False Negatives from 10 to 20 caused an approximate 2.5% decrease in recall values from the second hypothetical, where we attained a recall range of 96% to 99%.


Ex. 5 – 93% – 97%

Let us assume a second variation where, instead of finding 10 False Negatives at the end of the project, we find 40. That is a quadrupling of the number of False Negatives found in the second hypothetical. It means that we found 1,494 True Negatives and 40 False Negatives in the sample of 1,534 documents from the 790,000 document discard pile. This creates a point projection of 2.61% (40/1534), and a binomial range of 1.87% to 3.53%. This generates a projected range of total False Negatives of from 14,773 (790,000 * 1.87%) to 27,887 (790,000 * 3.53%).

The calculation of the low end of the recall range is now: Rl = TP / (TP + FNh) = 210,000 / (210,000 + 27,887) = 88.28%.

The calculation of the high end of the recall range is now: Rh = TP / (TP+FNl) = 210,000 / (210,000 + 14,773) = 93.43%.

Our recall range for this second variation of the second hypothetical is thus 88% – 93%.

The quadrupling of the number of False Negatives from 10 to 40, caused an approximate 7% decrease in recall values from the original where we attained a recall range of 96% to 99%.

Ex. 6 – 88% – 93%

If we do a third variation and increase the number of False Negatives found by eight times, from 10 to 80, this changes the point projection to 5.22% (80/1534), with a binomial range of 4.16% to 6.45%. This generates a projected range of total False Negatives of from 32,864 (790,000 * 4.16%) to 50,955 (790,000 * 6.45%).

The calculation of the low end of the recall range is: Rl = TP / (TP + FNh) = 210,000 / (210,000 + 50,955) = 80.47%.

The calculation of the high end of the recall range is: Rh = TP / (TP+FNl) = 210,000 / (210,000 + 32,864) = 86.47%.

Our recall range for this third variation of the second hypothetical is thus 80% – 86%.

The eightfold increase of the number of False Negatives, from 10 to 80, caused an approximate 15% decrease in recall values from the second hypothetical where we attained a recall range of 96% to 99%.


Ex. 7 – 80% – 86%

By now you should have a pretty good idea of how the ei-Recall calculation works, and a feel for how the number of False Negatives found impacts the overall recall range.

Third Example of How to Calculate Recall Using the ei-Recall Method where there is Very Low Prevalence

A criticism of many recall calculation methods is that they fail and become completely useless in very low prevalence situations, say 1%, or sometimes even less. Such low prevalence is considered by many to be common in legal search projects.

Obviously it is much harder to find things that are very rare, such as the famous, and very valuable, Inverted Jenny postage stamp with the upside down plane. These stamps exist, but not many. Still, it is at least possible to find them (or buy them), as opposed to a search for a Unicorn or other complete fiction. (Please, Unicorn lovers, no hate mail!) These creatures cannot be found no matter how many searches and samples you take because they do not exist. There is absolute zero prevalence.

This circumstance sometimes happens in legal search, where one side claims that mythical documents must exist because they want them to. They have a strong suspicion of their existence, but no proof. More like hope, or wishful thinking. No matter how hard you look for such smoking guns, you cannot find them. You cannot find something that does not exist. All you can do is show that you made reasonable, good faith efforts to find the Unicorn documents, and they did not appear. Recall calculations make no sense in crazy situations like that because there is nothing to recall. Fortunately that does not happen too often, but it does happen, especially in the wonderful world of employment litigation.

We are not going to talk further about a search for something that does not exist, like a Unicorn, the zero prevalence. We will not even talk about the extremely, extremely rare, like the Inverted Jenny. Instead we are going to talk about prevalence of about 1%, which is still very low.

In many cases, but not all, very low prevalence like 1%, or less, can be avoided, or at least mitigated, by intelligent culling. This certainly does not mean filtering out all documents that do not have certain keywords. There are other, more reliable methods than simple keywords to eliminate superfluous irrelevant documents, including elimination by file type, date ranges, custodians, and email domains, among other things.

When there is a very low prevalence of relevant documents, there will necessarily be a very large Negatives pool, which dilutes the sampling. There are ways to address the large Negatives sample pool, as I discussed in Part One. The most promising method is to cull out the low end of the probability rankings, where relevant documents should be nonexistent anyway.

Even with the smartest culling possible, low prevalence is often still a problem in legal search. For that reason, and because it is the hardest test for any recall calculation method, I will end this series of examples with a completely new hypothetical that considers a very low prevalence situation of only 1%. This means that there will be a large size Negatives pool: 99% of the total collection.

We will again assume a 1,000,000 document collection, and again assume sample sizes using 95% +/- 2.5% confidence level and interval parameters. An initial sample of all documents, taken at the beginning of the project to give us a rough sense of prevalence for search guidance purposes (not recall calculations), projected a range of from 5,500 to 16,100 relevant documents.

The lawyers in this hypothetical legal search project plodded away for a couple of weeks and found and confirmed 9,000 relevant documents, True Positives all. At this point they are finding it very difficult and time consuming to find more relevant documents. What they do find is just more of the same. They are sophisticated lawyers who read my blog and have a good grasp of the nuances of sampling. So they know better than to simply rely on a point projection of prevalence to calculate recall, especially one based on a relatively small sample of a million documents taken at the beginning of the project. See In Legal Search Exact Recall Can Never Be Known. They know that their recall level could be as low as 56% (9,000/16,100), or perhaps far less, in the event the one sample they took was a confidence level outlier, or there was more concept drift than they thought. It could also be near perfect, 100% recall, when they consider the binomial interval range going the other way: the 9,000 documents they had found was well more than the low-range estimate of 5,500. But they did not really consider that too likely.

They decide to stop the search and take a second 1,534 document sample, but this time of the 991,000 null set (1,000,000 – 9,000). They want to follow the ei-Recall method, and they also want to test for any highly relevant or unique strong relevant documents by following the accept on zero error quality assurance test. They find -1- relevant document in that sample. It is just another more-of-the-same, merely relevant document. They had seen many like it before. Finding a document like that meant that they passed the quality assurance test they had set up for themselves. It also meant that, using the binomial interval for 1/1534, which runs from 0.00% to 0.36%, the projected range of False Negatives is from -0- to 3,568 documents (991,000 * 0.36%). (Actually, a binomial calculator showing more decimal places than any I have found on the web, which hopefully we can fix soon, would not show exactly zero percent, but some very small percentage less than one hundredth of a percent, and thus some documents, not -0- documents, and thus something slightly less than 100% recall.)
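The parenthetical point, that an exact calculator would not show a true zero, is easy to verify. With exactly one False Negative in the sample, the exact binomial lower bound has a closed form, since P(X ≥ 1) = 1 - (1 - p)^n. A quick sketch (my own illustration, using the sample and null set sizes from this hypothetical):

```python
n = 1534               # elusion sample size
negatives = 991_000    # size of the null set

# Solve 1 - (1 - p)^n = 0.025 for the exact 95% lower bound when x = 1:
p_low = 1 - 0.975 ** (1 / n)
print(f"{p_low:.6%}")            # roughly 0.001650% (tiny, but not zero)
print(round(p_low * negatives))  # roughly 16 projected False Negatives, not 0
```

So a high-precision calculator would project on the order of a dozen or so False Negatives at the low end, and a recall high end just shy of 100%, exactly as the text anticipates.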

They then took out the ei-Recall formula and plugged in the values to see what recall range they ended up with. They were hoping it was tighter, and more reliable, than the 56% to 100% recall range they calculated from the first sample alone based on prevalence.

Calculation for the low end of the recall range: Rl = TP / (TP+FNh) = 9,000 / (9,000 + 3,568) = 71.61%.  

Calculation for the high end of the recall range: Rh = TP / (TP+FNl) = 9,000 / (9,000 + 0) = 100%.

The recall range using ei-Recall was 72% – 100%.


Ex. 8 – 72% – 100%

The attorneys’ hopes in this extremely low prevalence hypothetical were met. The 72%-100% estimated recall range was much tighter than the original 56%-100%. It was also more reliable because it was based on a sample taken at the end of the project when relevance was well defined. Although this sample did not, of and by itself, prove that a reasonable legal effort had been made, it did strongly support that position. When considering all of the many other quality control efforts they could report, if challenged, they were comfortable with the results. Assuming that they did not miss a highly relevant document that later turns up in discovery, it is very unlikely they will ever have to redo, or even continue, this particular legal search and review project.

Would the result have been much different if they had doubled the sample size, and thus doubled the cost of this quality control effort? Let us do the math and find out, assuming that everything else was the same.

This time the sample is 3,068 documents from the 991,000 null set. They find two relevant documents, False Negatives, of a kind they had seen many times before. This creates a binomial range of 0.01% to 0.24%, projecting a range of False Negatives from 99 to 2,378 (991,000 * 0.01% – 991,000 * 0.24%). That creates a recall range of 79% – 99%.

Rl = TP / (TP+FNh) = 9,000 / (9,000 + 2,378) = 79.1%.  

Rh = TP / (TP+FNl) = 9,000 / (9,000 + 99) = 98.91%.


Ex. 9 – 79% – 99%

In this situation, by doubling the sample size the attorneys were able to narrow the recall range from 72% – 100% to 79% – 99%. But was it worth the effort and doubling of cost? I do not think so, at least not in most cases. In larger cases, however, it might be worth the expense to tighten the range somewhat, and so increase the defensibility of your efforts. After all, we are assuming in this hypothetical that the same proportional results would turn up in a sample twice the size of the original. The results could have been much worse, or much better. Either way, your results would be more reliable than an estimate based on a sample half that size, and would produce a tighter range. Also, you may sometimes want to take a second sample of the same size if you suspect the first was an outlier.

Let us consider one more example, this time with an even lower prevalence and a larger document collection. This is the hardest challenge of all, a near Inverted Jenny puzzler. Assume a document collection of 2,000,000 and a prevalence estimate based on a first random sample taken for search-help purposes, where again only one relevant document was found in the sample of 1,534. This suggests there could be as many as 7,200 relevant documents (0.36% * 2,000,000). So in this hypothetical we are talking about a dataset where the prevalence may be far less than one percent.

Assume next that only 5,000 relevant documents were found, True Positives. A sample of 1,534 of the remaining 1,995,000 documents found -3- relevant, False Negatives. The binomial interval for 3/1534 is from 0.04% to 0.57%, producing a projected range of False Negatives of between 798 and 11,372 documents (1,995,000 * 0.04% – 1,995,000 * 0.57%). Under ei-Recall the recall range measured is 31% – 86%.

Rl = TP / (TP+FNh) = 5,000 / (5,000 + 11,372) = 30.54%.  

Rh = TP / (TP+FNl) = 5,000 / (5,000 + 798) = 86.24%.

31% – 86% is a big range. Most would think too big, but remember, it is just one quality assurance indicator among many.


Ex. 10 – 31% – 86%

The size of the range could be narrowed by a larger sample. (It is also possible to take two samples, and, with some adjustment, add them together as one sample. This is not mathematically perfect, but fairly close, if you adjust for any overlaps, which anyway would be unlikely.) Assume the same proportions where we sample 3,068 documents from 1,995,000 Negatives, and find -6- relevant, False Negatives. The binomial range is 0.07% – 0.43%. The projected number of False Negatives is 1,397 – 8,579 (1,995,000*.07% – 1,995,000*.43%). Under ei-Recall the range is 37% – 78%.

Rl = TP / (TP+FNh) = 5,000 / (5,000 + 8,579) = 36.82%.  

Rh = TP / (TP+FNl) = 5,000 / (5,000 + 1,397) = 78.16%.


Ex. 11 – 37% – 78%

The range has been narrowed, but is still very large. In situations like this, where there is a very large Negatives set, I would suggest taking a different approach. As discussed in Part One, you may want to consider a rational culling down of the Negatives. The idea is similar to that behind stratified sampling. You create a subset, or stratum, of the entire collection of Negatives that has a higher, hopefully much higher, prevalence of False Negatives than the entire set. See, e.g., William Webber, Control samples in e-discovery (2013) at pg. 3.

Although Webber's paper only uses keywords as an example of an easy way to create a stratum, in modern legal search today there are a number of methods that could be used to create the strata, only one of which is keywords. I use a combination of many methods that varies in accordance with the data set and other factors. I call that a multimodal method. In most cases (but not all), this is not too hard to do, even if you are doing the stratification before active machine learning begins. The non-AI based culling methods that I use, typically before active machine learning begins, include parametric Boolean keywords, concept, key player, key time, similarity, file type, file size, domains, etc.

After the predictive coding begins and ranking matures, you can also use probable relevance ranking as a method of dividing documents into strata. It is actually the most powerful of the culling methods, especially when it comes to predicting irrelevant documents. The second filter level is performed at or near the end of a search and review project. (This is all shown in the two-filter diagram above, which I may explain in greater detail in a future blog.) The second AI based filter can be especially effective in limiting the Negatives size for the ei-Recall quality assurance test. The last example will show how this works in practice.
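Mechanically, carving a sampling stratum out of the ranked Negatives is straightforward. The sketch below is purely illustrative: the scores, thresholds, seed, and pool size are all hypothetical assumptions of mine, and real review tools expose their rankings in their own ways.

```python
import random

random.seed(1)

# Toy discard pile: document ids paired with hypothetical probable-relevance
# scores. Beta(1, 20) skews the mass toward low scores, roughly as a mature
# predictive coding ranking of presumed-irrelevant documents would.
negatives = [(doc_id, random.betavariate(1, 20)) for doc_id in range(100_000)]

# Stratum: keep only the 10.1% - 49.9% mid-band, on the theory that
# documents ranked 10% or below are safely irrelevant and need not be sampled.
mid_band = [doc for doc in negatives if 0.101 <= doc[1] <= 0.499]

# The elusion sample is then drawn from this much smaller pool, so each
# projected False Negative percentage multiplies a far smaller denominator.
elusion_sample = random.sample(mid_band, k=1534)
```

The statistical payoff is in the last step: projecting the sample's binomial interval onto the mid-band (here on the order of a tenth of the full discard pile) rather than onto all of the Negatives is what tightens the recall range, provided the assumption that the bottom stratum holds no relevant documents is defensible.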

We will begin this example as before, assuming again 2,000,000 documents where the search finds only 5,000. But this time, before we take a sample of the Negatives, we divide them into two strata. Assume, as we did in the example considered in Part One, that the predictive coding resulted in a well defined distribution of ranked documents. Assume that all 5,000 documents found were in the 50%, or higher, probable relevance ranking (shown in red in the diagram). Assume that all of the 1,995,000 presumed irrelevant documents are ranked 49.9%, or less, probable relevant (shown in blue in the diagram). Finally, assume that 1,900,000 of these documents are ranked 10% or less probable relevant, thus leaving 95,000 documents ranked between 10.1% and 49.9%.

Assume also that we have good reason to believe based on our experience with the software tool used, and the document collection itself, that all, or almost all, False Negatives are contained in the 95,000 group. We therefore limit our random sample of 1,534 documents to the 95,000 lower midsection of the Negatives. Finally, assume we now find -30- relevant, False Negatives, none of them important.

The binomial range for 30/1534 is 1.32% – 2.78%, and this time the projected number of False Negatives is 1,254 – 2,641 (95,000 * 1.32% – 95,000 * 2.78%). Under ei-Recall the range is 65.44% – 79.95%.

Rl = TP / (TP+FNh) = 5,000 / (5,000 + 2,641) = 65.44%.

Rh = TP / (TP+FNl) = 5,000 / (5,000 + 1,254) = 79.95%.

We see that culling down the Negative set of documents in a defensible manner can lead to a much tighter recall range. Assuming we did the culling correctly, the resulting recall range would also be more accurate. On the other hand, if the culling was wrong, based on incorrect presumptions, then the resulting recall range would be less accurate.


Ex. 12 – 65% – 80%

The fact is, no random sampling techniques can provide completely reliable results in very low prevalence data sets. There is no free lunch, but, at least with ei-Recall the bill for your lunch is honest because it includes ranges. Moreover, with intelligent culling to increase the probable prevalence of False Negatives, you are more likely to get a good meal.

Conclusion

There are five basic advantages of ei-Recall over other recall calculation techniques:

  1. Interval Range values are calculated, not just a deceptive point value. As shown by In Legal Search Exact Recall Can Never Be Known, recall statements must include confidence interval range values to be meaningful.
  2. One Sample only is used, not two, or more. This limits the uncertainties inherent in multiple random samples.
  3. End of Project is when the sample of the Negatives is taken for the calculation. At that time the relevance scope has been fully developed.
  4. Confirmed Relevant documents that have been verified as relevant by iterative reviews, machine and human, are used for the True Positives. This eliminates another variable in the calculation.
  5. Simplicity is maintained in the formula by reliance on basic fractions and common binomial confidence interval calculators. You do not need an expert to use it.

I suggest you try ei-Recall. It has been checked out by multiple information scientists and will no doubt be subject to more peer review here and elsewhere. Be cautious in evaluating any criticisms of ei-Recall from persons with a vested monetary interest in defending a competing formula, especially vendors, or experts hired by vendors. Their views may be colored by their monetary interests. I have no skin in the game. I offer no products that include this method. My only goal is to provide a better method to validate large legal search projects, and so, in some small way, to improve the quality of our system of justice. The law has given me much over the years. This method, and my other writings, are my personal payback.

I offer ei-Recall to anyone and everyone, no strings attached, no payments required. Vendors, you are encouraged to include it in your future product offerings. I do not want royalties, nor do I even insist on credit (although you can give it if you wish, assuming you do not make it seem like I endorse your product). ei-Recall is all part of the public domain now. I have no product to sell here, nor do I want one. I do hope, however, to create an online ei-Recall calculator soon; when I do, that too will be a giveaway.

My time and services as a lawyer are not required to implement ei-Recall. Simplicity is one of its strengths, although it helps if you are part of the eLeet. I think I have fully explained how it works in this lengthy article. Still, if you have any non-legal, technical questions about its application, send me an email and I will try to help you out. Gratis, of course. Just realize that I cannot by law provide you with any legal advice. All articles on my blog, including this one, are purely for educational purposes, and are not legal advice, nor in any way a solicitation for legal services. Show this article to your own lawyer or e-discovery vendor. You do not have to be 1337 to figure it out (although it helps).

