In Legal Search Exact Recall Can Never Be Known – Part One

December 18, 2014
Voltaire

“Uncertainty is an uncomfortable position. But certainty is an absurd one.” VOLTAIRE

In legal search you can never know exactly what recall level you have attained. You can only know a probable range of recall. For instance, you can never know that you have attained 80% recall, but you can know that you have attained between 70% and 90% recall. Even the range is a probable range, not certain. Exact knowledge of recall is impossible because there are too many documents in legal search to ever know for certain how many of them are relevant, and how many are irrelevant. I will explain all of these things in this two-part blog, plus show you two ways to calculate probable recall range.

Difficulty of Recall in Legal Search 

60% Red Fish Recall

In legal search recall is the percentage of target documents found, typically relevant documents. Thus, for instance, if you know that there are 100 relevant documents in a collection of 1,000, and you find 80 of them, then you know that you have attained 80% recall.

Exact recall calculations are possible in small volumes of documents like that because it is possible to know how many relevant documents there are. But legal search today does not involve small collections of documents. Legal search involves collections ranging from tens of thousands to tens of millions of documents. When you get into large collections like that it is impossible to know how many of the documents are relevant to any particular legal issue. That has to do with several things: human fallibility, the vagaries of legal relevance, and, to some extent, cost limitations. (Although even with unlimited funds you could never know for sure that you had found all relevant documents in a large collection.)

Sampling Allows for Calculations of Probable Ranges Only

Since you cannot know exactly how many relevant documents there are in a large population of documents, all you can do is sample and estimate recall. When you start sampling you can never know exact values. You can only know probable ranges according to statistics. That, in a nutshell, is why it is impossible to ever know exactly what recall you have attained in a legal search project.

Even though you can never know an exact recall value, it is still worth trying to calculate recall because you can know the probable range of recall that you have attained at the end of a project.

How Probable Range Calculations Are Helpful

This qualified knowledge of recall range provides evidence, albeit limited, that your efforts to respond to a request for production of documents have been proportional and reasonable. The law requires this. Unreasonably weak or negligent search is not permitted under the rules of discovery. Failure to comply with these rules can result in sanctions, or at least costly court-ordered production supplements.

Recall range calculations are helpful in that they provide some proof of the success of your search efforts. They also provide some evidence of your quality control efforts. That is the main purpose of recall calculations in e-discovery, to assist in quality control and quality assurance. Either way, probable recall range calculations can significantly buttress the defensibility of your legal search efforts.

20% Red Fish Recall

In some projects the recall range may seem low. Fortunately, there are many other ways to prove reasonable search efforts beyond offering recall measurements. Furthermore, the law generally assumes reasonable efforts have been made until evidence to the contrary has been provided. For that reason evidence of reasonable, proportionate efforts may never be required.

Still, in any significant legal review project I try to make recall calculations for quality control purposes. Now that my understanding of math, sampling, and statistics has matured, when I calculate recall these days I calculate it as a probable range, not a single value. The indisputable mathematical truth is that there is no certainty in recall calculations in e-discovery. Any claims to the contrary are false.

General Example of Recall Range 

Here is a general example of what I mean by recall range, the first of several. You cannot know that you have attained 80% recall. But you can know with some probable certainty, say with the usual 95% confidence level, that you have attained between 70% and 90% recall.

You can also know that the most likely value within the range is 80% recall, but you can never know it for sure. You can only know the range of values, which, in turn, is a function of the confidence interval used in the sampling. The confidence interval, also known as the margin of error, is in turn a function of the sample size, and, to some extent, the size of the collection sampled.

Confidence Levels

Even your knowledge of the recall range created by confidence intervals is subject to a confidence level caveat, typically 95%. That is what I mean by probable range. A confidence level of 95% simply means that if you were to take 100 different samples of the same document collection, then ninety-five times out of one hundred the true recall value would fall inside the confidence interval calculated from the sample. Conversely, five times out of one hundred the true recall value would fall outside the confidence interval. This may sound very complicated, and it can be very hard to understand, but the math component is all just fractions and well within any lawyer’s abilities.

A few more detailed examples should clarify, examples that I have been fortunate enough to have double-checked by one of the world’s leading experts on this kind of statistical analysis, William Webber, who has a PhD in Information Science. He is my go-to science consultant. William, like Gordon Cormack and others, has patiently worked with me over the years to help me understand this kind of statistical analysis. William graciously reviewed an advance copy of this blog (actually several) and double-checked and often corrected these examples. Any mistakes still remaining are purely my own.

For an example, I go back to the hypothetical search project I described in Part Three of Visualizing Data in a Predictive Coding Project. This was a search of 1,000,000 documents where I took a random sample of 1,534 documents. A sample size of 1,534 created a confidence interval of 2.5% and confidence level of 95%. This means your sample value is subject to a 2.5% error rate in both directions, high and low, for a total error range of 5%. This is a 5% error of the total One Million document population (50,000 documents), not just 5% of the 1,534 sample (77 documents).

In my sample of 1,534 documents, 384 were determined to be relevant and 1,150 irrelevant. This is a ratio of 25% (384/1534). This does not mean that you can then multiply 25% times the total population and know that you have exactly 250,000 relevant documents. That is where the whole idea of a range of probable knowledge comes in. All you can ever know is that the prevalence is between 22.5% and 27.5%, that is, 25% plus or minus the nominal 2.5% confidence interval. Thus all we can ever know from that one sample is that there are between 225,000 and 275,000 relevant documents. (This simple spread of 2.5% both ways is called a Gaussian estimation. Dr. Webber points out that this 2.5% range should be called a nominal interval. It is only exact if there happens to be a 50% prevalence of the target in the total population. Exact interval values can only be attained by use of binomial interval calculations (here 22.88% – 27.28%) that take actual prevalence into consideration. In part one of this blog I am going to ignore the binomial adjustment to try to keep these first examples easier to follow, but in statistics the binomial distribution is the preferred calculation for intervals on proportions, not the Gaussian distribution, aka the Normal distribution.)
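
For readers who want to follow the arithmetic, here is a minimal Python sketch of the nominal (Gaussian) projection described above. It follows the article's rounding of 384/1534 to an even 25%; the numbers and the 2.5% interval come from the hypothetical, and the variable names are my own illustration.

```python
# Nominal (Gaussian) prevalence projection for the hypothetical above:
# a sample of 1,534 documents, 384 judged relevant, from a collection of 1,000,000.

relevant, sample_size, collection = 384, 1534, 1_000_000
interval = 0.025                       # the nominal +/- 2.5% confidence interval

point = relevant / sample_size         # 0.2503..., which the article rounds to 25%
point = round(point, 2)                # follow the article and work with an even 25%
low, high = point - interval, point + interval

print(f"Prevalence range: {low:.1%} to {high:.1%}")     # 22.5% to 27.5%
print(f"Projected relevant documents: {low * collection:,.0f} to {high * collection:,.0f}")
# 225,000 to 275,000, the range used throughout this example
```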

Even this knowledge of range is subject to the confidence level limitation. In our example the 95% confidence level means that if you were to take a random sample of 1,534 documents one hundred times, then ninety-five times out of that one hundred you would get an interval that contains the true value. The true value in legal search is a kind of fictitious number representing the actual number of relevant documents in the collection. I say fictitious because, as stated before, in legal search the target we are searching for – relevant documents – is somewhat nebulous, vague and elusive. Certainty is never possible in legal search, just probabilities.

Still, this legal truth problem aside, we assume in statistical sampling that the mid-ratio, here 25%, is the center of the true value, with a range of 2.5% both ways. In our hypothetical the so-called true value interval is from 225,000 to 275,000 relevant documents. If you repeat the sample of 1,534 documents one hundred times, you will get a variety of different intervals over the number of relevant documents in the collection. In 95% of the cases, the interval will contain the true number of relevant documents. In 5% of the cases, the true value will fall outside the interval.
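
A quick simulation can make the repeated-sampling idea concrete. The sketch below is my own illustration, not part of the article: it assumes a collection with a true 25% prevalence and counts how often the nominal plus or minus 2.5% interval around each sample ratio captures that true value.

```python
# Simulating what the 95% confidence level means: draw many samples of 1,534
# documents from a collection with a true 25% prevalence and count how often
# the nominal +/- 2.5% interval around the sample ratio contains the true value.

import random

TRUE_PREVALENCE = 0.25
SAMPLE_SIZE = 1534
INTERVAL = 0.025
TRIALS = 5_000

hits = 0
for _ in range(TRIALS):
    relevant = sum(random.random() < TRUE_PREVALENCE for _ in range(SAMPLE_SIZE))
    ratio = relevant / SAMPLE_SIZE
    if ratio - INTERVAL <= TRUE_PREVALENCE <= ratio + INTERVAL:
        hits += 1

print(f"Intervals containing the true 25% value: {hits / TRIALS:.1%}")
# Typically prints a value at or above 95%, because the flat 2.5% interval is a
# bit wider than the exact binomial interval at 25% prevalence (it is exact
# only at 50% prevalence, as noted above).
```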

In 95% of the samples the different intervals created will include the so-called “true value” of 25%, 250,000 documents

Confidence Level Examples

In several of the one hundred samples you will probably see the exact same or nearly the same numbers. You will again find 384 of the 1,534 sample to be relevant and 1,150 irrelevant. On other samples you may have one or two more or less relevant, still creating a 25% ratio (rounding off the tenths of a percent). On another random draw of 1,534 documents you might find 370 documents are relevant and 1,164 are irrelevant. That is a difference of fourteen documents, and brings the ratio down to 24%. Still, the plus or minus 2.5% range of the 24% value is from 21.5% to 26.5%. The so-called true value of 25% is thus still well inside the range of that sample.

Only when you find 345 or fewer relevant documents, instead of 384 relevant, or when you find 422 or more relevant documents, instead of 384 relevant, will you create the five in one hundred (5%) outlier event inherent in the 95% confidence level. Do the math with me here. It is simple proportions.

If you find 345 relevant documents in your sample of 1,534, which I call the low, lucky side of the confidence level, then this creates a ratio of 22.49% (345/1534=0.2249), plus or minus 2.5%. This means a range of 19.99% to 24.99%. This projects a range of 199,900 to 249,900 relevant documents in the entire collection. The 24.99% value is just under the so-called true value of 25% and 250,000 relevant documents.

At the other extreme, which I call the unlucky side, as I will explain later, if you find 422 relevant documents in your sample of 1,534, then this creates a ratio of 27.51% (422/1534=0.2751), plus or minus 2.5%. This means a range of 25.01% to 30.01%. This projects a range of 250,100 to 300,100 relevant documents in the entire collection.

The 25.01% value at the low end of the 27.51% range of plus or minus 2.5% is just over the so-called true value of 25% and 250,000 relevant documents.

In the above combined charts the true value bell curve is shown on the left. The unlucky high value bell curve is shown on the right. The low-end of the high value curve range is 25.01% (shown by the red line). This is just to the right of the 25% center point of the true value curve.

The analysis shows that in this example a variance of only 39 or 38 relevant documents is enough to create the five times out of one hundred sampling event. This means that ninety-five times out of one hundred the number of relevant documents found will be between 346 and 421. Most of the time the number of documents found will be closer to 384. That is what confidence level means. There are important recall calculation implications to this random sample variation that I will spell out shortly, especially where only one random sample is taken.

To summarize, in this hypothetical sample of 1,534 documents, the 95% confidence level means that the outlier result, where an attorney determines that fewer than 346 documents are relevant, or more than 421 documents are relevant, is likely to happen five times out of one hundred. This 75-document variance (421-346=75) is likely to happen because the documents chosen at random will be different. It is inherent to the process of random sampling. The variance happens even if the attorney has been perfectly consistent and correct in his or her judgments of relevance.
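
The 345 and 422 cutoffs can be derived with simple arithmetic. Here is a short sketch of that calculation, my own illustration of the article's math, using the same 1,534 sample and 25% true value.

```python
# Deriving the outlier cutoffs: a sample "misses" the true 25% value when its
# +/- 2.5% interval no longer contains 25%.

import math

SAMPLE_SIZE = 1534
TRUE_VALUE = 0.25
INTERVAL = 0.025

# Low side: sample ratio + 2.5% falls below 25%, i.e. the ratio is under 22.5%.
low_cutoff = math.floor((TRUE_VALUE - INTERVAL) * SAMPLE_SIZE)    # 345
# High side: sample ratio - 2.5% rises above 25%, i.e. the ratio is over 27.5%.
high_cutoff = math.ceil((TRUE_VALUE + INTERVAL) * SAMPLE_SIZE)    # 422

print(f"Outlier sample if {low_cutoff} or fewer, or {high_cutoff} or more, "
      f"of the {SAMPLE_SIZE} sampled documents are judged relevant")
```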

Inherent Vagaries of Relevance Judgments and Human Consistency Errors Create Quality Control Challenges

This assumption of human perfection in relevance judgment is, of course, false for most legal review projects. I call this the fuzzy lens problem of legal search. See Top Ten e-Discovery Predictions for 2014 (prediction number five). Consistency, even in reviews of small samples of 1,534 documents, only arises when special care and procedures are in place for attorney review, including multiple reviews of all grey area documents and other error detection procedures. This is because of the vagaries of relevance and the inconsistencies in human judgments mentioned earlier. These errors in human legal judgment can be mitigated and constrained, but never eliminated entirely, especially when you are talking about large numbers of samples.

This error component in legal judgments is necessarily a part of all legal search. It adds even more uncertainties to the uncertainties already inherent in all random sampling, expressed as confidence levels and confidence intervals. As Maura Grossman and Gordon Cormack put it recently: “The bottom line is that inconsistencies in responsiveness determinations limit the ability to estimate recall.” Comments on ‘The Implications of Rule 26(g) on the Use of Technology-Assisted Review,’ Federal Courts Law Review, Vol. 7, Issue 1 (2014) at 304. The legal judgment component to legal search is another reason to be cautious in relying on recall calculations alone to verify the quality of our work.

Calculating Recall from Prevalence

You can calculate recall, the percent of the total relevant documents found, based upon your sample calculation of prevalence and the final number of relevant documents identified. Again, prevalence means the percentage of relevant documents in the collection. The final number of relevant documents identified is the total number of relevant documents found by the end of a legal search project. This is the total number of documents either produced or logged.

With these two numbers you can calculate recall. You do so by dividing the final number of relevant documents identified by the projected total number of relevant documents based on the prevalence range of the sample. It is really easier than it sounds as a couple of examples will show.

Examples of Calculating Recall from Prevalence

To start off very simple, assume that our prevalence projection was between 10,000 and 15,000 relevant documents in the entire collection. The spot or point projection was 12,500, plus or minus 2,500 documents. (Again, I am still excluding the binomial interval calculation for simplicity of illustration purposes, but would not advise this omission for recall calculations using prevalence.)

Next assume that by the end of the project we had found 8,000 relevant documents. Our recall would be calculated as a range. The high end of the recall range would be created by dividing 8,000, the number of relevant documents found, by the low end of the total number of relevant documents projected for the whole collection, here 10,000. That gives us a high of 80% recall (8,000/10,000). The low end of the recall range is calculated by dividing 8,000 by the high end of the total number of relevant documents projected for the whole collection, here 15,000. That gives us a low of 53% recall (8,000/15,000).

Thus our recall rate for this project is between 53% and 80%, subject again, of course, to the 95% confidence level uncertainty. It would not be correct to simply use the spot projection of prevalence, here 12,500 documents, and say that we had attained a recall of 64% (8,000/12,500). We can only say, with a 95% confidence level, that we attained between 53% and 80% recall.
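
The same arithmetic can be wrapped in a small helper function. This is only a sketch of the calculation just described, using the article's numbers (8,000 documents found against a 10,000 to 15,000 projection); the function name is my own.

```python
def recall_range(found, projected_low, projected_high):
    """Recall range from a prevalence projection, capped at 100%."""
    high = min(found / projected_low, 1.0)   # divide by the LOW projection for the high end
    low = found / projected_high             # divide by the HIGH projection for the low end
    return low, high

low, high = recall_range(8_000, 10_000, 15_000)
print(f"Recall range: {low:.0%} to {high:.0%}")   # 53% to 80%
```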

Yes. I know what you are thinking. You have heard every vendor in the business, and most every attorney who speaks on this topic, myself included, proclaim at one time or another that an exact recall level has been attained in a review project. But these proclamations are wrong. You can only know recall range, not a single value, and even your knowledge of range must have a confidence level caveat. This article is intended to stop that imprecise usage of language. The law demands truth from attorneys and those who would serve them. If there is any profession that understands the importance of truth and precision of language, it is the legal profession.

Let us next consider our prior example where we found 384 relevant documents in our sample of 1,534 documents from a total collection of 1,000,000. This created a prevalence projection of 225,000 to 275,000 relevant documents. It had a spot or point projection of 25%, with a 2.5% interval range of 22.5% to 27.5%. (The intervals when the binomial adjustment is used are 22.88% – 27.28%.)

If at the end of the project the producing party had found 210,000 relevant documents, this would mean they may claim a recall of between 76.36% (210,000/275,000) and 93.33% (210,000/225,000). But even then we would have to make this recall range claim of 76.36% – 93.33% with the 95% confidence level disclaimer.

Impact of 95% Confidence Level

Even if you assume perfect legal judgment and consistency, multiple random draws of the same 1,000,000 collection of documents in this example could result in a projection of less than 225,000 relevant documents, or more than 275,000 relevant documents. As seen, with the 95% confidence level this happens five times out of one hundred. That is the same as one time out of twenty, or 5%.

That is acceptable odds for almost all scientific and medical research. It is also reasonable for all legal search efforts, so long as you know that this 5% caveat applies, that in one out of twenty times your range may be so far off as to not even include the true value. And, so long as you understand the impact that a 5% chance outlier sample can have on your recall calculations.

The 5% confidence level ambiguity can have a very profound effect on recall calculations based on prevalence alone. For instance, consider what happens when you take only one random sample and it happens to be a 5% outlier sample. Assume the sample happens to have fewer than 346 relevant documents in it, or more than 421. If you forget the impact of the 95% confidence level uncertainty, you might take the confidence intervals created by these extremes as certain true values. But they are not certain, not at all. You cannot know whether the one sample you took is an outlier sample without taking more samples. By chance it could have been a sample with an unusually large, or unusually small, number of relevant documents in it. You might assume that your sample created a true value, but that would only be true 95% of the time.

You should always remember when taking a random sample that the documents selected may by chance not be truly representative of the whole. They may instead fall within an outlier range. You may have pulled a 5% outlier sample. This would, for instance, be the case in our hypothetical true value of 25% if you pulled a sample that happened to have fewer than 346 or more than 421 relevant documents.

You might forget this fact of life of random sampling and falsely assume, for instance, that your single sample of 1,534 documents, which happened to have, let’s say, 425 relevant documents in it, was representative of all one million documents. You might assume from this one sample that the prevalence of the whole collection was 27.71% (425/1534) with a 2.5% interval of between 25.21% and 30.21% (again ignoring for now the binomial adjustment (25.48% – 30.02%)). You might assume that 27.71% was an absolute true value, and the projected relevance range of 252,100 to 302,100 relevant documents was a certainty.

Only if you took a large number of additional samples would you discover that your first sample was an unlucky outlier that occurs only 2.5% of the time. (You cannot just take 19 more samples, because each one of those samples would also have a randomness element. But if you took one hundred more samples the “true value” would almost certainly come out.) By repeating the sampling many times, you might find that the average number of relevant documents was actually 384, not the 425 that you happened to draw in the first sample. You would thus find by more sampling that the true value was actually 25%, not 27.71%, and that there were probably between 225,000 and 275,000 relevant documents in the entire collection, not between 252,100 and 302,100 as you first thought.

The same thing could happen on what I call the low, lucky side. You could draw a sample with, let’s say, only 342 relevant documents in it the first time out. This would create a spot projection prevalence of 22.29% (342/1534) with a range of 19.79% – 24.79%, projecting to between 197,900 and 247,900 relevant documents. The next series of samples could average 384 relevant documents, yielding our familiar range of 225,000 to 275,000.

Outliers and Luck of Random Draws

So what does this luck of the draw in random sampling mean to recall calculations? And why do I call the low side rarity lucky, and the high side rarity unlucky? The lucky or unlucky perspective is from the perspective of the legal searcher making a production of documents. From the perspective of the requesting party the opposite attributes would apply, especially if only a single sample for recall was taken for quality control purposes.

To go back again to our standard example: we find 384 relevant documents in our sample of 1,534 from a total collection of 1,000,000. Our prevalence projection is that there are between 225,000 and 275,000 relevant documents in the total collection. If at the end of the project the producing party has found 210,000 relevant documents, this means, as previously shown, they may claim a recall of between 76.36% (210,000/275,000) and 93.33% (210,000/225,000). But they should do so with the 95% confidence level disclaimer.

As discussed, the confidence level disclaimer means that one time out of twenty (5%), the estimate may be based on an outlier sample. Thus, for instance, one time out of forty (2.5% of the time) the sample may have an unluckily large number of relevant documents in it, let us assume again 425 relevant, and not 384. As shown, that creates a prevalence spot projection of 27.71% with a projected range of 252,100 to 302,100 documents.

Assume again that the producing party finds 210,000 relevant documents. This time they may only claim a recall of between 69.51% (210,000/302,100) and 83.3% (210,000/252,100).

That is why I call that the unlucky random sample for the producing party. Had the sample come back at the true value of 384 relevant documents, as most samples would, they could have claimed a significantly higher recall range of 76.36% to 93.33%. So based on bad luck alone their recall range has dropped from 76.36% – 93.33% to 69.5% – 83.3%. That is a significant difference, especially if a party is naively putting a great deal of weight on recall value alone.

It is easy to see the flip side of this random coin. The producing party could be lucky (this would happen in 2.5% of the random draws) and by chance draw a sample with fewer relevant documents than the lower end of the range. Let us here assume again that the random sample had only 342 relevant documents in it, and not 384. This would create a spot projection prevalence of 22.29% (342/1534) with a range of 19.79% – 24.79%, projecting to between 197,900 and 247,900 relevant documents.

Then when the producing party found 210,000 relevant documents it could claim a much higher recall range. It would be between 84.7% recall (210,000/247,900) and 106% recall (210,000/197,900). The latter, 106%, is, of course, a logical impossibility, but one that happens when calculating recall based on prevalence, especially when not using the more accurate binomial calculation. We take that to mean near 100%, or near total recall.

Under both scenarios the number of relevant documents found was the same, 210,000, but as a result of pure chance, one review project could claim from 84.7% to 100% recall, and another only 69.5% to 83.3% recall. The difference between 84.7%-100% and 69.5%-83.3% is significant, and yet it was all based on the luck of the draw. It had nothing whatsoever to do with effort, or actual success. It was just based on chance variables inherent in sampling statistics. This shows the dangers of relying on recall based on one prevalence sample.
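
Here is the lucky versus unlucky comparison in code, a sketch of the article's arithmetic that still ignores the binomial adjustment. Small differences from the figures above come from not rounding the sample ratios.

```python
# Three single-sample scenarios, all ending with the same 210,000 relevant
# documents found; only the one prevalence sample differs.

SAMPLE_SIZE, COLLECTION, INTERVAL, FOUND = 1534, 1_000_000, 0.025, 210_000

def recall_from_sample(relevant_in_sample):
    ratio = relevant_in_sample / SAMPLE_SIZE
    low_docs = (ratio - INTERVAL) * COLLECTION    # low end of projected relevant documents
    high_docs = (ratio + INTERVAL) * COLLECTION   # high end of projected relevant documents
    return FOUND / high_docs, min(FOUND / low_docs, 1.0)

for label, relevant in [("true value", 384), ("unlucky", 425), ("lucky", 342)]:
    low, high = recall_from_sample(relevant)
    print(f"{label:>10}: recall {low:.1%} to {high:.1%}")
# Prints roughly: true value 76.3% to 93.2%; unlucky 69.5% to 83.3%;
# lucky 84.7% to 100.0% (the lucky high end is capped at 100%).
```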

This is one reason why I am skeptical of recall calculations, even a recall value that is correctly described in terms of a range, if it is only based on one prevalence sample. If the project can afford it, a better practice is to take a second sample. This doubles the sampling costs from around $1,500 to $3,000, assuming, as I do, that a sample of 1,534 documents can be judged, and quality controlled, for between $1,000 and $2,000. This review cost may be appropriate to determine reasonability in some, but not all, projects.

When the cost of a second sample is a reasonable, proportionate expense, I suggest that the second sample not repeat the first, that it not sample the entire collection again for a comparative second calculation of prevalence. Instead, I suggest that the second sample be made for calculation of False Negatives. This means that the second sample would be limited to those documents considered to be irrelevant by the end of the project (sometimes called the discard pile).

_____________

Part Two of this blog will explain my proposed solution to take a second sample, but this time a sample to calculate False Negatives, not prevalence. I will again follow the example stated at the end of Part Three of Visualizing Data in a Predictive Coding Project, plus a few others, to try to make this subject crystal clear. I will show in detail how a recall range calculation can be made by combining the two samples. The recall range calculation will still have uncertainties, all do, but it will be less than the large 69.5% to 100% range shown here. The second half of this blog will also give credit where due to other excellent articles recently written on the problem of recall calculation, including several written for advanced students by William Webber and others. As Voltaire said: Originality is nothing but judicious imitation. The most original writers borrowed one from another.

To be continued….

Visualizing Data in a Predictive Coding Project – Part Three

November 30, 2014

This is part three of my presentation of an idea for visualization of data in a predictive coding project. Please read part one and part two first. This concluding blog in the visualization series also serves as a stand-alone lesson on the basics of math, sampling, probability, prevalence, recall and precision. It will summarize some of my current thoughts on quality control and quality assurance in large scale document reviews. Bottom line, there is far more to quality control than doing the math, but still, sampling and metric analysis are helpful. So too is creative visualization of the whole process.

Law, Science and Technology

This is the area in which scientists on e-discovery teams excel. I recommend that every law firm, corporate and vendor e-discovery team have at least one scientist to help them. Technologists alone are not sufficient. E-discovery teams know this, and all have engineers working with lawyers, but very few yet have scientists working with engineers and lawyers. They are like two-legged stools.

Also, and this seems obvious, you need search-sophisticated lawyers on e-discovery teams too. I am starting to see this error more and more lately, especially among vendors. Engineers may think they know the law (that is very common), but they are wrong. The same delusional thinking sometimes even affects scientists. Both engineers and scientists tend to over-simplify the law and do not really understand legal discovery. They do not understand the larger context and overall processes and policies.

John Tredennick

For legal search to be done properly, it must not only include lawyers, the lawyers must lead. Ideally, a lawyer will be in charge, not in a domineering way (my way or the highway), but in a cooperative multi-disciplinary team sort of way. That is one of the strong points I see at Catalyst. Their team includes tons of engineers/technologists, like any vendor, but also scientists, and lawyers. Plus, and here is the key part, the CEO is an experienced search lawyer. That means not only a law degree, but years of legal experience as a practicing attorney doing discovery and trials. A fully multidisciplinary team with an experienced search lawyer as leader is, in my opinion, the ideal e-discovery team. Not only for vendors, but for corporate e-discovery teams, and, of course, law firms.

Many disagree with me on this, as many laymen and non-practicing attorneys resent my law-first orientation. Technologists are now often in charge, especially on vendor teams. In my experience these technologists do not properly respect the complexity of legal knowledge and process. They often bad mouth lawyers and law firms behind their back. Their products and services suffer as a result. It is a recipe for disaster.

On many vendor teams, the lawyers are not part of the leadership; if they are on the team at all, it is in a low-level role and they are not respected. This is all wrong, because the purpose of e-discovery teams is the search for evidence in a legal context, typically a lawsuit. There is only one leg of the stool that has ever studied evidence.

It takes all three disciplines for top quality legal search: scientists, technologists and lawyers. If you cannot afford a full-time scientist, then you should at least hire one as a consultant on the harder cases.

The scientists on a team may not like the kind of simplification I will present here on sampling, prevalence and recall. They typically want to go into far greater depth and provide multiple caveats on math and probability, which is fine, but it is important to start with a foundation of basics. This is what you will find here. The basics of math and probabilities, and applications of these principles from a lawyer’s point of view, not a scientist’s or engineer’s.

Still, the explanations here are informed by the input of several outstanding scientists. A special shout out and thanks goes to Gordon Cormack. He has been very generous with his time and patient with my incessant questions. Professor Cormack has been a preeminent voice in Information Science and search for decades now, well before he started teaming with Maura Grossman to study predictive coding. I appreciate his assistance, and, of course, any errors and oversimplifications are solely my own.

Now let’s move on to the math part you have been waiting for, and begin by revisiting the hypothetical we set out in parts one and two of this visualization series.

 Calculating and Visualizing Prevalence

Recall we have exactly 1,000,000 documents remaining for predictive coding after culling. I previously explained that this particular project began with culling and multimodal judgmental sampling, and with a random sample of 1,534 documents. Please note this is not intended to refer to all projects. This is just an example to have data flows set up for visualization purposes. If you want to see my standard work flows see LegalSearchScience.com and Electronic Discovery Best Practices, EDBP.com, on the Predictive Coding page. You will see, for instance, that another activity is always recommended, especially near the beginning of a project, namely Relevancy Dialogues (step 1).

Assuming a 95% confidence level, a sample of 1,534 documents creates a confidence interval of 2.5%. This means your sample is subject to a 2.5% error rate in both directions, high and low, for a total error range of 5%. This is 5% of the total One Million document corpus (50,000 documents), not just 5% of the 1,534 sample (77 documents).

In our hypothetical the SME, who had substantial help from a top contract reviewer, studied the 1,534 sampled documents. The SME found that 384 were relevant and 1,150 were irrelevant. By the way, when done properly this review of 1,534 documents should only cost between $1,000 and $2,000, with most of that going to the SME expense, not the contract reviewer expense.

The spot projection of prevalence here is 25%. This is simple division. Divide the 384 relevant by the total sample: 384/1,534. You get 25%. That means that one out of four of the documents sampled was found to be relevant. Random sampling tells us that this same ratio should apply, at least roughly, to the larger population. You could at this point simply project the percentage from the sample onto the entire document population. You would thus conclude that approximately 250,000 documents will likely be relevant. But this kind of projection alone is nearly meaningless in lower prevalence situations, which are common in legal search. It is also of questionable value in this hypothetical where there is a relatively high prevalence of 25%.

When doing probability analysis based on sampling you must always include both the confidence level, here 95%, and the confidence interval, here 2.5%. The Confidence Level means that 5 times out of 100 the projection will be in error. More specifically, the Confidence Level means that if you were to repeat the sampling 100 times, the resulting Confidence Interval (here 2.5%) would contain the true value (here 250,000 relevant documents) at least 95% of the time. Conversely, this means that it would miss the true value at most 5% of the time.

In our hypothetical the true value is 250,000 relevant documents. On one sample you might get a Confidence Interval of 225,000 – 275,000, as we did here. But with another sample you might get 215,000 – 265,000. On another you might get 240,000 – 290,000.  These all include the true value. Occasionally (but no more than 5 times in a hundred), you might get a Confidence Interval like 190,000 – 240,000, or 260,000 – 310,000, that excludes the true value. That is what a 95% Confidence Level means.

The confidence interval range is simply calculated here by adding 2.5% to the 25%, and subtracting 2.5% from the 25%. This creates a percentage range of 22.5% to 27.5%. When you project this confidence interval onto the entire document collection you get a range of relevant documents of between 225,000 (22.5%*1,000,000) and 275,000 (27.5%*1,000,000).

This simple calculation, called a Classical or Gaussian Estimation, works well in high prevalence situations, but in situations where the prevalence is low, say 3% or less, and even in this hypothetical where the prevalence is a relatively high 25%, the accuracy of the projected range can be improved by adjusting the 22.5% to 27.5% confidence interval range. The adjustment is performed by using what is called a Binomial calculation, instead of the Normal or Gaussian calculation. Ask a scientist for particulars on this, not me. I just know to use a standard Binomial Confidence Interval Calculator to determine the range in most legal search projects. For some immediate guidance, see the definitions of Binomial Estimation and Classical or Gaussian Estimation in The Grossman-Cormack Glossary of Technology Assisted Review.

With the Binomial Calculator you again enter the sample as a fraction, with the numerator being the relevant documents and the denominator the total number of documents sampled. Again, this is just like before: you divide 384 by 1,534. The basic answer is also the same 25%, the point or spot projection ratio, but the range with a Binomial Calculator is now slightly different. Instead of a simple plus or minus 2.5%, which produces 22.5%-27.5%, the binomial calculation creates a tighter range of 22.9% to 27.3%. The range in this hypothetical is thus a little tighter than 5%. The range here is 4.4% (from 22.9% to 27.3%). Therefore the projected range of relevant documents using the Binomial interval calculation is between 229,000 (22.9%*1,000,000) and 273,000 (27.3%*1,000,000) documents.
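
For those who prefer code to an online calculator, here is a sketch of a binomial interval calculation using the Clopper-Pearson (exact) method from scipy. The article relied on a standard online binomial calculator, and different calculators use slightly different binomial methods, so expect values close to, though not necessarily identical to, the 22.9% to 27.3% shown above.

```python
from scipy.stats import beta

def binomial_interval(relevant, sample_size, confidence=0.95):
    """Clopper-Pearson (exact) binomial confidence interval for a sample proportion."""
    alpha = 1 - confidence
    lower = beta.ppf(alpha / 2, relevant, sample_size - relevant + 1) if relevant > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, relevant + 1, sample_size - relevant) if relevant < sample_size else 1.0
    return lower, upper

low, high = binomial_interval(384, 1534)
print(f"Binomial prevalence interval: {low:.1%} to {high:.1%}")
print(f"Projected relevant documents: {low * 1_000_000:,.0f} to {high * 1_000_000:,.0f}")
```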

The simple random sample of 1,534 documents from the 1,000,000 document collection shows that 95 times out of 100 the true number of relevant documents will be between 229,000 and 273,000.

This also means that no more than five times out of 100 will the calculated interval, here between 22.9% and 27.3%, fail to capture the true value, the true number of relevant documents in the collection. Sometimes the true value, the true number of relevant documents, may be less than 229,000 or greater than 273,000. This is shown in part by the graphic below, which is another visualization that I like to use to help me see what is happening in a predictive coding project. Here the true value lies somewhere between 229,000 and 273,000, or at least 95 times out of 100 it does. When 5 times out of 100 the true value lies outside the range, the divergence is usually small. Most of the time, when the confidence interval misses the true value, it is a near miss. Cases where the confidence interval is far below, or far above, the true value are exceedingly rare.

The Binomial adjustment to the interval calculation is required for low prevalence populations. For instance, if the prevalence was only 2%, and the interval was again 2.5%, the error range would create a negative number, -0.5% (2%-2.5%). It would run from -0.5% to 4.5%. That projects to between zero relevant documents and 45,000. (Obviously you cannot have negative relevant documents.) The zero relevant documents figure is also known to be wrong because you could not have performed the calculation unless there were some relevant documents in the sample. So in this situation of low prevalence the Binomial calculation method is required to produce anything close to accurate projections.

For example, assuming again a 1,000,000 corpus, and a 95%+/-2.5% sample consisting of 1,534 documents, a 2% prevalence results from finding 31 relevant documents. Using the binomial calculator you get a range of 1.4% to 2.9%, instead of -0.5% to 4.5%. The binomial-based interval range results in a projection of between 14,000 relevant documents (instead of the absurd zero relevant documents) and 29,000 relevant documents.
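
The low prevalence case is where the difference matters most. Below is a sketch contrasting the two methods on the 31-out-of-1,534 example; exact binomial figures may differ slightly from the article's 1.4% to 2.9% depending on the method used.

```python
from scipy.stats import beta

relevant, sample_size, collection = 31, 1534, 1_000_000

# Naive Gaussian: 2% +/- 2.5% produces an impossible negative lower bound.
point = relevant / sample_size
print(f"Gaussian interval: {point - 0.025:.1%} to {point + 0.025:.1%}")   # about -0.5% to 4.5%

# Clopper-Pearson binomial interval stays within sensible bounds.
low = beta.ppf(0.025, relevant, sample_size - relevant + 1)
high = beta.ppf(0.975, relevant + 1, sample_size - relevant)
print(f"Binomial interval: {low:.1%} to {high:.1%}")                      # roughly 1.4% to 2.9%
print(f"Projected relevant documents: {low * collection:,.0f} to {high * collection:,.0f}")
```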

Even with the binomial calculation adjustment, the reliability of using probability projections to calculate prevalence is the subject of much controversy among information scientists and probability statisticians (most good information scientists doing search are also probability statisticians, but not vice versa). The reliability of such range projections is controversial in situations like this, where the sample size is low, here only 1,534 documents, and the likely percentage of relevant documents is also low, here only 2%. In this second scenario where only 31 relevant documents were found in the sample, there are too few relevant documents for sampling to be as reliable as it is in higher prevalence collections. I still think you should do it. It does provide good information. But you should not rely completely on these calculations, especially when it comes to the step of trying to calculate recall. You should use all of the quality control procedures you know, including the others listed previously.

Calculating Recall Using Prevalence

Recall is another percentage that represents the proportion between the total number of relevant documents in a collection, and the number of those relevant documents that have been found. So, if you happen to know that there are 10 relevant documents in a collection of 100 documents, and you correctly identify 9 relevant documents, then you have attained a 90% recall level. Referring to the hopefully familiar Search Quadrant shown right, this means that you would have one False Negative and nine True Positives. If you only found one out of the ten, you would have 10% recall (and would likely be fired for negligence). This would be nine False Negatives and one True Positive.

The calculation of Precision requires information on the total number of False Positives. In the first example, where you found nine of the ten relevant documents, suppose you also found nine more that you thought were relevant, but were not (False Positives). What would your precision be? You have found a total of 18 documents that you thought were relevant, and it turns out that only half of them, 9 documents, were actually relevant. That means you had a precision rate of 50%. Simple. Precision could also easily be visualized by various kinds of standard graphs. I suggest that this be added to all search and review software. It is important to see, but, IMO, when it comes to legal search, the focus should be on Recall, not Precision.
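
In code, the toy example works out like this (my illustration; the counts are the ones used in the two paragraphs above).

```python
true_positives = 9    # relevant documents correctly identified
false_negatives = 1   # relevant documents missed
false_positives = 9   # irrelevant documents wrongly identified as relevant

recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"Recall: {recall:.0%}")        # 90%
print(f"Precision: {precision:.0%}")  # 50%
```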

The problem with calculating Recall in legal search is that you never know the total number of relevant documents; that is the whole point of the search. If you knew, you would not have to search. But in fact no one ever knows. Moreover, in large document collections, there is no way to ever exactly know the total number of relevant documents. All you can ever do is calculate probable ranges. You might think that absolute knowledge could come from human review of all One Million documents in our hypothetical. But that would be wrong because humans make too many mistakes, especially with legal judgments as fluid as relevancy determinations. So too do computers, dependent as they are on training by all too fallible humans.

Bottom line, we can never know for sure how many relevant documents are in the 1,000,000 collection, and so we can never know with certainty what our Recall rate is. But we can make a very educated guess, one that is almost certainly correct when a range of Recall percentages is used, instead of just one particular number. We can narrow down the grey area. All experienced lawyers are familiar conceptually with this problem. The law is made in a process similar to this. It arises case by case out of large grey areas of uncertainty.

The reliability of our sample based Recall guess decreases as prevalence lowers. It is a problem inherent to all random sampling. It is not unique to legal evidence search. What is unique to legal search is the importance of Recall to begin with. In many other types of search Recall is not that important. Google is the prime example of this. You do not need to find all websites with relevant information, just the more useful, generally the most popular web pages. Law is moving away from Recall focus, but slowly. And it is more of a move right now from Recall of simple relevance to Recall of the highly relevant. In that sense legal search will in the long run become more like mainstream Googlesque search. But for now the law is still obsessed with finding all of the evidence in the perhaps mistaken belief that justice requires the whole truth. But I digress.

In our initial hypothetical of a 25% prevalence, the accuracy of the recall guess is actually very high, subject primarily to the 95% confidence level limitation. Even in the lower 2% hypothetical, the recall calculation has value. Indeed, it is the basis of much scientific research concerning things like rare diseases and rare species. Again, we enter a hotly debated area of science that is beyond my expertise (although not my interest).

Getting back to our example where we have a 95% confidence level that there are between 229,000 and 273,000 relevant documents in the 1,000,000 document collection – as described before in part one of this series, we assume that after only four rounds of machine training we have reached a point in the project where we are not seeing a significant increase in relevant documents from one round of machine training to the next. The change in document probability ranking has slowed and the visualization of the ranking distribution looks something like the upside-down champagne glass shown right.

At this point a count shows that we have now found 250,000 relevant documents. This is critical information that I have not shared in the first two blogs, information that for the first time allows for a Recall calculation. I held back this information until now for simplicity purposes, plus it allowed me to add a fun math test. (Well, the winner of the test, John Tredennick, CEO of Catalyst, thought it was fun.) In reality you would keep a running count of relevant documents found, and you would have a series of Recall visualizations. Still, the critical Recall calculation takes place when you have decided to stop the review and test.

Assuming we have found 250,000 relevant documents, this means that we have attained anywhere from 91.6% to 100% recall. At least it means we can have a 95% confidence level that we have attained a result somewhere in that range. Put another way, we can have a 95% confidence level that we have attained a 91.6% or higher recall rate. We cannot have 100% confidence in that result. Only 95%. That means that one time out of twenty (the 5% left over from the 95% confidence level) there may be more than 273,000 relevant documents. That in turn means that one time in twenty we may have attained less than a 91.6% recall in this circumstance.

The low side Recall calculation of 91.6% is derived by dividing the 250,000 found, by the high-end of the confidence interval, 273,000 documents. If the spot projection happens to be exactly right, which is rare, and in this hypo is now looking less and less likely (we have, after all, now found 250,000 relevant documents, or at least think we have), then the math would be 100% recall (250,000/250,000). That is extremely unlikely. Indeed, information scientists love to say that the only way to attain 100% recall is with 0% precision, that is, to select all documents. This statement is, among other things, a hyperbole intended to make the uncertainty point inherent in sampling and confidence levels. The 95% Confidence Level uncertainty is shown by the long tail on either side of the standard bell curve pictured above.

You can never have more than 100% recall, of course, so we do not say we have attained anywhere between 91.6% and 109% recall. The low-end estimate of 229,000 relevant documents has, at this point in the project, been shown to be wrong by the discovery and verification of 250,000 relevant documents. I say shown, not proven, because of the previously mentioned liquidity of relevance and the inability of humans to make consistent final judgments when, as here, vast numbers of documents are involved.
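
The end-of-project arithmetic is the same recall-from-prevalence division as before, now using the binomial-based projection. A minimal sketch:

```python
found = 250_000                                      # relevant documents found by project end
projected_low, projected_high = 229_000, 273_000     # binomial prevalence projection

low_recall = found / projected_high                  # about 91.6%
high_recall = min(found / projected_low, 1.0)        # about 109% before capping, reported as 100%

print(f"Recall range: {low_recall:.1%} to {high_recall:.0%}")   # 91.6% to 100%
```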

For a visualization of recall I like the image of a thermometer, like a fund-raising goal chart, but with a twist of two different measures. On the left side put the low-end measure, here the 22.9% confidence interval with 229,000 documents, and on the right side the high measure, the 27.3% confidence interval with 273,000 documents. You can thus chart your progress from the two perspectives at once, the low probability error rate and the high probability error rate. This is shown on the diagram to the right. It shows the metrics of our hypothetical where we have found and confirmed 250,000 relevant documents. That just happens to represent 100% recall on the low end of the probability error range using the 22.9% confidence interval. But as explained before, the 250,000 relevant documents found also represent only 91.6% recall on the high end using the 27.3% confidence interval. You will never really know which is accurate, except that it is safe to bet you have not in fact attained 100% recall.

Random Sample Quality Assurance Test

In any significant project, in addition to following the range of recall progress, I impose a quality assurance test at the end to look for False Negatives. Remember, this means relevant documents that have been miscoded as irrelevant. One way to do that is by running similarity searches and verification of syncing. That can catch situations involving documents that are known to be relevant. It is a way to be sure that all variations of those documents, including similar but different documents, are coded consistently. There may be reasons to call one variant relevant, and another irrelevant, but usually not. I like to put a special emphasis on this at the end, but it is only one of many quality tests and searches that a skilled searcher can and should run throughout any large review project. Visualizations could also be used to assist in this search.

But what about the False Negatives that are not near duplicates or close cousins? The similarity and consistency searches will not find them. Of course you have been looking for these documents throughout the project, and at this point you think that you have found as many relevant documents as you can. You may not think you have found all relevant documents (total recall; no experienced searcher ever really believes that), but you should feel that you have found all highly relevant documents. You should have a well reasoned opinion that you have found all of the relevant documents needed to do justice. That opinion will be informed by legal principles of reasonability and proportionality.

That opinion will also be informed by your experience in searching through this document set. You will have seen for yourself that the probability rankings have divided the documents into two well-defined segments, relevant and irrelevant. You will have seen that no documents, or very few, remain in the uncertainty area, the 40-60% range. You will have personally verified the machine’s predictions many times, such that you will have high confidence that the machine is properly implementing the SME’s relevance concept. You will have seen for yourself that few new relevant documents are found from one round of training to the next. You will also usually have seen that the new documents found are really just more of the same. That they are essentially cumulative in nature. All of these observations, plus the governing legal principles, go into the decision to stop the training and review, and move on to final confidentiality protection review, and then production and privilege logging.

Still, in spite of all such quality control measures, I like to add one more, one based again on random sampling. Again, I am looking for False Negatives, specifically any that are of a new and different kind of relevant document not seen before, or a document that would be considered highly relevant, even if of a type seen before. Remember, I will not have stopped the review in most projects (proportionality constraints aside) unless I was confident that I had already found all of those types of documents; already found all types of strong relevant documents, and already found all highly relevant documents, even if they are cumulative. I want to find each and every instance of all hot (highly relevant) documents that exist in the entire collection. I will only stop (proportionality constraints aside) when I think the only relevant documents I have not recalled are of an unimportant, cumulative type; the merely relevant. The truth is, most documents found in e-discovery are of this type; they are merely relevant, and of little to no use to anybody except to find the strong relevant, new types of relevant evidence, or highly relevant evidence.

There are two types of random samples that I usually run for this final quality assurance test. I can sample the entire document set again, or I can limit my sample to the documents that will not be produced. In the hypothetical we have been working with, that would mean a sample of the 750,000 documents not identified as relevant. I do not do both samples, but rather one or the other. But you could do both in a very large, relatively unconstrained budget project. That would provide more information. Typically in a low prevalence situation, where for instance there is only a 2% relevance shown from both the sample and the ensuing search project, I would do my final quality assurance test with a sample of the entire document collection. Since I am looking for False Negatives, my goal is not frustrated by including the 2% of the collection already identified as relevant.

There are benefits from running a full sample again, as it allows direct comparisons with the first sample, and can even be combined with the first sample for some analysis. You can, for instance, run a full confusion matrix analysis as explained, for instance, in The Grossman-Cormack Glossary of Technology Assisted Review; also see Escape From Babel: The Grossman-Cormack Glossary.

CONFUSION MATRIX

                         Truly Non-Relevant          Truly Relevant
Coded Non-Relevant       True Negatives ("TN")       False Negatives ("FN")
Coded Relevant           False Positives ("FP")      True Positives ("TP")

Accuracy = 100% – Error = (TP + TN) / (TP + TN + FP + FN)
Error = 100% – Accuracy = (FP + FN) / (TP + TN + FP + FN)
Elusion = 100% – Negative Predictive Value = FN / (FN + TN)
Fallout = False Positive Rate = 100% – True Negative Rate = FP / (FP + TN)
Negative Predictive Value = 100% – Elusion = TN / (TN + FN)
Precision = Positive Predictive Value = TP / (TP + FP)
Prevalence = Yield = Richness = (TP + FN) / (TP + TN + FP + FN)
Recall = True Positive Rate = 100% – False Negative Rate = TP / (TP + FN)
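
The glossary formulas above translate directly into a small helper. This is only a sketch, not any vendor's implementation, and the example counts at the end are made up purely for illustration.

```python
def confusion_metrics(tp, tn, fp, fn):
    """Standard confusion matrix metrics, following the formulas listed above."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "error": (fp + fn) / total,
        "elusion": fn / (fn + tn),
        "fallout": fp / (fp + tn),
        "negative predictive value": tn / (tn + fn),
        "precision": tp / (tp + fp),
        "prevalence": (tp + fn) / total,
        "recall": tp / (tp + fn),
    }

# Hypothetical counts, for illustration only.
for name, value in confusion_metrics(tp=380, tn=1100, fp=40, fn=14).items():
    print(f"{name}: {value:.1%}")
```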

Special code and visualizations built into review software could make it far easier to run this kind of Confusion Matrix analysis. It is really far easier than it looks and should be automated in a user-friendly way. Software vendors should also offer basic instruction on this tool. Scientist members of an e-discovery team can help with this. Since the benefits of this kind of analysis outweigh the small loss of including the 2% already known to be relevant in the alternative low prevalence example, I typically go with a full random sample in low prevalence projects.

In our primary hypothetical we are not dealing with a low prevalence collection. It has a 25% rate. Here if I sampled the entire 1,000,000, I would in large part be wasting 25% of my sample. To me that detriment outweighs the benefits of bookend samples, but I know that some experts disagree. They love the classic confusion matrix analysis.

To complete this 25% prevalence visualization hypothetical, next assume that we take a simple random sample of the 750,000 documents only, which is sometimes called the null set. This kind of sample is also sometimes called an Elusion test, as we are sampling the excluded documents to look for relevant documents that have so far eluded us. We again sample 1,534 documents, again allowing us a 95% confidence level and confidence interval of plus or minus 2.5%.

Next assume in this hypothetical that we find that 1,519 documents have been correctly coded as irrelevant. (Note: most of the correct coding would have come from machine prediction, not actual human review, but some would have come from prior human review.) These 1,519 documents are True Negatives. That is 99% accurate. But the SME review of the random sample did uncover 15 mistakes, 15 False Negatives. The SME decided that 15 documents out of the 1,534 sampled had been incorrectly coded as irrelevant. That is a 1% error rate. That is pretty good, but not dispositive. What really matters is the nature of the relevancy of the 15 False Negatives. Were these important documents, or just more of the same?
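
For what it is worth, the elusion sample can also be projected onto the discard pile, although the article's pass or fail decision turns on the nature of the 15 documents rather than on this projection. A sketch, using the Clopper-Pearson binomial interval as before:

```python
from scipy.stats import beta

sample_size, false_negatives, discard_pile = 1534, 15, 750_000

elusion = false_negatives / sample_size
print(f"Elusion rate in the sample: {elusion:.1%}")   # about 1%

# Binomial interval on the elusion rate, projected onto the 750,000 discard pile.
low = beta.ppf(0.025, false_negatives, sample_size - false_negatives + 1)
high = beta.ppf(0.975, false_negatives + 1, sample_size - false_negatives)
print(f"Projected relevant documents still in the discard pile: "
      f"{low * discard_pile:,.0f} to {high * discard_pile:,.0f}")
```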

I always use what is called an accept on zero error protocol for the elusion test when it comes to highly relevant documents. If any are highly relevant, then the quality assurance test automatically fails. In that case you must go back and search for more documents like the one that eluded you and train the system some more. I have only had that happen once, and it was easy to see from the document found why it happened. It was a black swan type document. It used odd language. It qualified as highly relevant under the rules we had developed, but just barely, and it was cumulative. Still, we tried to find more like it and ran another round of training. No more were found, but we still ran another sample of the null set just to be sure, and this second elusion test passed.

In our hypothetical none of the 15 False Negative documents were highly relevant, not even close. None were of a new type of relevance. All were of a type seen before. Thus the test was passed.

The project then continued with the final confidentiality review, production, and logging phases. Visualizations should be included in the software for these final phases as well, and I have several ideas, but this article is already far too long.

As I indicated in part one of this blog series, I am just giving away a few of my ideas here. For more information you will need to contact me for billable consultations, routed through my law firm, of course, and subject to my time availability, with priority given to existing clients. Right now I am fully booked, but I may have time for these kinds of interesting projects in a few months.

Conclusion

The growth in general electronic discovery legal work (see EDBP for a full description) has been exploding this year, and so has multidisciplinary e-discovery team work. It will, I predict, continue to grow very fast from this point forward. But the adoption of predictive coding software and predictive coding review has, to date, been an exception to this high growth trend. In fact, the adoption of predictive coding has been relatively slow. It is still only infrequently used, if at all, by most law firms, even in big cases. I spoke with many attorneys at the recent Georgetown Institute event who specialize in this field. They are all seeing the same thing and, like me, are shaking their heads in frustration and dismay.

I predict this will change too over the next two to three years. The big hindrance to the adoption of predictive coding is law firms and their general lack of knowledge and skills in this area. Most law firms, both big and small, know very little about the basic methods of predictive coding. They know even less about the best practices. The ignorance is widespread among attorneys my age, and they are the ones in law firm leadership positions. The hindrance to widespread adoption of predictive coding is not lack of judicial approval. There is now plenty of case law. The hindrance is lack of knowledge and skills.


Greedy Lawyers

There is also a greed component involved for some, shall we say, less than client-centric law firms. We have to talk about this elephant in the room. Clients already are. Some attorneys are quite satisfied with the status quo. They make a great deal of money from linear reviews, and from so-called advanced keyword-search-driven reviews. The days of paid inefficiency are numbered. Technology will eventually win out, even over fat cat lawyers. It always does.

The answers I see to the resistance issues to predictive coding are threefold:

Continued Education. We have to continue the efforts to demystify AI and active machine learning. We need to move our instruction from theory to practice.

Improved Software. Some review software already has excellent machine training features. Some is just so-so, and some lacks this kind of document search and ranking capacity entirely. My goal is to push the whole legal software industry to include active machine learning in most, if not all, of their offerings. Another goal is for software vendors to improve their software, and make it easier to work with, by adding much more in the way of creative visualizations. That has been the main point of this series and I hope to see a response soon from the industry. Help me to push the industry. Demand these features in your review software. Look beyond the smokescreens and choose the true leaders in the field.

Client Demand. Pressure on reluctant law firms from the companies that pay the bills will have a far stronger impact than anything else.  I am talking about both corporate clients and insurers. They will, I predict, start pushing law firms into greater utilization of AI-enhanced document review. The corporate clients and insurers have the economic motivation for this change that most law firms lack. Corporate clients are also much more comfortable with the use of AI for Big Data search. That kind of pressure by clients on law firms will motivate e-discovery teams to learn the necessary skills. That will in turn motivate the software vendors to spend the money necessary to improve their software with better AI search and better visualizations.

All of the legal software on the market today, especially review software, could be improved by adding more visualizations and graphic display tools. Pictures really can be worth a thousand words. They can especially help to make advanced AI techniques more accessible and easier to understand. The data visualization ideas set forth in this series are just the tip of the iceberg of what can be done to improve existing software.


Genius Bar at Georgetown

November 23, 2014

I interrupt my current series of blogs on predictive coding visualization to report on a recent experience with a Genius Bar event. I am not talking about the computer hipster type geniuses that work at the Apple Genius Bar, although there were a few of them at the CLE too. The Apple Genius Bar types can be smart, but, as we all know, they are not really geniuses, even if that is their title. True genius is rare, especially in the Legal Bar. Wikipedia says that a genius is a person who displays exceptional intellectual ability, creativity, or originality, typically to a degree that is associated with the achievement of new advances in a domain of knowledge.


All of us who attended the Georgetown Advanced e-Discovery Institute this week saw a true genius in action. He did not wear the tee-shirt uniform of the Apple genius employees. He wore a bow tie. His name is John M. Facciola. His speech at Georgetown was his last public event before he retires next week as a U.S. Magistrate Judge.


Judge Facciola’s one hour talk displayed exceptional intellectual ability, creativity and originality, just as the definition of genius requires. What else can you call a talk that features a judge channeling Socrates? An oration that uses Plato’s Apology to criticize and enlighten Twenty-First Century lawyers? …. sophists all. The intensity of John’s talk, to me at least, and I’m sure to most of the six hundred or so other lawyers in the room, also indicated a new advance in the making in the domain of knowledge of Law. Still, true genius requires that an advance in knowledge actually be achieved, not just talked about. It requires that the world itself be moved. It requires, as another genius of our day, Steve Jobs, liked to say, that a dent be made in the Universe.

Geniuses not only have intellectual ability, creativity and originality, they have it to such a degree that they are able to change the world. In the legal world, indeed any world, that is rare. Richard Braman was one such man. His Sedona Conference did make a dent in the legal universe. So did the Principles, and so did his crowning achievement, the Cooperation Proclamation. John Facciola is another such man, or may yet be, who is trying to take Cooperation to the next level, to expand it to Platonic heights. To be honest, the jury is still out on whether his ingenious ideas and proposals will in fact be adopted by the Bar, will in fact lead to the achievement of new advances in a domain of knowledge. That is the true test of a real genius.

Thus, whether future generations will see John Facciola as a genius depends in no small part on all of us, as well as on what John Facciola does next. For unlike the genius of Jobs and Braman, Facciola may be retired as a judge, but he is still very much alive. His legacy is still in the making. For that we should be very grateful. I for one cannot wait to see what he does next and will continue to support his genius in the making.

All of the other judges at Georgetown made it clear where they stand on the ideas of virtue and justice that Facciola promotes. In the final judges panel each judge wore a funny bow tie in his honor, and all were introduced by panel leader Maura Grossman with Facciola as their last name. It was a very touching and funny moment, all at the same time. I am really glad I was there.

Facciola’s last speech as a judge reflected his own life, his own genius. It was a very personal talk, a deep talk, where, to use his words, he shared his own strong religious and spiritual convictions. In this context he shared his critique of the law as we currently know it, and of legal ethics. It was damning and based on long experience. It was real. Some might say harsh. But he balanced this with his inspirational vision of what the law could and should be in the future. A law where morality, not profit, is the rule. Where the Golden Rule trumps all others. A profession where lawyers are not sophists who will say or do anything for their clients. He laments that in federal court today most of the litigants are big corporations, as only they can afford federal court.

Judge Facciola calls for a profession where lawyers are citizens who care, who try to do the right thing, the moral thing, not just the expedient or profitable thing for their clients. He calls for lawyers to cooperate. He calls for a complete rewrite of our codes of ethics to make them more humanistic, and at the same time, more spiritual, more Platonic, in the ancient philosophic sense of Truth and Goodness. This is the genius we saw shine at Georgetown.

It reminds me of some quotes from Plato’s Apology, a few excerpts of which Facciola also read during his last talk as a judge. Take a moment and remember with me the most famous closing argument of all time:

Men of Athens, I honor and love you; but I shall obey God rather than you, and while I have life and strength I shall never cease from the practice and teaching of philosophy, exhorting anyone whom I meet after my manner, and convincing him, saying: O my friend, why do you who are a citizen of the great and mighty and wise city of Athens, care so much about laying up the greatest amount of money and honor and reputation, and so little about wisdom and truth and the greatest improvement of the soul, which you never regard or heed at all? Are you not ashamed of this? And if the person with whom I am arguing says: Yes, but I do care; I do not depart or let him go at once; I interrogate and examine and cross-examine him, and if I think that he has no virtue, but only says that he has, I reproach him with undervaluing the greater, and overvaluing the less. And this I should say to everyone whom I meet, young and old, citizen and alien, but especially to the citizens, inasmuch as they are my brethren. For this is the command of God, as I would have you know; and I believe that to this day no greater good has ever happened in the state than my service to the God. For I do nothing but go about persuading you all, old and young alike, not to take thought for your persons and your properties, but first and chiefly to care about the greatest improvement of the soul. I tell you that virtue is not given by money, but that from virtue come money and every other good of man, public as well as private. This is my teaching, and if this is the doctrine which corrupts the youth, my influence is ruinous indeed. But if anyone says that this is not my teaching, he is speaking an untruth. Wherefore, O men of Athens, I say to you, do as Anytus bids or not as Anytus bids, and either acquit me or not; but whatever you do, know that I shall never alter my ways, not even if I have to die many times.


For the truth is that I have no regular disciples: but if anyone likes to come and hear me while I am pursuing my mission, whether he be young or old, he may freely come. Nor do I converse with those who pay only, and not with those who do not pay; but anyone, whether he be rich or poor, may ask and answer me and listen to my words; and whether he turns out to be a bad man or a good one, that cannot be justly laid to my charge, as I never taught him anything. And if anyone says that he has ever learned or heard anything from me in private which all the world has not heard, I should like you to know that he is speaking an untruth.

If Facciola’s positive, Socratic-inspired moral vision for the Law is realized, and I for one think it is possible, then it would be a great new advance in the field of Law. The legal universe would be dented again. It would cement Facciola’s own place as a great Twenty-First Century genius, right up there with Jobs and Braman.

I am sure that Judge Facciola will continue his educational efforts in the field of law after the judge title becomes honorific. I hope he will give more specific form to his reform proposals. I cannot hope that his educational efforts will increase, because they are already incredibly prodigious, but I can hope they will now focus on his legacy, on his particular genius for legal ethics.

Many of our judges and attorneys work hard on e-discovery education. Many have great intellectual ability. But not many are capable of displaying the kind of genius we saw from Facciola’s swan-song as a judge at Georgetown. Georgetown is his alma mater, and the students at the Institute, whom we have taken to calling the audience these days, included many of John’s friends and admirers. It brought out the best in Fatch.

There were over 600 students, or fans, or audience members, whatever you want to call them, who attended the Georgetown event held at the Ritz Carlton in Tysons Corner. That is a lot of people, nearly all of them lawyers. To be honest, that was several hundred lawyers too many for any CLE event. Big may be better in data, but not in education.

I liked the Institute better in its early days when there were just a few dozen attendees. I was there near the beginning as a teacher, and considered my sessions to be classes. The people who paid to attend were considered students. That is the language we used then. Now that has all changed. Now I attend as a presenter, and the people who pay to attend are called an audience. It seems like a transition that Socrates would condemn.

The big crowd and entertainment aspects of this year’s Georgetown Institute reminded me of a big event in Canada last month where I was honored to give the keynote on the first day. I talked about Technology and the Future of the Law, and, as usual, had my razzle dazzle Keynote slides. (I don’t use PowerPoint.) On the second day they had a second keynote. I was surprised to learn he was a professional motivational speaker. Not even a lawyer. My honor faded quickly. The keynote was all salesman rah-rah, with no mention of the law at all. That’s not right in my book. It also made me wonder why I was really asked to give the first day’s keynote. Oh well, it was otherwise a great event. But I am now starting to tone down my slides. If I could tone down my enthusiasm, I would too, but I’ve tried, and that’s not possible.

The task of putting on a show for a large, 600-plus audience was too great a challenge for almost all of the presenters at Georgetown. Do not get me wrong, all of the attorneys tagged to present knew their stuff, but being an expert and being an educator are very different things. Being an expert and being an entertainer are almost night and day. Very, very few experts have Facciola’s skill at doing that, and he, by the way, used no slides at all. (I cannot, however, help but think how it might have been improved by the projection of a large holographic image of Socrates.)

Most of the sessions I attended at Georgetown were like any other CLE, fairly boring. We presenters (at least we were not called performers) were all told to engage our audience, to get them talking, but that almost never happened. The shows were no doubt educational, at least to those who had not seen them before. But entertaining? Even slightly amusing? No, not really. Oh, a few of the panels had their moments, and some were very interesting at times, even to me. A couple even made me laugh a few times. But only one was pure genius. The solo performance of Judge John Facciola.

I found especially compelling his role-playing as Socrates, along with his quotes of Plato, where he read from the Greek original of his high school book from long ago. Judge Facciola presented with a light and witty hand both his dark condemnations of our profession’s failings, and his hope for a different, more virtuous future. His sense of humor about the human predicament made it all work. Humor is a quality possessed by most geniuses, and near geniuses. John radiates with it, and makes you smile, even if you cannot hear or understand all of his words. And even if many of his words anger you. I have no doubt some who heard this talk did not like his bluntness, nor his call for spirituality and a complete rewrite, with non-lawyer participation, of our professional code of ethics. Well, they did not like Socrates either. It comes with the turf of know-nothing truth-tellers. That is what happens when you speak truth to power.

I thought of trying to share the contents of John’s Apology by consulting my notes and memory. But that could never do it justice. I am no Plato. And really, truth be told, I know Nothing. You have to see the full video of John’s talk for yourself. And you can. Yes! Unlike Socrates’ last talk, Georgetown filmed John’s talk. Not only that, they filmed the whole CLE event. I suspect Georgetown will profit handsomely from all of this. John, of course, was paid nothing, and he would have it no other way.

Dear Georgetown advisors, and Dean Center, good citizens and friends all, please make a special exception regarding payment for the video of John Facciola’s talk. In the spirit of Socrates and your mission as educators, I respectfully request that you publish it online, in full, free of charge. Not the whole event, mind you, but John’s talk, all of his talk. Everyone should see this, not just the bubble people, not just Georgetown graduates and insiders. Let anyone, whether they be rich or poor, listen to these words. Put it on YouTube. Circulate it as widely as you can. Let me know and I will help you to get the word out. Give it away. No charge. You know that is what Socrates would demand.

In the meantime, for all of my dear readers not lucky enough to have been there, here is a short fair use video that I made of Judge Facciola’s concluding words. Here he makes a humorous reference to the final passage he had previously quoted in full from Plato’s Apology. This is at the very end, where Socrates asks his friends to punish his sons, the way he has tormented them, should they fall from the way of virtue. Having a son myself, I will finish this blog with the full quote from Plato and make the same request of you all. And I do not mean the humorous reference to long hair in Facciola’s concluding joke; I mean the real Socratic reference to virtue over money and a puffed up sense of self-importance. A reference that we should all take to heart, not just Adam.

Do to my sons as I have done to you.

Still I have a favour to ask of them. When my sons are grown up, I would ask you, O my friends, to punish them; and I would have you trouble them, as I have troubled you, if they seem to care about riches, or anything, more than about virtue; or if they pretend to be something when they are really nothing,—then reprove them, as I have reproved you, for not caring about that for which they ought to care, and thinking that they are something when they are really nothing. And if you do this, both I and my sons will have received justice at your hands.

The hour of departure has arrived, and we go our ways—I to die, and you to live. Which is better God only knows.


Visualizing Data in a Predictive Coding Project – Part Two

November 16, 2014

This is part two of my presentation of an idea for visualization of data in a predictive coding project. Please read part one first.

As most of you already know, the ranking of all documents according to their probable relevance, or other criteria, is the purpose of predictive coding. The ranking allows accurate predictions to be made as to how the documents should be coded. In part one I shared the idea by providing a series of images of a typical document ranking process. I only included a few brief verbal descriptions. This week I will spell it out and further develop the idea. Next week I hope to end on a high note with random sampling and math.

Vertical and Horizontal Axis of the Images

The visualizations presented here all represent a collection of documents. Each is supposed to be a pointillist image, with one point for each document. At the beginning of a document review project, before any predictive coding training has been applied to the collection, the documents are all unranked. They are relatively unknown. This is shown by the fuzzy round cloud of unknown data.

Once the machine training begins all documents start to be ranked. In the most simplistic visualizations shown here the ranking is limited to predicted relevance or irrelevance. Of course, the predictions could be more complex, and include highly relevant and privileged classifications, which is what I usually do. They could also include various other issue classifications, but I usually avoid this for a variety of reasons that would take us too far astray to explain.

Once the training and ranking begin the probability grid comes into play. This grid creates both a vertical and horizontal axis. (In the future we could add a third dimension too, but let’s start simple.) The one public comment received so far stated that the vertical axis on the images, showing percentages adjacent to the words “Probable Relevant,” might give people the impression that it is the probability of a document being relevant. Well, I hope so, because that is exactly what I was trying to do!

The vertical axis shows how the documents are ranked. The horizontal axis shows the number of documents, roughly, at each ranking level. Remember, each point is supposed to represent a specific, individual document. (In the future we could add family overlays, but again, let’s start simple.) A single dot in the middle would represent one document. An empty space would represent zero documents. A wide expanse of horizontal dots would represent hundreds or thousands of documents, depending on the scale.

The diagram below visualizes a situation common when ranking has just begun and the computer is uncertain as to how to classify the documents. It classifies most documents in the 37.5% to 67.5% range of probable relevance. It is all about fifty-fifty at this point. This is the kind of spread you would expect to see if training began with only random sampling input. The diagram indicates that the computer does not really know much yet about the data. It does not yet have any real idea as to which documents are relevant, and which are not.


The vertical axis of the visualization is the key. It is intended to show a running grid from 99.9% probable relevant to 0.01% probable relevant. Note that 0.01% probable relevant is another way of saying 99.9% probable irrelevant, but remember, I am trying to keep this simple. More complex overlays may be more to the liking of some software users. Also note that the particular numbers I show on these diagrams are arbitrary: 0.01%, 12.5%, 25%, 37.5%, 50%, 67.5%, 75%, 87.5%, 99.9%. I would prefer to see more detail here, and perhaps add a grid showing a faint horizontal line at every 10% interval. Still, the fewer lines shown here do have a nice aesthetic appeal, plus they were easier for me to create on the fly for this blog.

Again, let me repeat to be very clear. The vertical grid on these diagrams represents the probable ranking from least likely to be relevant on the bottom, to most likely on the top. The horizontal grid shows the number of documents. It is really that simple.
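
For readers who want to experiment, here is a rough sketch of how such a display could be generated with ordinary Python plotting tools. The scores are simulated with made-up distributions; real review software would supply each document's actual probable relevance ranking.

```python
# Rough sketch of the ranking display described above: probable relevance on
# the vertical axis, document count on the horizontal axis. The scores are
# simulated; real review software would supply the actual rankings.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
scores = np.concatenate([
    rng.beta(1, 20, 700_000),    # bottom-heavy mass of probable irrelevant
    rng.beta(20, 1, 250_000),    # probable relevant mass near the top
    rng.uniform(0, 1, 50_000),   # documents the machine is still unsure about
])

counts, edges = np.histogram(scores, bins=40, range=(0, 1))
centers = (edges[:-1] + edges[1:]) / 2

fig, ax = plt.subplots(figsize=(4, 8))
ax.barh(centers, counts, height=1 / 40)
ax.set_ylabel("Probable relevance ranking")
ax.set_xlabel("Number of documents")
ax.set_ylim(0, 1)
plt.tight_layout()
plt.show()
```

Rerunning a plot like this after each round of training would produce the kind of progression pictured in the diagrams later in this post.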

Why Data Visualization Is Important

This kind of display of documents according to a vertical grid of probable relevance is very helpful because it allows you to see exactly how your documents are ranked at any one point in time. Just as important, it helps you to see how the alignment changes over time. This empowers you to see how your machine training impacts the distribution.

This kind of direct, immediate feedback greatly facilitates human computer interaction (what I call, in my approximately 50 articles on predictive coding, the hybrid approach). It makes it easier for the natural human intelligence to connect with the artificial intelligence. It makes it easier for the human SMEs involved to train the computer. The humans, typically attorneys or their surrogates, are the ones with the expertise on the legal issues in the case. This visualization allows them to see immediately what impact particular training documents have upon the ranking of the whole collection. This helps them to select effective training documents. It helps them to attain the goal of separating relevant from irrelevant documents. Ideally the documents would be clustered at both the bottom and top of the vertical axis.

For this process to work it is important for the feedback to be grounded in actual document review, and not be a mere intellectual exercise. Samples of documents in the various ranking strata must be inspected to verify, or not, whether the ranking is accurate. That can vary from stratum to stratum. Moreover, as everyone quickly finds out, each project is different, although certain patterns do tend to emerge. The diagrams used as an example in this blog represent one such typical pattern, although greatly compressed in time. In reality the changes shown here from one diagram to another would be more gradual and have a few unexpected bumps and bulges.

Visualizations like this will speed up the ranking and the review process. Ultimately the graphics will all be fully interactive. By clicking on any point in the graphic you will be taken to the particular document or documents that it represents. Click and drag, and you are taken to the whole set of documents selected. For instance, you may want to see all documents between 45% and 55%, so you would select that range in the graphic. Or you may want to see all documents in the top 5% probable relevance ranking, so you select that top edge of the graphic. These documents will instantly be shown in the review database. Most good software already has document visualizations with similar linking capacities. So we are not reinventing the wheel here, just applying these existing software capacities to new patterns, namely to document rankings.
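
As a toy illustration of that click-and-drag selection, here is what the underlying range query might look like if the rankings were exposed as a simple table. The table and column names are hypothetical placeholders; every review platform will have its own way of exposing this.

```python
# Toy illustration of selecting documents by ranking band. The DataFrame and
# column names are hypothetical placeholders, not any vendor's actual schema.
import pandas as pd

docs = pd.DataFrame({
    "doc_id": [101, 102, 103, 104, 105],
    "prob_relevant": [0.03, 0.48, 0.52, 0.91, 0.99],
})

def select_band(df, low, high):
    """Return every document whose probable relevance falls within the band."""
    return df[(df["prob_relevant"] >= low) & (df["prob_relevant"] <= high)]

uncertain = select_band(docs, 0.45, 0.55)   # the 45% to 55% band mentioned above
top_edge = select_band(docs, 0.95, 1.00)    # the top edge of the ranking
```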

These graphic features will allow you to easily search the ranking locations. This will in turn allow you to verify, or correct, the machine’s learning. Where you find that the documents clicked have a correct prediction of relevance, you verify by coding as relevant, or highly relevant. Where the documents clicked have an incorrect prediction, you correct by coding the document properly. That is how the computer learns. You tell it yes when it gets it right, and no when it gets it wrong.

At the beginning of a project many predictions of relevance and irrelevance will be incorrect. These errors will diminish as the training progresses, as the correct predictions are verified, and erroneous predictions are corrected. Fewer mistakes will be made as the machine starts to pick up the human intelligence. To me it seems like a mind to computer transference. More of the predictions will be verified, and the document distributions will start to gather at both ends of the vertical relevance axis. Since the volume of documents is represented by the horizontal axis, more documents will start to bunch together at both the top and bottom of the vertical axis. Since document collections in legal search usually contain many more irrelevant documents than relevant, you will typically see most documents on the bottom.

Visualizations of an Exemplar Predictive Coding Project

In the sample considered here we see unnaturally rapid training. It would normally take many more rounds of machine training than are shown in these four diagrams. In fact, with a continuous active training process, there could be hundreds of rounds per day. In that case the visualization would look more like an animation than a series of static images. But again, I have limited the process here for simplicity’s sake.

As explained previously, the first thing that happens to the fuzzy round cloud of unknown data before any training begins is that the data is processed, deduplicated, deNISTed, and non-text and other documents unsuitable for analytics are removed. In addition, other documents necessarily irrelevant to this particular project are bulk-culled out, for example ESI such as music files, some types of photos, and emails from many domains, such as emails from publications like the NY Times. By good fortune in this example exactly One Million documents remain for predictive coding.

We begin with some multimodal judgmental sampling, and with a random sample of 1,534 documents. (They are the yellow dots.) Assuming a 95% confidence level, do you know what confidence interval this creates? I asked this question before and repeat it again, as the answer will not come until the final math installment next week.

Next we assume that an SME, or his or her surrogates, reviewed the 1,534 sample documents and found that 384 were relevant and 1,150 were irrelevant. Do you know what prevalence rate this creates? Do you know the projected range of relevant documents within the confidence interval limits of this sample? That is the most important question of all.
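
For anyone who wants to check their own answer before the final installment, here is one standard way the prevalence arithmetic could be set up. This sketch uses a simple normal-approximation interval; it is an illustration only, not the worked answer promised for part three.

```python
# One standard way to set up the prevalence arithmetic for this hypothetical:
# 384 relevant documents found in a random sample of 1,534, drawn from a
# collection of 1,000,000. Normal-approximation interval; an illustration only.
import math

def prevalence_range(relevant_in_sample, sample_size, population, z=1.96):
    p = relevant_in_sample / sample_size                  # point estimate
    margin = z * math.sqrt(p * (1 - p) / sample_size)     # 95% margin of error
    low, high = max(0.0, p - margin), min(1.0, p + margin)
    return p, (low, high), (int(low * population), int(high * population))

# Run this yourself to check your answers to the quiz questions above.
p, interval, projected_docs = prevalence_range(384, 1534, 1_000_000)
```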

Next we do the first round of machine training proper. The first round of training is sometimes called the seed set. Now the document ranking according to probable relevance and irrelevance begins. Again, for simplicity’s sake, we assume that the analytics is directed towards relevance alone. In fact, most projects would also include high-relevance and privilege.

In this project the data ball changed to the following distribution. Note that the lighter colors represent a lower density of documents. Red points represent documents coded or predicted as relevant, and blue as irrelevant. All predictive coding projects are different and the distributions shown here are just one among near countless possibilities. Here there are already more documents trained on irrelevance than relevance. This is in spite of the fact that the active search was to find relevant documents, not irrelevant documents. This is typical in most review projects, where you have many more irrelevant than relevant documents overall, and where it is easier to spot and find irrelevant documents than relevant ones.

Next we see the data after the second round of training. The division of the collection into relevant and irrelevant documents is beginning to form. The largest collection of documents is the blue points at the bottom. They are the documents that the computer predicts are irrelevant based on the training to date. There is also a large collection of points shown in red at the top. They are the ones where the computer now thinks there is a high probability of relevance. Still, the computer is uncertain about the vast majority of the documents: the red in the third stratum from the top, the blue in the third stratum from the bottom, and the many in the grey, the 37.5% to 67.5% probable relevance range. Again we see an overall bottom heavy distribution. This is a typical pattern because it is usually easier to train on irrelevance than relevance.

As noted before, the training could be continuous. Many software programs offer that feature. But I want to keep the visualizations here simple, and not make an animation, and so I do not assume here a literally continuous active learning. Personally, although I do like to keep the training continuous throughout the review, I like the actual computer training to come in discrete stages that I control. That gives me a better understanding of the impact of my machine training. The SME human trains the machine, and, in an ideal situation, the machine also trains the SME. That is the kind of feedback that these visualizations enhance.

Next we see the data after the third round of training. Again, in reality it would typically take more than three rounds of training to reach this relatively mature state, but I am trying to keep this example simple. If a project did progress this fast, it would probably be because a large number of documents were used in the prior rounds. The documents about which the computer is now uncertain, the grey area and the middle two brackets, are now far fewer.

The computer now has a high probability ranking for most of the probable relevant and probable irrelevant documents. The largest number of documents are at the blue bottom, where the computer predicts they have a near zero chance of being classified relevant. Again, most of the probable predictions, those in the top and bottom three brackets, are located in the bottom three brackets. Those are the documents predicted to have less than a 37.5% chance of being relevant. Again, this kind of distribution is typical, but there can be many variances from project to project. We here see a top loading where most of the probable relevant documents are in the top 12.5% ranking. In other words, they have an 87.5% probable relevance ranking, or higher.

Next we see the data after the fourth round of training. It is an excellent distribution at this point. There are relatively few documents in the middle. This means there are relatively few documents about which the computer is uncertain as to their probable classification. This pattern is one factor among several to consider in deciding whether further training and document review are required to complete your production.

Another important metric to consider is the total number of documents found to be probable relevant, and a comparison with the random sample prediction. Here is where math comes in, along with an understanding of what random sampling can and cannot tell you about the success of a project. You consider the spot projection of total relevance based on your initial prevalence calculation, but, much more important, you consider the actual range of documents under the confidence interval. That is what really counts when dealing with prevalence projections and random sampling. That is where the plus or minus confidence interval comes into play, as I will explain in detail in the third and final installment of this blog.
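
That range idea can be reduced to a few lines of code. The sketch below simply divides the relevant documents actually found by the low and high ends of the projected range of relevant documents; the inputs shown are placeholders for illustration, not the final numbers of this hypothetical.

```python
# Sketch of a probable recall range: documents found divided by the low and
# high ends of the projected range of relevant documents in the collection.
# The inputs are placeholders, not the final numbers of this hypothetical.

def recall_range(relevant_found, projected_low, projected_high):
    # The high end of the projection gives the low end of recall, and vice versa.
    low = relevant_found / projected_high
    high = min(1.0, relevant_found / projected_low)
    return low, high

low_recall, high_recall = recall_range(
    relevant_found=200_000, projected_low=230_000, projected_high=270_000)
print(f"Probable recall range: {low_recall:.0%} to {high_recall:.0%}")
```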

In the meantime, here is the document count of the distribution roughly pictured in the final diagram above, which to me looks like an upside down, fragile champagne glass. We see that exactly 250,000 documents have a 50% or higher probable relevance ranking, and 750,000 documents have a 49.9% or less probable relevance ranking. Of the probable relevant documents, there are 15,000 that fall in the 50% to 67.5% range. There are another 10,000 documents that fall in the 37.5% to 49.9% probable relevance range. Again, this is fairly common, as we often see fewer documents on the barely irrelevant side than we do on the barely relevant side. As a general rule I review with humans all documents that have a 50% or higher probable relevance ranking, and do not review the rest. I do, however, sample and test the rest, the documents with less than a 50% probable relevance ranking. Also, in some projects I review far less than the top 50%. That all depends on proportionality constraints, and on the document ranking distribution, the kind of distribution that these visualizations will show.
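
A per-stratum document count like that is easy to produce once each document has a ranking. Here is a short sketch that bins the same simulated scores used in the plotting example above into the strata shown in these diagrams; with real ranking data in place of the simulated scores, the same few lines produce the champagne-glass counts described above.

```python
# Sketch of a per-stratum document count, using the same simulated scores as
# the plotting example above. The strata follow the grid used in these diagrams.
import numpy as np

rng = np.random.default_rng(42)
scores = np.concatenate([
    rng.beta(1, 20, 700_000),
    rng.beta(20, 1, 250_000),
    rng.uniform(0, 1, 50_000),
])

strata = [0.0, 0.125, 0.25, 0.375, 0.50, 0.675, 0.75, 0.875, 1.0]
counts, _ = np.histogram(scores, bins=strata)
for low, high, count in zip(strata[:-1], strata[1:], counts):
    print(f"{low:>6.1%} to {high:<6.1%}: {count:>9,} documents")

print(f"50% or higher probable relevance: {int((scores >= 0.5).sum()):,} documents")
```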

In addition to this metrics analysis, another important factor to consider in deciding whether our search and review efforts are now complete is how much change in ranking there has been from one training round to the next. Sometimes there may be no change at all. Sometimes there may only be very slight changes. If the changes from the last round are large, that is an indication that more training should still be tried, even if the distribution already looks optimal, as we see here.

Another even more important quality control factor is how correct the computer has been in the last few rounds of its predictions. Ideally, you want to see the rate of error decreasing to a point where you see no errors in your judgmental samples. You want your testing of the computer’s predictions to show that it has attained a high degree of precision. That means there are few documents predicted relevant that actual review by human SMEs shows are in fact irrelevant. This kind of error is known as a False Positive. Much more important to quality evaluation is the discovery of documents predicted irrelevant that are actually relevant. This kind of error is known as a False Negative. The False Negatives are your real concern in most projects because legal search is usually focused on recall, not precision, at least within reason.

The final distinction to note in quality control is one that might seem subtle, but really is not. You must also factor in relevance weight. You never want a False Negative to be a highly relevant document. If that happens to me, I always commence at least one more round of training. Even missing a document that is not highly relevant, not hot, but is a strongly relevant document, and one of a type not seen before, is typically a cause for further training. This is, however, not an automatic rule, as it is with the discovery of a hot document. It depends on a variety of factors having to do with relevance analysis of the particular case and document collection.

In our example we are going to assume that all of the quality control indicators are positive, and a decision has been made to stop training and move on to a final random sample test.

A second random sample comes next. That test and visualization will be provided next week, along with the promised math and sampling analysis.

Math Quiz

In part one, and again here, I asked some basic math questions on random sampling, prevalence, and recall. So far no one has attempted to answer the questions posed. Apparently, most readers here do not want to be tested. I do not blame them. This is also what I find in my online training program, e-DiscoveryTeamTraining.com, where only a small percentage of the students who take the program elect to be tested. That is fine with me as it means one less paper to grade, and most everyone passes anyway. I do not encourage testing. You know if you get it or not. Testing is not really necessary.

The same applies to answering math questions in a public blog. I understand the hesitancy. Still, I hope many privately tried, or will try, to solve the questions and come up with the correct answers. In part three of this blog I will provide the answers, and you will know for sure if you got it right. There is still plenty of time to try to figure it out on your own. The truly bold can post it online in the comments below. Of course, this is all pretty basic stuff to true experts of this craft. So, to my fellow experts out there, you have yet another week to take some time and strut your stuff by sharing the obvious answers. Surely I am not the only one in the e-discovery world bold enough to put their reputation on the line by sharing their opinions and analysis in public for all to see (and criticize). Come on. I do it every week.

Math and sampling are important tools for quality control, but as Professor Gordon Cormack, a true wizard in the area of search, math, and sampling, likes to point out, sampling alone has many inherent limitations. Gordon insists, and I agree, that sampling should only be part of a total quality control program. You should never rely on random sampling alone, especially in low prevalence collections. Still, when sampling, prevalence, and recall are included as part of an overall QC effort, the net effect is very reassuring. Unless I know that I have an expert like Gordon on the other side, and so far that has never happened, I want to see the math. I want to know about all of the quality control and quality assurance steps taken to try to find the information requested. If you are going to protect your client, you need to learn this too, or have someone at hand who already knows it.

This kind of math, sampling, and other process disclosures should convince even the most skeptical adversary or judge. That is why it is important for all attorneys involved with legal search to have a clear mathematical understanding of the basics. Visualizations alone are inadequate, but, for me at least, visualizations do help a lot. All kinds of data visualizations, not just the ones presented here, provide important tools to help lawyers understand how a search project is progressing.

Challenge to Software Vendors

The simplicity of the design presented here is a key part of the power and strength of the visualization. It should not be too difficult to write code to implement this visualization. We need this. It will help users to better understand the process. It will not cost too much to implement, and what it does cost should be recouped soon in higher sales. Come on vendors, show me you are listening. Show me you get it. If you have a software demo that includes this feature, then I want to see it. Otherwise not.

All good predictive coding software already ranks the probable relevance of documents, so we are not talking about an enormous coding project. This feature would simply add a visual display to calculations already being made. I could make these calculations myself by hand using an Excel spreadsheet, but that is time-consuming and laborious. This kind of visualization lends itself to computer generation.

I have many other ideas for predictive coding features, including other visualizations, that are much more complex and challenging to implement. This simple grid explained here is an easy one to implement, and will show me, and the rest of our e-discovery community, who the real leaders are in software development.

Conclusion

The primary goal of the e-Discovery Team blog is educational, to help lawyers and other e-discovery professionals. In addition, I am trying to influence what services and products are provided in e-discovery, both legal and technical. In this blog I am offering an idea to improve the visualizations that most predictive coding software already provides. I hope that all vendors will include this feature in future releases of their software. I have a host of additional ideas to improve legal search and review software, especially the kind that employs active machine learning. They include other, much more elaborate visualization schemes, some of which have been alluded to here.

Someday I may have time to consult on all of the other, more complex ideas, but, in the meantime, I offer this basic idea for any vendor to try out. Until vendors start to implement even this basic idea, anyone can at least use their imagination, as I now do, to follow along. These kinds of visualizations can help you to understand the impact of document ranking on your predictive coding review projects. All it takes is some idea as to the number of documents in the various probable relevance ranking strata. Try it on your next predictive coding project, even if it is just rough images from your own imagination (or Excel spreadsheet). I am sure you will see for yourself how helpful this can be to monitor and understand the progress of your work.

 

 


Hadoop, Data Lakes, Predictive Analytics and the Ultimate Demise of Information Governance – Part Two

November 2, 2014

This is the second part of a two-part blog; please read part one first.

AI-Enhanced Big Data Search Will Greatly Simplify Information Governance

Information Governance is, or should be, all about finding the information you need, when you need it, and doing so in a cheap and efficient manner. Information needs are determined by both law and personal preferences, including business operation needs. In order to find information, you must first have it. Not only that, you must keep it until you need it. To do that, you need to preserve the information. If you have already destroyed information, really destroyed it I mean, not just deleted it, then obviously you will not be able to find it. You cannot find what does not exist, as all Unicorn chasers eventually find out.

This creates a basic problem for Information Governance because the whole system is based on a notion that the best way to find valuable information is to destroy worthless information. Much of Information Governance is devoted to trying to determine what information is a valuable needle, and what is worthless chaff. This is because everyone knows that the more information you have, the harder it is for you to find the information you need. The idea is that too much information will cut you off. These maxims were true in the pre-AI-Enhanced Search days, but are, IMO, no longer true today, or, at least, will not be true in the next five to ten years, maybe sooner.

In order to meet the basic goal of finding information, Information Governance focuses its efforts on the proper classification of information. Again, the idea was to make it simpler to find information by preserving some of it, the information you might need to access, and destroying the rest. That is where records classification comes in.

The question of what information you need has a time element to it. The time requirements are again based on personal and business operations needs, and on thousands of federal, state and local laws. Information governance thus became a very complicated legal analysis problem. There are literally thousands of laws requiring certain types of information to be preserved for various lengths of time. Of course, you could comply with most of these laws by simply saving everything forever, but, in the past, that was not a realistic solution. There were severe limits on the ability to save information, and the ability to find it. Also, it was presumed that the older information was, the less value it had. Almost all information was thus treated like news.

These ideas were all firmly entrenched before the advent of Big Data and AI-enhanced data mining. In fact, in today’s world there is good reason for Google to save every search, ever done, forever. Some patterns and knowledge only emerge in time and history. New information is sometimes better information, but not necessarily so. In the world of Big Data all information has value, not just the latest.

These records life-cycle ideas all made perfect sense in the world of paper information. It cost a lot of money to save and store paper records. Everyone with a monthly Iron Mountain paper records storage bill knows that. Even after the computer age began, it still cost a fair amount of money to save and store ESI. The computers needed for digital storage were expensive to buy and maintain. Finding the ESI you needed quickly on a computer was still very difficult and unreliable. All we had at first was keyword search, and that was very ineffective.

Due to the costs of storage, and the limitations of search, tremendous efforts were made by records managers to try to figure out what information was important, or needed, either from a legal perspective or a business necessity perspective, and to save that information, and only that information. The idea behind Information Management was to destroy the ESI you did not need or were not required by law to preserve. This destruction saved you money, and it also made possible the whole point of Information Governance, to find the information you wanted, when you wanted it.

Back in the pre-AI search days, the more information you had, the harder it was to find the information you needed. That still seems like common sense. Useless information was destroyed so that you could find valuable information. In reality, with the new and better algorithms we now have for AI-enhanced search, it is just the reverse. The more information you have, the easier it becomes to find what you want. You now have more information to draw upon.

That is the new reality of Big Data. It is a hard intellectual paradigm to jump, and seems counter-intuitive. It took me a long time to get it. The new ability to save and search everything cheaply and efficiently is what is driving the explosion of Big Data services and products. As the save everything, find anything way of thinking takes over, the classification and deletion aspects of Information Governance will naturally dissipate. The records lifecycle will transform into virtual immortality. There is no reason to classify and delete, if you can save everything and find anything at low cost. The issues simplify; they change to how to save and  search, although new collateral issues of security and privacy grow in importance.

Save and Search v. Classify and Delete

The current clash in basic ideas concerning Big Data and Information Governance is confusing to many business executives. According to Gregory Bufithis, who attended a recent event in Washington D.C. on Big Data sponsored by EMC, one senior presenter explained:

The C Suite is bedeviled by IG and regulatory complexity. … 

The solution is not to eliminate Information Governance entirely. The reports of its complete demise, here or elsewhere, are exaggerated. The solution is to simplify IG. To pare it down to save and search. Even this will take some time, like I said, from five to ten years, although there is some chance this transformation of IG will go even faster than that. This move away from complex regulatory classification schemes, to simpler save and search everything, is already being adopted by many in the high-tech world. To quote Greg again from the private EMC event in D.C. in October, 2014:

Why data lakes? Because regulatory complexity and the changes can kill you. And are unpredictable in relationship to information governance. …

So what’s better? Data lakes coupled with archiving. Yes, archiving seems emblematic of “old” IT. But archiving and data lifecycle management (DLM) have evolved from a storage focus, to a focus on business value and data loss prevention. DLM recognizes that as data gets older, its value diminishes, but it never becomes worthless. And nobody is throwing out anything and yes, there are negative impacts (unnecessary storage costs, litigation, regulatory sanctions) if not retained or deleted when it should be.

But … companies want to mine their data for operational and competitive advantage. So data lakes and archiving their data allows for ingesting and retain all information types, structured or unstructured. And that’s better.

Because then all you need is a good search platform or search system … like Hadoop which allows you to sift through the data and extract the chunks that answer the questions at hand. In essence, this is a step up from OLAP (online analytical processing). And you can use “tag sift sort” programs like Data Rush. Or ThingWorx which is an approach that monitors the stream of data arriving in the lake for specific events. Complex event processing (CEP) engines can also sift through data as it enters storage, or later when it’s needed for analysis.

Because it is all about search.

Recent Breakthroughs in Artificial Intelligence
Make Possible Save Everything, Find Anything

The New York Times in an opinion editorial this week discussed recent breakthroughs in Artificial Intelligence and speculated on alternative futures this could create. Our Machine Masters, NY Times Op-Ed, by David Brooks (October 31, 2014). The Times article quoted extensively another article in the current issue of Wired by technology blogger Kevin Kelly: The Three Breakthroughs That Have Finally Unleashed AI on the World. Kelly argues, as do I, that artificial intelligence has now reached a breakthrough level. This artificial intelligence breakthrough, Kevin Kelly argues, and David Brooks agrees, is driven by three things: cheap parallel computation technologies, big data collection, and better algorithms. The upshot is clear in the opinion of both Wired and the New York Times: “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I. This is a big deal, and now it’s here.”

These three new technology advances change everything. The Wired article goes into the technology and financial aspects of the new AI; it is where the big money is going and will be made in the next few decades. If Wired is right, then this means that in our world of e-discovery, companies and law firms will succeed if, and only if, they add AI to their products and services. The firms and vendors who add AI to document review and project management will grow fast. Vendors with non-AI-enhanced software will go out of business. The law firms that do not use AI tools will shrink and die.

The Times article by David Brooks goes into the sociological and philosophical aspects of the recent breakthroughs in Artificial Intelligence:

Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.  … [E]ngineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.

The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do. For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math. [RCL – and, you might add, better at finding relevant evidence.]

On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments. [RCL – and, you might add, better at equitable notions of justice and at legal imagination.]

In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.

In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.

I’m happy Pandora can help me find what I like. I’m a little nervous if it so pervasively shapes my listening that it ends up determining what I like. [RCL – and, you might add, determining what is relevant, what is fair.]

I think we all want to master these machines, not have them master us.

Although I share the concerns of the NY Times about mastering machines and alternative future scenarios, my analysis of the impact of the new AI is focused on, and limited to, the Law. Lawyers must master the processes of AI-enhanced search for evidence. We must master and use the better algorithms, the better AI-enhanced software, not vice versa. The software does not, nor should it, run itself. Easy buttons in legal search are a trap for the unwary, a first step down a slippery slope to legal dystopia. Human lawyers must never over-delegate our uniquely human insights and abilities. We must train the machines. We must stay in charge and assert our human insights on law, relevance, equity, fairness and justice, and our human abilities to imagine and create new realities of justice for all. I want lawyers and judges to use AI-enhanced machines, but I never want to be judged by a machine alone, nor have a computer alone as a lawyer.

The three big new advances that are enabling better and better AI are nowhere near threatening the jobs of human judges or lawyers, although they will likely reduce their numbers, and will certainly change their jobs. We are already seeing these changes in Legal Search and Information Governance. Thanks to cheap parallel computation, we now have Big Data Lakes stored across thousands of inexpensive cloud computers operating together. This is where open-source software like Hadoop comes in; it makes the big clusters of computers possible. Better algorithms are where AI-enhanced software comes in; they make it possible to use predictive coding effectively and inexpensively to find the information needed to resolve lawsuits. The days of vast numbers of document reviewer attorneys doing linear review are numbered. Instead, we will see a few SMEs working with small teams of reviewers, search experts, and software experts.
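To make the cheap-parallel-computation point concrete, here is a toy sketch in plain Python, not Hadoop itself, of the map-and-reduce idea behind such clusters: split a large document collection into chunks, have separate workers scan each chunk, then combine the partial results. The documents and the search term here are hypothetical, purely for illustration.

```python
# Toy illustration of the map/reduce pattern used by cluster software like
# Hadoop, run locally with Python's multiprocessing. Not a real Hadoop job;
# the documents and the search term are made up for the example.

from multiprocessing import Pool

def count_hits(chunk_and_term):
    """Map step: count documents in one chunk that mention the term."""
    chunk, term = chunk_and_term
    return sum(1 for doc in chunk if term in doc.lower())

if __name__ == "__main__":
    documents = [
        "Email re merger timeline",
        "Weekly cafeteria menu",
        "Merger due diligence checklist",
        "Holiday party invitation",
    ] * 25000  # pretend this is a large collection

    term = "merger"
    chunk_size = 10000
    chunks = [documents[i:i + chunk_size]
              for i in range(0, len(documents), chunk_size)]

    # Each chunk is scanned by a separate worker process.
    with Pool() as pool:
        partial_counts = pool.map(count_hits, [(c, term) for c in chunks])

    # Reduce step: combine the partial results.
    print("documents mentioning the term:", sum(partial_counts))
```

A real cluster does the same thing, only the chunks live on thousands of machines instead of in one process pool, which is what makes searching Big Data Lakes fast and cheap.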

The role of Information Managers will also change drastically. Because of Big Data, cheap parallel computing, and better algorithms, it is now possible to save everything, forever, at a small cost, and to quickly search and find what you need. The new reality of Save Everything, Find Anything undercuts most of the rationale of Information Governance. It is all about search now.

Conclusion

Now that storage costs are negligible, and search is far more efficient, the twin motivators of Information Science, to classify and to destroy, are gone, or soon will be. The key remaining tasks of Information Governance are preservation and search, plus the relatively new ones of security and privacy. I recognize that the declining importance of ESI destruction could change if more governments enact laws that require the destruction of ESI, as the EU has done with Facebook posts and the so-called “right to be forgotten” law. But for now, most laws are about saving data for various periods, and do not require that data be destroyed. Note that the new Delaware law on data destruction, House Bill No. 295 – The Safe Destruction of Documents Containing Personal Identifying Information, still makes the destruction of personal data discretionary. It only imposes legal burdens and liability for failures to properly destroy data. This liability for mistakes in destruction serves to discourage data destruction, not encourage it.

Preservation is not too difficult when you can economically save everything forever, so the challenging task remaining is really just one of search. That is why I say that Information Governance will become a sub-set of search. The save everything forever model will, however, create new legal work for lawyers. The cybersecurity protection and privacy aspects of Big Data Lakes are already creating many new legal challenges and issues. More legal issues are sure to arise with the expansion of AI.

Automation, including this latest Second Machine Age of mental process automation, does not eliminate the need for human labor. It just makes our work more interesting and opens up more time for leisure. Automation has always created new jobs as fast as it has eliminated old ones. The challenge for existing workers like us is to learn the new skills necessary to do the new jobs. For us e-discovery lawyers and techs, this means, among other things, acquiring new skills to use AI-enhanced tools. One such skill, HCIR (human-computer information retrieval), is mentioned in most of my articles on predictive coding. It involves new skill sets in active machine learning to train a computer to find the evidence you want in large collections of data, typically emails. When I was a law student in the late 1970s, I could never have dreamed that this would be part of my job as a lawyer in 2014.
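To give a feel for what that HCIR skill looks like in practice, here is a minimal sketch of an active machine learning loop, assuming scikit-learn and a handful of hypothetical email snippets: an SME labels a small seed set, a simple model is trained, and the machine surfaces the documents it is least certain about for the SME to review next. This illustrates the general technique, not any vendor's actual predictive coding product.

```python
# Minimal active learning sketch for relevance ranking of documents.
# Assumes scikit-learn; the email snippets and labels are hypothetical.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "Re: quarterly revenue forecast attached",
    "Lunch on Friday?",
    "Draft agreement for the acquisition, please review",
    "Fantasy football picks for this week",
    "Board minutes discussing the merger terms",
    "Reminder: parking garage closed Monday",
]

# SME's initial relevance calls on a small seed set (1 = relevant, 0 = not).
seed_indices = [0, 1, 2, 3]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

model = LogisticRegression()
model.fit(X[seed_indices], seed_labels)

# Score the unreviewed documents and surface the most uncertain ones
# (probability closest to 0.5) for the SME to label in the next round.
unreviewed = [i for i in range(len(documents)) if i not in seed_indices]
probs = model.predict_proba(X[unreviewed])[:, 1]
uncertainty_order = np.argsort(np.abs(probs - 0.5))

for rank in uncertainty_order:
    i = unreviewed[rank]
    print(f"doc {i}: p(relevant)={probs[rank]:.2f} -> {documents[i]}")
```

In a real project this loop repeats over many rounds with far more documents, and the SME's judgment on law and relevance remains the input that trains the machine, which is exactly the point about lawyers staying in charge.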

The new jobs do not rely on physical or mental drudgery and repetition. Instead, they put a premium on what makes us distinctly human: our deep knowledge, understanding, wisdom, and intuition; our empathy, caring, love, and compassion; our morality, honesty, and trustworthiness; our sense of justice and fairness; our ability to change and adapt quickly to new conditions; our likability, good will, and friendliness; our imagination, art, and creativity. Yes, even our individual eccentricities, and our all-important sense of humor. No matter how far we progress, let us never lose that! Please be governed accordingly.

