Predictive Coding 3.0

 

Before describing the new version of the Predictive Coding methodology shown in the chart animation, version 3.0, this blog will review the prior versions predominantly used in the e-discovery world, including the main patents involved. The more recent U.S. patents of Maura Grossman and Gordon Cormack will also be reviewed. Their work seems fairly close to Predictive Coding 3.0, although we have no affiliation whatsoever, except for the fact that I am one of the many admirers of their research and writings.

Overview of the Three Generations of Predictive Coding Software

First generation Predictive Coding, version 1.0, entered the market in 2009. It used active machine learning, with a methodology requirement built into the software that you begin the review by having an SME code a random selection of several thousand documents. The random documents included a secret set of documents, not identified to the user, any user, called a control set. The secret control set supposedly allowed you to objectively monitor your progress in Recall and Precision of the relevant documents from the total set. It also supposedly prevented lawyers from gaming the system. Version 1.0 software also had two distinct stages, one for training and another for review. The next generation, version 2.0, combined the two stages into one, so that training continued throughout the review. Predictive Coding 3.0 keeps that single combined stage, but also eliminates the secret control set. Random sampling itself remains; that is the third step in the eight-step version 3.0 process. But the secret set of random documents, the control set, is eliminated.

Although the use of a control set is basic to all scientific research and statistical analysis, it does not work in legal search. The EDRM, which apparently still promotes the use of a methodology with control sets, explains that the control set:

… is a random sample of documents drawn from the entire collection of documents, usually prior to starting Assisted Review training rounds. … The control set is coded by domain experts for responsiveness and key issues. … [T]he coded control set is now considered the human-selected ground truth set and used as a benchmark for further statistical measurements we may want to calculate later in the project. As a result, there is only one active control set in Assisted Review for any given project. … [C]ontrol set documents are never provided to the analytics engine as example documents. Because of this approach, we are able to see how the analytics engine categorizes the control set documents based on its learning, and calculate how well the engine is performing at the end of a particular round. The control set, regardless of size or type, will always be evaluated at the end of every round—a pop quiz for Assisted Review. This gives the Assisted Review team a great deal of flexibility in training the engine, while still using statistics to report on the efficacy of the Assisted Review process.

Control Sets: Introducing Precision, Recall, and F1 into Relativity Assisted Review (a kCura white paper adopted by EDRM).
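
To make that "pop quiz" concrete, here is a minimal sketch, with invented counts and a hypothetical scoring threshold, of the precision, recall, and F1 arithmetic a control set comparison performs after each training round. Note that every number it produces is measured against the SME's coding of the control set at the very start of the project, which is precisely the problem discussed below.

```python
# A minimal sketch, with invented counts, of the precision / recall / F1 arithmetic a
# control set "pop quiz" performs after a training round. The engine's predictions are
# compared against the SME's earlier coding of the control set, so the scores are only
# as good as that early "ground truth" coding.
def control_set_metrics(control_set, predict):
    tp = sum(1 for d in control_set if d["sme_relevant"] and predict(d))
    fp = sum(1 for d in control_set if not d["sme_relevant"] and predict(d))
    fn = sum(1 for d in control_set if d["sme_relevant"] and not predict(d))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy control set: 500 documents, 25 coded relevant by the SME before training began.
control = [{"id": i, "sme_relevant": i < 25, "score": 0.9 if i < 20 or i % 50 == 0 else 0.1}
           for i in range(500)]
p, r, f1 = control_set_metrics(control, predict=lambda d: d["score"] >= 0.5)
print(f"Precision {p:.0%}, Recall {r:.0%}, F1 {f1:.2f}")
```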

The original white paper written by David Grossman, entitled Measuring and Validating the Effectiveness of Relativity Assisted Review, is cited by EDRM as support for their position on the validity and necessity of control sets. In fact, the paper does not support this proposition. The author of this Relativity White Paper, David Grossman, is a Ph.D. now serving as the associate director of the Georgetown Information Retrieval Laboratory, a faculty affiliate at Georgetown University, and an adjunct professor at IIT in Chicago. He is a leading expert in text retrieval and has no connection with Relativity except for writing this one small paper. I spoke with David Grossman on October 30, 2015. He confirmed that the validity, or not, of control sets in legal search was not the subject of his investigation. His paper does not address this issue. In fact, he has no opinion on the validity of control sets in the context of legal search.

David’s one study for kCura was limited to the narrow questions of: (1) whether statistical sampling creates representative samples, and (2) whether the retrieval of relevant documents improved during two rounds of predictive coding type training. The first question was very basic and the answer was, of course, yes, sampling works. The issue of control sets was not considered. Even though control sets were mentioned, it was never his intent to measure their effectiveness per se.

The second issue was also very basic, and his answer again was, of course, yes, training works. Still, he carefully qualified that answer and concluded only that he observed “improved effectiveness with almost each new round that was tried in our testing.” Measuring and Validating the Effectiveness of Relativity Assisted Review at pg. 5. In my conversations with David he also confirmed that he did not design any of the Relativity software nor any of its methods. He was also unaware of the controversies in legal search, including the effectiveness of using control sets, and my view that the “ground truth” at the beginning of a search project was more like quicksand. Although David Grossman has never done a legal search project, he has done many other types of real-world searches. He volunteered that he has frequently had that same quicksand type of experience, where the understanding of relevance evolves as the search progresses.

The problem with the use of the control set in legal search is that the SMEs, what EDRM here refers to as the domain experts, never know the full truth of document responsiveness at the beginning of a project. This is something that evolves over time. The understanding of relevance changes over time, changes as particular documents are examined. The control set fails and creates false results because “the human-selected ground truth set and used as a benchmark for further statistical measurements” is never correct, especially at the beginning of a large review project. Only at the end of a project are we in a position to determine a “ground truth” and “benchmark” for statistical measurements.

This problem was recognized by another information retrieval expert, William Webber, PhD. William does have experience with legal search and has been kind enough to help me through technical issues involving sampling many times on this blog. Here is how Dr. Webber puts it in his blog Confidence intervals on recall and eRecall:

Using the control set for the final estimate is also open to the objection that the control set coding decisions, having been made before the subject-matter expert (SME) was familiar with the collection and the case, may be unreliable.

Having done many reviews myself, and served as the SME on most of them, I am much more emphatic than William and do not couch my opinion with “may be unreliable.” To me there is no question that at least some of the SME control set decisions at the start of a review are almost certainly unreliable.

Another reason control sets fail in legal search is the very low prevalence typical of the ESI collections searched. We only see high prevalence when the document collection has been keyword filtered. The prevalence of the original collections is always low, usually less than 5%, and often less than 1%. About the highest prevalence collection I have ever searched was the Oracle collection in the EDI search contest, and it had obviously been heavily filtered by a variety of methods. That is not a best practice because the filtering often removes relevant documents from the collection, making it impossible for predictive coding to ever find them. See, e.g., William Webber’s analysis of the Biomet case, where this kind of keyword filtering was used before predictive coding began. What is the maximum recall in re Biomet?, Evaluating e-Discovery (4/24/13). Webber shows that in Biomet this method first filtered out over 40% of the relevant documents. This doomed the second-filter predictive coding review to a maximum possible recall of 60%, even if it was otherwise perfect, meaning it would otherwise have attained 100% recall, which (almost) never happens. The Biomet case thus very clearly shows the dangers of over-reliance on keyword filtering.
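
The arithmetic behind that recall ceiling is worth spelling out. Here is a small worked sketch, with round numbers assumed for illustration only (they are not the actual Biomet figures):

```python
# Two-filter arithmetic, with assumed round numbers (not the actual Biomet counts).
relevant_in_collection = 10_000        # relevant documents before any culling
fraction_lost_to_keywords = 0.40       # Webber's estimate: keyword culling dropped ~40%

max_recall = 1.0 - fraction_lost_to_keywords
unrecoverable = int(relevant_in_collection * fraction_lost_to_keywords)

# Even a perfect predictive coding pass over the keyword-filtered set can never see
# the relevant documents the first filter already discarded.
print(f"Maximum achievable recall after keyword culling: {max_recall:.0%}")   # 60%
print(f"Relevant documents permanently out of reach: {unrecoverable:,}")      # 4,000
```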

The control set approach cannot work in legal search because the size of the random sample, much less the portion of the sample allocated to the control set, is never even close to large enough to include a representative document from each type of relevant document in the corpus, much less the outliers. So even if the benchmark were not on such shifting grounds, it would still fail because it is incomplete. The likely result is overfitting to the document types that happened to land in the control set, which is exactly what the control set is supposed to prevent. This kind of overfitting can and does happen even without exact knowledge of the documents in the control set. That is an additional problem, separate and apart from relevance shift. It is a problem solved by the multimodal search aspects of Predictive Coding 3.0.
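
A back-of-the-envelope calculation shows the scale of the problem. The numbers below are invented for illustration, but they are typical of what I am describing: low prevalence and a control set of a few hundred documents.

```python
# Back-of-the-envelope illustration (invented numbers) of why a control set is too small
# to contain a representative example of every kind of relevant document.
collection_size, prevalence, control_size, relevant_types = 1_000_000, 0.01, 500, 20
relevant_total = int(collection_size * prevalence)
per_type = relevant_total // relevant_types      # assume the 20 types are evenly sized

expected_relevant_in_control = control_size * prevalence
# Chance a given type is entirely absent from the control set (binomial approximation,
# since the control set is tiny relative to the collection).
p_type_missed = (1 - per_type / collection_size) ** control_size

print(f"Expected relevant documents in the control set: {expected_relevant_in_control:.0f}")
print(f"Chance any given relevant document type has zero examples: {p_type_missed:.0%}")
print(f"Expected number of the {relevant_types} types with zero examples: "
      f"{relevant_types * p_type_missed:.1f}")
```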

Again, William Webber has addressed this issue in his typical understated manner. He points out in Why training and review (partly) break control sets the futility of using control sets to measure effectiveness because the sets are incomplete:

Direct measures of process effectiveness on the control set will fail to take account of the relevant and irrelevant documents already found through human assessment.

A naïve solution to this problem is to exclude the already-reviewed documents from the collection; to use the control set to estimate effectiveness only on the remaining documents (the remnant); and then to combine estimated remnant effectiveness with what has been found by manual means. This approach, however, is incorrect: as documents are non-randomly removed from the collection, the control set ceases to be randomly representative of the remnant. In particular, if training (through active learning) or review is prioritized towards easily-found relevant documents, then easily-found relevant documents will become rare in the remnant; the control set will overstate effectiveness on the remnant, and hence will overstate the recall of the TAR process overall. …

In particular, practitioners should be wary about the use of control sets to certify the completeness of a production—besides the sequential testing bias inherent in repeated testing against the one control set, and the fact that control set relevance judgments are made in the relative ignorance of the beginning of the TAR process. A separate certification sample should be preferred for making final assessments of production completeness.
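
Webber's point about the remnant can be seen in a toy simulation. The sketch below uses made-up numbers and a deliberately crude "classifier"; it is only meant to show the mechanism he describes, namely that once prioritized review has pulled out the easy relevant documents, a control set drawn from the original collection overstates how well the process will do on what remains.

```python
# Toy simulation (assumed numbers, not from Webber's post) of why a control set drawn
# before review overstates effectiveness on the remnant once prioritized review has
# removed the easily-found relevant documents.
import random

random.seed(42)

# 100,000 documents, 2% relevant; half of the relevant documents are "easy" to find.
collection = []
for i in range(100_000):
    relevant = random.random() < 0.02
    easy = relevant and random.random() < 0.5
    collection.append({"id": i, "relevant": relevant, "easy": easy})

# Control set: a simple random sample drawn before any review begins.
control_set = random.sample(collection, 1_000)

# Prioritized review finds and removes 90% of the easy relevant documents.
reviewed_ids = {d["id"] for d in collection if d["easy"] and random.random() < 0.9}
remnant = [d for d in collection if d["id"] not in reviewed_ids]

# A crude classifier that only ever recognizes the "easy" relevant documents.
def predicted_relevant(doc):
    return doc["easy"]

def recall(docs):
    rel = [d for d in docs if d["relevant"]]
    return sum(predicted_relevant(d) for d in rel) / len(rel)

print(f"Recall as measured on the control set: {recall(control_set):.0%}")  # looks healthy
print(f"Recall actually achievable on remnant: {recall(remnant):.0%}")      # much lower
```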

Control sets are a good idea in general, and the basis of most scientific research, but they simply do not work in legal search. They were built into the version 1.0 software by engineers and scientists who had little understanding of legal search. They apparently had, and some still have, no real grasp at all as to how relevance is refined and evolves during the course of any large document review, nor of the typically low prevalence of relevance. The normal distribution in probability statistics is just never found in legal search. The whole theory behind the secret control set myth in legal search is that the initial relevance coding of these documents was correct, immutable and complete; that it should be used to objectively judge the rest of the coding in the project. That is not true. In point of fact, many documents determined to be relevant or irrelevant at the beginning of a project may be considered the reverse by the end. Many more types of relevant documents are never even included in the control set. That is not because of bad luck or a weak SME, but because of the natural progression of the understanding of the probative value of various types of documents over the course of a review. It is also because of the natural rarity of relevant evidence in unfiltered document collections.

All experienced lawyers know how relevance shifts during a case. But the scientists and engineers who designed the first generation software did not know this, and anyway, it contravened their dogma of the necessity of control sets. They could not bend their minds to the reality of indeterminate, rare legal relevance. In legal search the target is always moving and always small. Also, the data itself can often change as new documents are added to the collection. In other areas of information retrieval, the target is solid granite, simple Newtonian, and big, or at least bigger than just a few percent. Outside of legal search it may make sense to talk of an immutable ground truth. In legal search the ground truth is discovered. It emerges as part of the process, often including surprise court rulings and amended causes of action. It is in flux. The truth is rare. The truth is relative.

The parallels of legal search with quantum mechanics are obvious. The documents have to be observed before they will manifest with certainty as either relevant or irrelevant. Uncertainty is inherent to information retrieval in legal search. Get used to it. That is reality on many levels, including the law.

The control set based procedures were not only over-complicated, they were inherently defective. They were based on an illusion of certainty, an illusion of a ground truth benchmark magically found at the beginning of a project before document review even began. There were supposedly SME wizards capable of such prodigious feats. I have been an SME on many, many topics of legal relevance in my 38-plus years of legal practice. SMEs are human, all too human. There is no magic wizard behind the curtain. Moreover, the understanding of any good SME naturally evolves over time as previously unknown, unseen documents are unearthed and analyzed. Legal understanding is not static. The theory of a case is not static. All experienced trial lawyers know this. The case you start out with is never the one you end up with. You never really know if Schrodinger’s cat is alive or dead. You get used to that after a while. Certainty comes from the final rulings of the last court of appeals.

The use of magical control sets doomed many a predictive coding project to failure. Project team leaders thought they had high recall, because the secret control set said they did, yet they still missed key documents. They still had poor recall and poor precision, or at least far less than their control set analysis led them to believe. See: Webber, The bias of sequential testing in predictive coding, June 25, 2013, (“a control sample used to guide the producing party’s process cannot also be used to provide a statistically valid estimate of that process’s result.”) I still hear stories from reviewers who find precision of less than 50% using Predictive Coding 1.0 methods, sometimes far less. That always seems shocking to me, unbelievable, as I have never had a predictive coding project (where I have, of course, always used these 3.0 methods) with less than 80% precision, and many times reviewers find 95%-plus precision.

Many attorneys who worked with predictive coding software version 1.0, even where they did not see their projects overtly crash and burn, as when missed smoking-gun documents later turn up, or where reviewers see embarrassingly low precision, were nonetheless suspicious of the results. Even if not suspicious, they were discouraged by the complexity and the arcane control set process from ever trying predictive coding again. As attorney and search expert J. William (Bill) Speros likes to say, they could smell the junk science in the air. They were right. I do not blame them for rejecting predictive coding 1.0. I did too. But unlike many, I created my own method, here called version 3.0.

At first I could not understand why so many of my search expert friends did not enjoy the same level of success that I did, or Maura Grossman did, or a few others like us in the industry. In fact, I heard more complaints about predictive coding than praise. I have finally understood (yes, I admit to being fairly slow on this realization) that they were following the version 1.0 predictive coding methods of the vendors they used. That explained their failures, their frustrations. I never did follow the 1.0 procedures. Maura Grossman never even used any of the vendor software. The many frustrated with predictive coding 1.0 were also told by some vendors to leave behind their other search skills and tools, and just use predictive coding type searches. I have always rejected this too, and instead used a multimodal approach.

The control set fiction also put an unnecessarily heavy burden upon SMEs. They were supposed to review thousands of random documents at the beginning of a project, sometimes tens of thousands, and successfully classify them, not only for relevance, but sometimes also for a host of sub-issues. Some gamely tried, and went along with the pretense of omnipotence. After all, the documents in the control set were kept secret, so no one would ever know whether any particular document they coded was correct or not. But most SMEs simply refused to spend days and days coding random documents. They refused to play the pretend wizard game. They correctly intuited that they had better things to do with their time, plus many clients did not want to spend over $500 per hour to have their senior trial lawyers reading random emails, most of which would be irrelevant. So the SMEs would delegate this tedious task to other, less experienced attorneys, ones who were even less qualified to play God.

I have heard many complaints from lawyers that predictive coding was too complicated and did not work for them. These complaints were justified. The control set and two-step review process were the culprits, not the active machine learning process. The control set has done great harm to the legal profession. As one of the few writers in e-discovery free from vendor influence, much less control (you will never see any ads here), I am here to blow the whistle, to put an end to the vendor hype. No more secret control sets. Let us simplify and get real. Lawyers who have tried predictive coding before and given up, come back and try Predictive Coding 3.0. It has been purged of vendor hype and bad science, and has proven effective many times.

Vendors – Do you want to increase your business and your predictive coding users? Then make sure your software will work with Predictive Coding 3.0 and make sure your experts understand 3.0 methods. Mr. EDR already allows for use of version 3.0, and the Kroll Ontrack experts now know how to use these methods with him. But even Mr. EDR, my current favorite software, needs to be improved and purged of his needless control set complexities. Predictive Coding 3.0 is much simpler, and more accurate, than any prior method.

Users – If your vendor is version 3.0 compliant, then come back and give predictive coding another try. I am sure you will be pleasantly surprised this time.

Version 3.0 is CAL-Based and Control-Free

Version 1.0 type software, which is still being manufactured by many vendors today, has a built-in two-step process, as mentioned earlier. It requires you first to train the documents, and then, after training, to review a certain number of ranked documents, as guided by your control set recall calculations. Version 2.0 of Predictive Coding eliminated the two-step process and made the training continuous. For that reason version 2.0 is also called continuous active learning, or CAL. It did not, however, explicitly reject the random sample step and its control set nonsense.

Predictive Coding 3.0 builds on the CAL improvements in 2.0, but also eliminates the secret control set and the mandatory initial review of a random sample for this set. This and other process improvements in Predictive Coding 3.0 significantly reduce the burden on busy SMEs, significantly improve the recall estimates, and thus improve the quality of the reviews.

In Predictive Coding 3.0 the secret control set basis of recall calculation is replaced with a prevalence-based random sample guide and elusion-based quality control samples. These can be done with contract lawyers and only minimal involvement by the SME. See Zero Error Numerics. The final elusion-type recall calculation is done at the end of the project, when final relevance has been determined. See: EI-Recall. Moreover, in the 3.0 process the sample documents are not secret. They are known, and they are adjusted as the definitions of relevance change over time, to better control your recall range estimates. That is a major improvement.
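
For readers who want to see the shape of the replacement math, here is a minimal sketch of an elusion-based recall estimate of the general kind referenced above. The exact EI-Recall formula is set out in that article; the counts below are invented and the confidence interval uses a simple normal approximation, so treat this only as an illustration of the idea: sample the discard pile at the end of the project, project the misses, and report recall as a range.

```python
# A minimal sketch of an elusion-based recall estimate (illustrative only; see the
# EI-Recall article for the actual published method). Counts are invented.
from math import sqrt

def elusion_recall(relevant_found, null_set_size, sample_size, relevant_in_sample, z=1.96):
    """Estimate end-of-project recall from an elusion sample of the discard (null) set,
    with a normal-approximation confidence interval on the elusion rate."""
    p = relevant_in_sample / sample_size                      # observed elusion rate
    margin = z * sqrt(p * (1 - p) / sample_size)
    rates = (min(p + margin, 1.0), p, max(p - margin, 0.0))   # high, point, low elusion

    def recall(rate):
        projected_misses = rate * null_set_size               # false negatives left behind
        return relevant_found / (relevant_found + projected_misses)

    return tuple(recall(r) for r in rates)                    # low, point, high recall

# Example: 9,000 relevant documents found and coded; 240,000 documents set aside as
# irrelevant; a 1,534-document elusion sample of the discards turns up 3 relevant misses.
low, point, high = elusion_recall(9_000, 240_000, 1_534, 3)
print(f"Estimated recall: {point:.0%} (roughly {low:.0%} to {high:.0%})")
```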

The secret control set never worked, and it is high time that it be expressly abandoned, because: (1) relevance is never static, it changes over the course of the review; (2) the random selection size was typically too small for statistically meaningful calculations; (3) the random selection was typically too small in low prevalence collections (the vast majority in legal search) for complete training selections; and (4) it supposedly required a senior SME’s personal attention for days of document review work, a mission impossible for most e-discovery teams.

Predictive Coding 1.0 and the First Patents

When predictive coding first entered the legal marketplace in 2009, the legal methodology used by lawyers for predictive coding was dictated by the software manufacturers, mainly the engineers who designed the software. See, e.g., Leading End-to-End eDiscovery Platform Combines Unique Predictive Coding Technology with Random Sampling to Revolutionize Document Review (2009 Press Release). Recommind was an early leader, which is one reason I selected them for the Da Silva Moore v. Publicis Groupe case back in 2011. On April 26, 2011, Recommind was granted a patent for predictive coding: Patent No. 7,933,859, entitled Systems and methods for predictive coding. The search algorithms in the patent used Probabilistic Latent Semantic Analysis, an already well-established statistical analysis technique for data analysis. (Recommind obtained two more patents with the same name in 2013: Patent No. 8,489,538 on July 16, 2013; and Patent No. 8,554,716 on October 8, 2013.)

As the titles of all of these patents indicate, the methods of use of the text analytics technology in the software were key to the patent claims. As is typical for patents, many different method variables were described to try to obtain as wide a protection as possible. The core method was shown in Figure Four of the 2011 patent.

[Figure Four of the 2011 Recommind patent]

This essentially describes the method that I now refer to as Predictive Coding Version 1.0. It is the work flow I had in mind when I first designed procedures for the Da Silva Moore case. In spite of the Recommind patent, this basic method was followed by all vendors who added predictive coding features to their software in 2011, 2012 and thereafter. It is still going on today. Many of the other vendors also received patents for their predictive coding technology and methods, or have applications pending. See, e.g., Equivio, patent applied for on June 15, 2011 and granted on September 10, 2013, patent number 8,533,194; Kroll Ontrack, application 20120278266, April 28, 2011.

To my knowledge there has been no litigation between vendors. My guess is they all fear invalidation on the basis of lack of innovation and prior art.

The engineers, statisticians and scientists who designed the first predictive coding software are the people who dictated to lawyers how the software should be used in document review. None of the vendors seemed to have consulted practicing lawyers in creating these version 1.0 methods. I know I was not involved.

[Photo: Losey in 2011, when first arguing against the methods of version 1.0]

I also remember getting into many arguments with these technical experts from several companies back in 2011. That was when the predictive coding 1.0 methods hardwired into their software were first explained to me. I objected right away to the secret control set. I wanted total control of my search and review projects. I resented the secrecy aspects. There were enough black boxes in the new technology already. I was also very dubious of the statistical projections. In my arguments with them, sometimes heated, I found that they had little real grasp of how legal search was actually conducted or of the practice of law. My arguments were of no avail. And to be honest, I had a lot to learn. I was not confident of my positions, nor knowledgeable enough about statistics. All I knew for sure was that I resented their trying to control my well-established, pre-predictive coding search methods. Who were they to dictate how I should practice law, what procedures I should follow? These scientists did not understand legal relevance, nor how it changes over time during the course of any large-scale review. They did not understand the whole notion of the probative value of evidence and the function of e-discovery as trial preparation. They did not understand weighted relevance, and the 7±2 rule of judge and jury persuasion. I gave up trying, and just had the software modified to suit my needs. They would at least agree to do that to placate me.

Part of the reason I gave up trying back in 2011 is that I ran into a familiar prejudice from this expert group. It was a prejudice against lawyers common to most academics and engineers. As a high-tech lawyer since 1980 I have faced this prejudice from non-lawyer techies my whole career. They assumed we were all just a bunch of weasels, not to be trusted, and with little or no knowledge of technology and search. They had no idea at all about legal ethics or professionalism, nor of our experience with the search for evidence. They failed to understand the central role of lawyers in e-discovery, and how our whole legal system, not just discovery, is based on the honesty and integrity of lawyers. We needed good software from them, not methods on how to use the software, but they knew better. It was frustrating, believe me. So I gave up on the control set arguments and moved on. Until today.

In the arrogance of the first designers of predictive coding, an arrogance born of advanced degrees in entirely different fields, these information scientists and engineers presumed they knew enough to tell all lawyers how to use predictive coding software. They were blind to their own ignorance. The serious flaws inherent in Predictive Coding Version 1.0 are the result.

Predictive Coding Version 2.0 Adopts CAL

The first major advance in predictive coding methodology was to eliminate the dual task phases present in Predictive Coding 1.0. The first phase of the two-fold version 1.0 procedure was to use active learning to train the classifier. This would take several rounds of training, and eventually the software would seem to understand what you were looking for. Your concept of relevance would be learned by the machine. Then the second phase would begin. In phase two you actually reviewed the documents that met the ranking criteria. In other words, you would use predictive coding in phase one to cull out the probable irrelevant documents, and then you would be done with predictive coding. (In some applications you might continue to use predictive coding for reviewer batch assignment purposes only, but not for training.) Phase two was all about review to confirm the predicted classification, usually relevance. In phase two you would just review, and not also train.

In my two ENRON experiments in 2012 I did not follow this two-step procedure. I just kept on training until I could not find any more relevant documents. A Modest Contribution to the Science of Search: Report and Analysis of Inconsistent Classifications in Two Predictive Coding Reviews of 699,082 Enron Documents (Part One); Comparative Efficacy of Two Predictive Coding Reviews of 699,082 Enron Documents (Part Two); Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron (in PDF form, and the blog introducing this 82-page narrative, with a second blog regarding an update); Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology (a five-part written and video series comparing two different kinds of predictive coding search methods).

I did not think much about it at the time, but by continuing to train I made what was, to me, a perfectly reasonable departure from the version 1.0 method. I was using what is now promoted as the new and improved Predictive Coding 2.0. In this 2.0 version you combine training and review. The training is continuous. The first round of document training might be called the seed set, if you wish, but it is nothing particularly special. All rounds of training are important, and the training should continue as the review proceeds, unless there are some logistical reasons not to. After all, training and review are both part of the same review software, or should be. It just makes good common sense to do that, if your software allows you to. If you review a document, then you might as well at least have the option to include it in the training. There is no logical reason for a cut-off point in the review process where training stops. I really just came up with that notion in Da Silva for simplicity’s sake.

In predictive coding 2.0 you do Continuous Active Learning, or CAL for short, a term which was, I think, first coined by Gordon Cormack and Maura Grossman. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014. It just makes much more sense to keep training as long as you can, if your software allows you to do that.

There are now several vendors that promote the capacity of continuous training and have it built into their review software, including Kroll Ontrack. The vendor most vocal about it, however, and the one who promotes the term Predictive Coding 2.0, is Catalyst. Apparently many vendors still use the old dual task, stop training approach of version 1.0. And, most vendors still use, or at least give lip service to, the previously sacrosanct random secret control set features of version 1.0.

[Photo: John Tredennick]

The well-known Denver law technology sage, John Tredennick, CEO of Catalyst, writes about 2.0 continuously. Here is just one of many good explanations John has made about CAL, this one from his article with the catchy name “A TAR is Born: Continuous Active Learning Brings Increased Savings While Solving Real-World Review Problems” (note these diagrams are his, not mine, and he here calls predictive coding TAR):

How Does CAL Work?

CAL turns out to be much easier to understand and implement than the more complicated protocols associated with traditional TAR reviews.

[Diagram: TAR 1.0 workflow, from Tredennick]

A TAR 1.0 review is typically built around the following steps:

1. A subject matter expert (SME), often a senior lawyer, reviews and tags a sample of randomly selected documents to use as a “control set” for training.
2. The SME then begins a training process using Simple Passive Learning or Simple Active Learning. In either case, the SME reviews documents and tags them relevant or non-relevant.
3. The TAR engine uses these judgments to build a classification/ranking algorithm that will find other relevant documents. It tests the algorithm against the control set to gauge its accuracy.
4. Depending on the testing results, the SME may be asked to do more training to help improve the classification/ranking algorithm.
5. This training and testing process continues until the classifier is “stable.” That means its search algorithm is no longer getting better at identifying relevant documents in the control set.

Even though training is iterative, the process is finite. Once the TAR engine has learned what it can about the control set, that’s it. You turn it loose to rank the larger document population (which can take hours to complete) and then divide the documents into categories to review or not. There is no opportunity to feed reviewer judgments back to the TAR engine to make it smarter.

TAR 2.0: Continuous Active Learning

In contrast, the CAL protocol merges training with review in a continuous process. Start by finding as many good documents as you can through keyword search, interviews, or any other means at your disposal. Then let your TAR 2.0 engine rank the documents and get the review team going.

[Diagram: TAR 2.0 continuous active learning workflow, from Tredennick]

As the review progresses, judgments from the review team are submitted back to the TAR 2.0 engine as seeds for further training. Each time reviewers ask for a new batch of documents, they are presented based on the latest ranking. To the extent the ranking has improved through the additional review judgments, reviewers receive better documents than they otherwise would have.
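
For those who like to see the loop itself, here is a toy, runnable sketch of a CAL-style workflow. The "model" is deliberately trivial, just word overlap with the documents coded relevant so far, and the corpus, scores, and stopping rule are all invented for illustration; no vendor's engine works this simply. The point is the shape of the process: start from documents found by other means, re-rank after every batch, feed every reviewer judgment back in, and never pause for a separate training phase or consult a control set.

```python
# A toy, runnable sketch of a CAL-style loop (everything here is invented for
# illustration; real engines use real classifiers). The shape is the point: reviewer
# judgments feed the next ranking immediately, with no separate training phase and no
# control set.

def score(doc, relevant_docs):
    """Rank by word overlap with everything coded relevant so far."""
    if not relevant_docs:
        return 0
    vocab = set().union(*(d["text"].split() for d in relevant_docs))
    return len(set(doc["text"].split()) & vocab)

def cal_review(collection, judge, seed_relevant, batch_size=5, patience=3):
    relevant, coded, dry = list(seed_relevant), [], 0
    remaining = list(collection)
    while remaining and dry < patience:
        # Re-rank the remaining documents after every batch: training is continuous.
        remaining.sort(key=lambda d: score(d, relevant), reverse=True)
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        hits = [d for d in batch if judge(d)]        # the human reviewer codes the batch
        relevant.extend(hits)
        coded.extend(batch)
        dry = 0 if hits else dry + 1                 # stop after several batches with no hits
    return relevant, coded

# Tiny demonstration corpus; the lambda stands in for the human reviewer.
docs = [{"id": i, "text": "fraud wire transfer" if i % 10 == 0 else "lunch schedule memo"}
        for i in range(200)]
seed = [d for d in docs if "wire" in d["text"]][:2]       # the multimodal start: keyword hits
found, coded = cal_review([d for d in docs if d not in seed],
                          judge=lambda d: "fraud" in d["text"], seed_relevant=seed)
print(f"Found {len(found)} of the 20 relevant documents after coding {len(coded) + len(seed)} of 200")
```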

After this blog was first published, John contacted me and said his software never had a control set, and so in this sense his Catalyst software is already fully Predictive Coding 3.0 compliant. Even if your software has control set features, you can probably still disable them. That is what I do with the Kroll Ontrack software that I typically use (see, e.g., MrEDR.com). I am talking about a method of use here, not a specific algorithm, nor a patentable invention. So unless the software you use forces you to do a two-step process, or makes you use a control set, you can use these version 3.0 methods with it. Still, some modifications of the software would be advantageous to streamline and simplify the whole process inherent in Predictive Coding 3.0. For this reason I call on all software vendors to eliminate the secret control set and the dual-step process now.

Version 3.0 Rejects the Use of Control and Seed Sets

The main problem for me with the 1.0 work-flow methodology for Predictive Coding was not the two-fold nature of train then review, which is what 2.0 addressed, but its dependence on the creation of a secret control set and seed set at the beginning of a project. That is the box labeled 430 in Figure Four of the Recommind patent. It is also shown in Tredennick’s Version 1.0 diagram above as the control set and seed set. The need for a random secret control set and seed set became an article of faith based on black letter statistics rules. Lawyers just accepted it without question as part of version 1.0 predictive coding. It is also one reason that the two-fold method of train then review, instead of CAL 2.0, is taking so long for some vendors to abandon.

Based on my experience and experiments with predictive coding methods since 2011, the random control set and seed set are both unnecessary. The secret control set is especially suspect. It does not work in real-world legal review projects, or worse, it provides statistical misinformation as to recall. As mentioned, that is primarily because in the real world of legal practice relevance is a continually evolving concept. It is never the same at the beginning of a project, when the control set is created, as at the end. The engineers who designed version 1.0 simply did not understand that. They were not lawyers and did not appreciate the flexibility of relevance. They did not know about concept drift. They did not understand the inherent vagaries and changing nature of the search target in a large document review project. They also did not understand how human the SMEs were, how they often disagree with themselves on the classification of the same document, even without concept drift. As mentioned, they were also blinded by their own arrogance, tinged with antipathy against lawyers.

They did understand statistics. I am not saying their math was wrong. But they did not understand evidence, did not understand relevance, did not understand relevance drift (or, as I prefer to call it, relevance evolution), and did not understand efficient legal practice. Many I have talked to did not have any real understanding of how lawyers worked at all, much less document review. Most were just scientists or statisticians. They meant well, but they did harm nonetheless. These scientists did not have any legal training. If there were any lawyers on the version 1.0 software development teams, they were not heard, or had never really practiced law. (As a customer, I know I was brushed off.) Things have gotten much better in this regard since 2008 and 2009, but still, many vendors have not gotten the message. They still manufacture version 1.0 type predictive coding software.

Jeremy Pickens, Ph.D., Catalyst’s in-house information scientist, seems to agree with my assessment of control sets. See Pickens, An Exploratory Analysis of Control Sets for Measuring E-Discovery Progress, DESI VI, 2015, where he reports on his investigation of the effectiveness of control sets to measure recall and precision. Jeremy used the Grossman and Cormack TAR Evaluation Toolkit for his data and gold standards. Here is his conclusion:

A popular approach in measuring e-discovery progress involves the creation of a control set, holding out randomly selected documents from training and using the quality of the classification on that set as an indication of progress on or quality of the whole. In this paper we do an exploratory data analysis of this approach and visually examine the strength of this correlation. We found that the maximum-F1 control set approach does not necessarily always correlate well with overall task progress, calling into question the use of such approaches. Larger control sets performed better, but the human judgment effort to create these sets have a significant impact on the total cost of the process as a whole.

A secret control set is not a part of the Predictive Coding 3.0 method. As will be explained, I still have random selection reviews for prevalence and quality control purposes – Steps Three and Seven – but the documents are not secret and they are typically used for training (although they do not have to be). Moreover, version 3.0 eliminates any kind of special first round of training seed set, random based or otherwise. The first time the machine training begins is simply the first round. Sometimes it is big, sometimes it is not. It all depends on my technical and legal analysis of the data presented or the circumstances of the project. It also depends on my legal analysis and the disputed issues of fact in the law suit or other legal investigation. That is the kind of thing that lawyers do every day. No magic required, not even high intelligence; only background and experience as a practicing lawyer are required.
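
Here is a minimal sketch of the kind of simple prevalence estimate the Step Three random sample supports. The sample size, hit count, and normal-approximation interval below are illustrative assumptions, not the method details, which are covered in part two. Unlike a control set, nothing about this sample is secret: the documents can be recoded as relevance evolves and used for training.

```python
# A minimal sketch of a prevalence (richness) estimate from a simple random sample.
# Sample size and counts are invented; the interval is a plain normal approximation.
from math import sqrt

def prevalence_estimate(sample_size, relevant_in_sample, z=1.96):
    p = relevant_in_sample / sample_size
    margin = z * sqrt(p * (1 - p) / sample_size)
    return p, max(p - margin, 0.0), p + margin

collection_size = 1_000_000
point, low, high = prevalence_estimate(sample_size=1_534, relevant_in_sample=23)
print(f"Estimated prevalence: {point:.2%} (roughly {low:.2%} to {high:.2%})")
print(f"Projected relevant documents: {int(low * collection_size):,} "
      f"to {int(high * collection_size):,}")
```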

The seed set is dead. So too is the control set. Other statistical methods must be used to calculate recall ranges and other numeric parameters beyond the ineffective control set method. Other methods beyond just statistics must be used to evaluate the quality and success of a review project. See, e.g., EI-Recall and Zero Error Numerics (which includes statistics, but is not limited to them).

A full description of the eight-step model used to describe Predictive Coding 3.0 will follow, step by step, in part two of this article.

Grossman and Cormack Patents

I do not claim any patents or other intellectual property rights to Predictive Coding 3.0, aside from copyrights to my writings, and certain trade secrets that I use, but have not published or disclosed outside of my circle of trust. Hopefully my 3.0 method does not infringe any existing patent claims. In the course of writing this article I happened to notice, for the first time, that my 3.0 method appears to have several features in common with some of the descriptions of predictive coding work flow in the predictive coding patents of Gordon Cormack and Maura Grossman. Their patents are all entitled Systems and methods for classifying electronic information using advanced active learning techniques: December 31, 2013, 8,620,842, Cormack; April 29, 2014, 8,713,023, Grossman and Cormack; and September 16, 2014, 8,838,606, Grossman and Cormack.

The slight similarities are not too surprising. My development of the Predictive Coding 3.0 method was based in part on their research and publications. It was also based on my studies of the publications of others, the prior art, as well as my own research and a variety of predictive coding experiments. Finally, as with Maura Grossman, the 3.0 methods developed out of my experience with real-world legal predictive coding projects since 2011. All seem like obvious methods to me.

The Grossman and Cormack patents and patent applications are interesting for a number of reasons. I suggest you read them. For instance, they all contain the following paragraph in the Background section explaining why their invention is needed. As you can see it criticizes all of the existing version 1.0 software on the market at the time of their applications (2013) (emphasis added):

Generally, these e-discovery tools require significant setup and maintenance by their respective vendors, as well as large infrastructure and interconnection across many different computer systems in different locations. Additionally, they have a relatively high learning curve with complex interfaces, and rely on multi-phased approaches to active learning. The operational complexity of these tools inhibits their acceptance in legal matters, as it is difficult to demonstrate that they have been applied correctly, and that the decisions of how to create the seed set and when to halt training have been appropriate. These issues have prompted adversaries and courts to demand onerous levels of validation, including the disclosure of otherwise non-relevant seed documents and the manual review of large control sets and post-hoc document samples. Moreover, despite their complexity, many such tools either fail to achieve acceptable levels of performance (i.e., with respect to precision and recall) or fail to deliver the performance levels that their vendors claim to achieve, particularly when the set of potentially relevant documents to be found constitutes a small fraction of a large collection.

They then indicate that their invention overcomes these problems and is thus a significant improvement over prior art. In Figure Eleven of their patent (shown below) they describe one such improvement, “an exemplary method 1100 for eliminating the use of seed sets in an active learning system in accordance with certain embodiments.”

[Figure Eleven of the Grossman & Cormack patent]

These are basically the same kind of complaints that I have made here against Predictive Coding 1.0 and 2.0. I understand the criticisms regarding complex interfaces, that rely on multi-phased approaches to active learning. If the software forces use of control set and seed set nonsense, then it is an overly complex interface. (It is not overly complex if it allows other types of search, such as keyword, similarity or concept, for this degree of complexity is necessary for a multimodal approach.) I also understand their criticism of the multi-phased approaches to active learning, which was fixed in 2.0 and CAL.

The Grossman & Cormack criticism about low prevalence document collections, which is the rule, not the exception in legal search, is also correct. It is another reason the control set approach cannot work in legal search. The number of relevant documents to be found constitutes a small fraction of a large collection and so the control set random sample is very unlikely to be representative, much less complete. That is an additional problem separate and apart from relevance shift.

About the only complaint the Grossman & Cormack patent makes that I do not understand is the gripe about large infrastructure and interconnection across many different computer systems in different locations. For Kroll Ontrack software at least, that is the vendor’s problem, not the attorney’s. All the user does is sign on to a secure cloud server.

Notice that there is no seed set or control set in the Grossman & Cormack patent diagram as you see in the old Recommind patent. Much of the rest of the patent, in so far as I am able to understand the arcane patent language used, consists of applications of CAL techniques that have been tested and explained in their writings, including many additional variables and techniques not mentioned in their articles. See, e.g., Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014. Their patent includes CAL methods, of course, but also eliminates the use of seed sets. I presume this means they also eliminate control sets, at least in some of their methods. If true, then in that sense their patents are like my own 3.0 innovation.

To be continued and concluded with a lengthy description of the Predictive Coding version 3.0 eight-step method.

11 Responses to Predictive Coding 3.0

  1. Greetings Ralph:

    Thanks for the kind words about me and Catalyst.

    However, we have never used a control set in our TAR 2.0 process, nor do we believe in them. Our engine ranks all of the documents all of the time and can do so in minutes. From the beginning, we built Insight Predict around continuous ranking of all the documents. We never needed to use a control set.

    As I point out in my book “TAR for Smart People,” we measure training progress in part by measuring the movement in document ranking as the review progresses. That allows us to see exactly what the population is doing over time. Stability in our world means that the document ranking as a whole is no longer changing much. We don’t use stability the way TAR 1.0 engines do (as a signal to stop training) but it is a helpful indicator of how things are going.

    Because we don’t use a control set, we can handle rolling productions without the need to toss out initial training. As new documents are loaded, they simply integrate into the ranking. To the extent they are similar to other documents, they fit right into the mix. To the extent they are different, we pick them up through our contextual diversity algorithm.

    Ultimately, it seems almost silly to suggest that a 500 document control set meaningfully represents the much larger document population for training purposes. For a richness estimate, sure. But nothing more.

    Thanks as always for your insightful writing.

    John Tredennick

  2. Ralph Artigliere says:

    Ralph,
    The value of your decision to evolve to fewer blogs with more comprehensive and deeper insight is validated by this excellent piece, which I am sure will generate valuable dialogue. Kudos and thank you.

  3. […] help lawyers to find electronic evidence in a systematic, repeatable, and verifiable manner. See: Predictive Coding 3.0, part one and part two. The hybrid search method of AI human computer interaction developed in […]

  4. […] is the second part of a two-part article. Part One of Predictive Coding 3.0 described the errors in Predictive Coding 1.0 and 2.0, errors that are […]

  5. […] a recent blog post, Ralph Losey lays out a case for the abolishment of control sets in e-discovery, particularly if […]

  6. […] in time. (It might if it uses a control set, but that is a different story, explained in my article Predictive Coding 3.0). The software I use has no trouble at all disregarding any early training if it later finds that […]

  7. Joshua Rubin says:

    Thank you for a typically excellent analysis, Ralph.

    I think that there is another very important corollary to the fact that reviewers’ understanding increases over the course of the review. The corollary is that predictive coding workflows should include a new “loopback” method for maximizing speed, efficiency, precision, and recall. The loopback method is the reverse of what some existing predictive coding systems already provide. It is based on the assumption that the best training judgments (or texts) are the *latest* training judgments.

    Some existing predictive coding systems can show us which actual reviewer judgments differ from the predictions based on all of the earlier training judgments. There has been an implicit assumption that the earlier training judgments are the “gold standard.” The point of these comparisons has been to see how “stable” the predictions have become by measuring the decline in those differences over the course of the review.

    But as you’ve pointed out, Ralph, many of those early judgments may be erroneous in hindsight. And yet they will continue to influence the predictions, and to appear to validate later erroneous judgments. So the system will perpetuate errors unless we can find which of the earlier judgments were, in hindsight, erroneous. By eliminating the erroneous early judgments, the system can then predict based on only the best judgments.

    There is an easy way for us to pinpoint those erroneous earlier judgments. What I’m proposing is that the system should provide the ability to use only the *latest* training judgments to make “predictions” about earlier training judgments, and then to show us where the earlier training judgments differ from those “predictions.” This would give us an opportunity to find and eliminate erroneous earlier judgments from the training base. Once we have eliminated these early erroneous judgments from the training base, the system can then make better predictions based on a cleaner set of training judgments.

    Where the system allows reviewers to elect whether a document that has been categorized should also be used for training, the system could also use its clarified predictions to show us which documents that have been categorized, but have not been designated as training documents, may have been miscategorized.

    This loopback method would provide a better method for determining stabilization. Perhaps more importantly, it would provide a much cleaner final product.

    At a minimum, we could use this “loopback” method toward the end of the review. We could also use it continuously during the course of the review. Continuous loopback could materially cut the time and expense of the review.

    What do you think?

    Best,
    Joshua Rubin

    • Ralph Losey says:

      Thanks for the comment.

      Some of what you’re proposing is already automated in KO’s tool, EDR, and some is something I do manually. All made easier by not using control sets, of course. I note, however, that while concept drift is one reason control sets do not work (there are others), I find that in most projects it (concept shift) is not so pervasive as to require too much concern as to training. The EDR classifier is robust enough to handle some inconsistencies, and we must balance the desire for perfection with the cost/time factors. I’m quite happy with any precision in the 90s. I’ve seen other methods with precision rates in the 20s!

      I invite you to join me in discrediting the Control Set method that is so confusing to our brothers and sisters in the Bar who are trying to use predictive coding. I think it is the source of much harm and false confidence as to Recall. I need other bloggers to join me in speaking out.

      Also waiting for some scholarly articles about the control set fallacies too. All I see is kowtowing to this established orthodoxy.
