Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Three

This is part three of what has now become a four-part blog post: Latest Grossman and Cormack Study Proves Folly of Using Random Search for Machine Training – Part One and Part Two.

Professor Gordon Cormack

Yes, my article on this experiment and report by Professor Gordon Cormack and attorney Maura Grossman is rapidly becoming as long as the report itself, and, believe it or not, I am not even going into all of the aspects in this very deep, multifaceted study. I urge you to read the report. It is a difficult read for most, but worth the effort. Serious students will read it several times. I know I have. This is an important scientific work presenting unique experiments that tested common legal search methods.

The Cormack Grossman paper was peer reviewed by other scientists and presented at the major annual event for information retrieval scientists, the ACM SIGIR conference. ACM is the Association for Computing Machinery, the world’s largest educational and scientific computing society. SIGIR is the Special Interest Group on Information Retrieval section of ACM. Hundreds of scientists and academics served on the organizing committees for the 2014 SIGIR conference in Australia. They came from universities and large corporate research labs from all over the world, including Google, Yahoo, and IBM. Here is a list with links to all of the papers presented.

All attorneys who do legal search should at least have a rudimentary understanding of the findings of Cormack and Grossman on the predictive coding training methods analyzed in this report. That is why I am making this sustained effort to provide my take on it, and make their work a little more accessible. Maura and Gordon have, by the way, generously given of their time to try to ensure that my explanations are accurate. Still, any mistakes on that account are solely my own.

Findings of Cormack Grossman Study

Here is how Cormack and Grossman summarize their findings:

The results presented here do not support the commonly advanced position that seed sets, or entire training sets, must be randomly selected [19, 28] [contra 11]. Our primary implementation of SPL, in which all training documents were randomly selected, yielded dramatically inferior results to our primary implementations of CAL and SAL, in which none of the training documents were randomly selected.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pgs. 7-8.

CAL v. SPL

Now for the details of the results comparing the previously described methods of CAL, SAL and SPL. First, let us examine the comparison between the CAL and SPL machine training methods. To refresh your memory, CAL is a simple type of multimodal training method wherein two methods are used: keyword search results are used in the first round of training, and in all following rounds the documents ranked as most probably relevant are used. SPL is a pure random method, a monomodal method. With SPL all documents are selected by random sampling for training in all rounds.
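
To make the difference between the two protocols concrete, here is a minimal sketch of how each round's training batch might be selected under CAL and SPL. This is only an illustration of the selection logic described above, not the authors' actual implementation; the function names, the `scores` dictionary of model-predicted relevance probabilities, and the `keyword_hits` list are all assumptions of mine.

```python
import random

def select_spl_batch(unreviewed, batch_size):
    """SPL (Simple Passive Learning): every training batch is drawn
    at random from the documents not yet reviewed."""
    return random.sample(unreviewed, min(batch_size, len(unreviewed)))

def select_cal_batch(unreviewed, scores, keyword_hits, batch_size, first_round):
    """CAL (Continuous Active Learning): a keyword-selected seed set in the
    first round, then the documents the current model ranks as most likely
    relevant in every round after that."""
    if first_round:
        return keyword_hits[:batch_size]
    ranked = sorted(unreviewed, key=lambda doc: scores[doc], reverse=True)
    return ranked[:batch_size]
```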

Cormack and Grossman found that the “CAL protocol achieves higher recall than SPL, for less effort, for all of the representative training-set sizes.” Id. at pg. 4. This means you can find more relevant documents using CAL than with a random method, and you can do so faster and thus at less expense.

To drill down even deeper into their findings it is necessary to look at the graphs in the report that show how the search progressed through all one hundred rounds of training and review for various document collections. This is shown for CAL v. SPL in Figure 1 of the report. Id. at pg. 5. The line with circle dots at the top of each graph plots the retrieval rate of CAL, the clear winner on each of the eight search tasks tested. The other three lines show the random approach, SPL, using three different training-set sizes. Cormack and Grossman summarize the CAL v. SPL findings as follows:

After the first 1,000 documents (i.e., the seed set), the CAL curve shows a high slope that is sustained until the majority of relevant documents have been identified. At about 70% recall, the slope begins to fall off noticeably, and effectively plateaus between 80% and 100% recall. The SPL curve exhibits a low slope for the training phase, followed by a high slope, falloff, and then a plateau for the review phase. In general, the slope immediately following training is comparable to that of CAL, but the falloff and plateau occur at substantially lower recall levels. While the initial slope of the curve for the SPL review phase is similar for all training-set sizes, the falloff and plateau occur at higher recall levels for larger training sets. This advantage of larger training sets is offset by the greater effort required to review the training set: In general, the curves for different training sets cross, indicating that a larger training set is advantageous when high recall is desired.

CAL v. SAL

The Cormack Grossman experiment also compared the CAL and SAL methods. Recall that the SAL method is another simple multimodal method where only two methods are used to select training documents. Keywords are again used in the first round only, just like in the CAL protocol. Thereafter, in all subsequent rounds of training, machine-selected documents are used, chosen based on the machine’s uncertainty about their classification. That means the training is focused on the mid-ranked documents about which the machine is most uncertain.
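
Continuing the sketch above, the only change under SAL is the selection rule after the seed set: instead of the top of the ranking, the batch is drawn from the documents whose predicted probability of relevance sits closest to the 50% mark. Again, this is an illustrative assumption about how uncertainty sampling is commonly implemented, not the authors' code.

```python
def select_sal_batch(unreviewed, scores, keyword_hits, batch_size, first_round):
    """SAL (Simple Active Learning): a keyword-selected seed set in the first
    round, then uncertainty sampling -- the documents whose predicted
    probability of relevance is closest to 0.5, i.e., the ones the
    classifier is least sure how to label."""
    if first_round:
        return keyword_hits[:batch_size]
    by_uncertainty = sorted(unreviewed, key=lambda doc: abs(scores[doc] - 0.5))
    return by_uncertainty[:batch_size]
```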

Cormack and Grossman found that “the CAL protocol generally achieves higher recall than SAL,” but the results were closer and more complex. Id. At one point in the training SAL became as good as CAL: it achieved a specific recall value with nearly the same effort as CAL. The authors found that this was because many high-probability documents began to be selected by the machine as uncertainty-based training documents. This happened after all of the mid-scoring documents had been used up. In other words, at some point the distinction between the two methods decreased, and more high-probability documents were used in SAL, in almost the same way they were used in CAL. That allowed SAL to catch up with CAL and, in effect, become almost as good.

This catch up point is different in each project. As Cormack and Grossman explain:

Once stabilization occurs, the review set will include few documents with intermediate scores, because they will have previously been selected for training. Instead, the review set will include primarily high-scoring and low-scoring documents. The high-scoring documents account for the high slope before the inflection point; the low-scoring documents account for the low slope after the inflection point; the absence of documents with intermediate scores accounts for the sharp transition. The net effect is that SAL achieves effort as low as CAL only for a specific recall value, which is easy to see in hindsight, but difficult to predict at the time of stabilization.

This inflection point and other comparisons can be easily seen in Figure 2 of the report (shown below). Id. at pg. 6. Again, the line with circle dots at the top of each graph, the one that always starts off fastest, plots the retrieval rate of CAL. Again, it does better than SAL in each of the eight search tasks tested. The other three lines show the uncertainty approach, SAL, using three different training-set sizes. CAL does better than SAL in all eight of the matters, but the differences are not nearly as great as in the comparison between CAL and SPL.

Cormack and Grossman summarize the CAL v. SAL findings as follows:

Figure 2 shows that the CAL protocol generally achieves higher recall than SAL. However, the SAL gain curves, unlike the SPL gain curves, often touch the CAL curves at one specific inflection point. The strong inflection of the SAL curve at this point is explained by the nature of uncertainty sampling: Once stabilization occurs, the review set … (see quote above for the rest of this sentence.)

This experiment compared one type of simple multimodal machine training method with another. It found that, with the data sets tested and the other standard procedures set forth in the experiment, the method that used high-ranking documents for training, what William Webber calls the Relevance method, performed somewhat better than the method that used mid-ranked documents, what Webber calls the Uncertainty method.

This does not mean that the uncertainty method should be excluded from a full multimodal approach in real world applications. It just means that here, in this one experiment, albeit a very complex and multifaceted experiment, the relevance method outperformed the uncertainty method.

I have found that in the real world of very complex (messy even) legal searches, it is good to use both high- and mid-ranked documents for training, what Cormack and Grossman call CAL and SAL, and what Webber calls Relevance and Uncertainty training. It all depends on the circumstances, including the all-important cost component. In the real world you use every method you can think of to help you find what you are looking for, not just one or two, but dozens.

Grossman and Cormack know this very well too, which I know from private conversations with them on this subject, and also from the conclusion to their report:

There is no reason to presume that the CAL results described here represent the best that can be achieved. Any number of feature engineering methods, learning algorithms, training protocols, and search strategies might yield substantive improvements in the future. The effect of review order and other human factors on training accuracy, and thus overall review effectiveness, may also be substantial.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 9.

Ralph Losey with some of his many computer tools

My practical takeaway from the Cormack Grossman experiment is that focusing on high-ranking documents is a powerful search method. It should be given significant weight in any multimodal approach, especially when the goal is to quickly find as many relevant documents as possible. The “continuous” training aspect of the CAL approach is also intriguing; that is, you keep doing machine training throughout the review project and batch the reviews accordingly. This could become a project management issue, but, if you can pull it off within proportionality and requesting-party constraints, it just makes common sense to do so. You might as well get as much help from the machine as possible and keep getting its probability predictions for as long as you are still doing reviews and can make last-minute batch assignments accordingly.
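
For readers who think in pseudocode, here is one way such a continuous review loop could be organized. It is a hedged sketch of the general idea only: the `train`, `score`, and `reviewers_code` callables are stand-ins I made up, and the stopping rule (quit after a few nearly empty batches) is one common convention, not anything prescribed by the study.

```python
def continuous_review(collection, seed_set, reviewers_code, train, score,
                      batch_size=1000, lean_batch_limit=3):
    """Sketch of a continuous (CAL-style) review: every coded batch feeds
    straight back into training, and each new batch is drawn from the
    current top of the ranking."""
    training = {doc: reviewers_code(doc) for doc in seed_set}
    unreviewed = [doc for doc in collection if doc not in training]
    lean_batches = 0
    while unreviewed and lean_batches < lean_batch_limit:
        model = train(training)  # retrain on everything coded so far
        unreviewed.sort(key=lambda doc: score(model, doc), reverse=True)
        batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
        decisions = {doc: reviewers_code(doc) for doc in batch}
        training.update(decisions)  # review work doubles as training input
        relevant_found = sum(decisions.values())
        lean_batches = lean_batches + 1 if relevant_found < 0.05 * batch_size else 0
    return [doc for doc, relevant in training.items() if relevant]
```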

I have done several reviews in such a continuous training manner without really thinking about the fact that the machine input was continuous, including my first Enron experiment. Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron. But this causes me to rethink the flow chart shown below that I usually use to explain the predictive coding process. The work flow shown is not a CAL approach, but rather a SAL type of approach, where there is a distinct stop in training after step five, and the review work in step seven is based on the last rankings established in step five.

Predictive coding work flow diagram

The continuous work flow is slightly more difficult to show in a diagram, and to implement, but it makes good common sense if you are in a position to pull it off. Below is the revised work flow, which updates the language and shows how the training continues throughout the review.

Predictive coding work flow diagram, revised for continuous active learning (CAL)

Machine training is still done in steps four and five, but it now continues through step seven as well. There are other ways it could be implemented, of course, but this is the CAL approach I would use in a review project where such complex batching and continuous training otherwise makes sense. Of course, it is not necessary in any project where the review in steps four and five effectively finds all of the relevant documents required. This is what happened in my Enron experiment. Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron. There was no need to do a proportional final review, step seven, because all the relevant documents had already been reviewed as part of the machine training review in steps four and five. In the Enron experiment I skipped step seven and went right from step six to step eight, production. I have been able to do this in other projects as well.

Strengths of a Relevancy Weighted Type of CAL

The findings in this experiment as to the strengths of using Relevancy training confirm what I have seen in most of my search projects. I usually start with the highest-ranking documents to quickly teach the machine what I am looking for. I find that this is a good way to start training. Again, it just makes common sense to do so. It is somewhat like teaching a human, or a dog for that matter: you teach the machine relevance classification by telling it when it is right (positive reinforcement) and when it is wrong. This kind of feedback is critical in all learning. In most projects, feedback on predictions of highly probable relevance is the fastest way to get to the most important documents. For those reasons I agree with Cormack and Grossman’s conclusion that CAL is a superior method for quickly finding the most relevant documents:

CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

Id. But then again, I would never rely on Relevancy CAL type searches alone. They get results fast, but also tend to lead to a somewhat myopic focus on the high end, where you may miss new, different types of relevant documents. For that reason, I also use SAL types of searches to include the mid-range documents from the Uncertainty method. That is an important way to help the machine better understand what documents I am looking for. As Cormack and Grossman put it:

The underlying objective of CAL is to find and review as many of the responsive documents as possible, as quickly as possible. The underlying objective of SAL, on the other hand, is to induce the best classifier possible, considering the level of training effort. Generally, the classifier is applied to the collection to produce a review set, which is then subject to manual review.

Id. at 8.

Similarity and other concept-type search methods are also a good way to quickly find as many responsive documents as possible. So too are keyword searches, and not just in the first round, but in any round. Further, this experiment, which is already very complex (to me at least), does not include the important real-world component of highly relevant versus merely relevant documents. I never train on relevancy alone, but always include a hunt for the hot documents. I want to train the machine to understand the difference between the two classifications. Cormack and Grossman do not disagree. As they put it, “any number of feature engineering methods, learning algorithms, training protocols, and search strategies” could improve upon a CAL-only approach.

There are other ways to improve the classifier besides focusing on mid-range probability documents, although I have found that the uncertainty method is the best way to improve relevance classifications. It also helps to make sure your training on the low end is right, meaning you should review some of the documents predicted to be highly probable irrelevant. Both relevant and irrelevant training examples are helpful. Personally, I also like to include some random elements, especially at first, to be sure I did not miss any outlier-type documents, and to be sure I have a good feel for the irrelevant documents of these custodians too. Yes, chance has a place too, so long as it does not take over and become the whole show.
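
To illustrate the kind of blending described above, here is a rough sketch of how a single training batch might mix the three selection methods. The 70/20/10 split and the function names are my own hypothetical choices for illustration; in practice the mix would shift with the project, the budget, and the stage of the review.

```python
import random

def multimodal_batch(unreviewed, scores, batch_size):
    """Blend of selection methods: mostly top-ranked documents (relevance
    feedback), a slice of the most uncertain mid-range documents, and a
    small random slice to catch outliers."""
    n_top = int(batch_size * 0.7)
    n_mid = int(batch_size * 0.2)
    n_rand = batch_size - n_top - n_mid

    ranked = sorted(unreviewed, key=lambda doc: scores[doc], reverse=True)
    top = ranked[:n_top]

    rest = ranked[n_top:]
    mid = sorted(rest, key=lambda doc: abs(scores[doc] - 0.5))[:n_mid]

    mid_set = set(mid)
    leftover = [doc for doc in rest if doc not in mid_set]
    rand = random.sample(leftover, min(n_rand, len(leftover)))

    return top + mid + rand
```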

Supplemental Findings on Random Search

In addition to comparing CAL with SAL and SPL, Cormack and Grossman experimented with what would happen to the effectiveness of both the CAL and SAL protocols if more random elements were added to the methods. They experimented with a number of different variables, including substituting random selection for keyword search in the initial round of training (the seed set).

As you would expect, the general result was to decrease the effectiveness of every search method in which random selection was substituted, whether for keyword search, high-ranking relevance, or mid-ranking uncertainty selection. The negative impact was strongest in datasets where prevalence was low, which is typical in litigation. Cormack and Grossman tested eight datasets where the prevalence of responsive documents varied from 0.25% to 3.92%, which, as they put it, “is typical for the legal matters with which we have been involved.” The size of the sets tested ranged from 293,000 documents to just over 1.1 million. The random-based search of the lowest-prevalence dataset tested, matter 203, the one with a 0.25% prevalence rate, was, in their words, a spectacular failure. Conversely, the negative impact was lessened with higher-prevalence datasets. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 7.
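
A bit of back-of-the-envelope arithmetic shows why low prevalence is so unforgiving for purely random training. The numbers below are illustrative only, built from the prevalence range and collection sizes reported in the study and the 1,000-document seed set mentioned in connection with Figure 1; the pairing of the smallest collection with the lowest prevalence is my assumption, not a figure from any one matter.

```python
# Why random training struggles at low prevalence (illustrative numbers only).
collection_size = 293_000        # smallest collection size reported in the study
prevalence = 0.0025              # 0.25% responsive, the lowest prevalence tested
random_training_set = 1_000      # a seed-set-sized random draw

total_relevant = collection_size * prevalence           # roughly 730 responsive documents
expected_in_sample = random_training_set * prevalence   # about 2.5 positive training examples

print(total_relevant, expected_in_sample)
```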

Cormack and Grossman responded to the popular misconception that predictive coding does not work in such low prevalence datasets.

Others assert that these are examples of “low-prevalence” or “low-richness” collections, for which TAR is unsuitable [19]. We suggest that such assertions may presuppose an SPL protocol [11], which is not as effective on low-prevalence datasets. It may be that SPL methods can achieve better results on higher-prevalence collections (i.e., 10% or more responsive documents).

Id. at 9.

In fact, information scientists have been working with low prevalence datasets for decades, which is one reason Professor Cormack had a ready collection of pre-coded documents by which to measure recall, a so-called gold standard of assessments from prior studies. Cormack and Grossman explain that the lack of pre-tested datasets with high prevalence is the reason they did not use such collections for testing. They also speculate that if such high-prevalence datasets were tested, the random-only (SPL) method would do better than it did in the low-prevalence datasets used in their experiment.

However, no such collections were included in this study because, for the few matters with which we have been involved where the prevalence exceeded 10%, the necessary training and gold-standard assessments were not available. We conjecture that the comparative advantage of CAL over SPL would be decreased, but not eliminated, for high-prevalence collections.

Id.

They are probably right: if the datasets have higher prevalence, then the chances are that random samples will find more relevant documents for training. But that still does not make the blind draw a better way to find things than looking with your eyes wide open. Plus, the typical way to attain high-yield datasets is by keyword filtering out large segments of the raw data before beginning a predictive coding search. When you keyword filter like that before beginning machine training, the chances are you will leave behind a significant portion, if not most, of the relevant documents. Keyword filtering often has low recall, or, when broad enough to include most of the relevant documents, it is very imprecise. Then you are back to the same low-prevalence situation.
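
The trade-off is easy to quantify with a toy example. The figures below are hypothetical, chosen only to show the mechanics: filtering raises the prevalence of the set you feed to the machine, but the relevant documents the filter missed are gone for good, so the filter's own recall becomes a hard ceiling on the recall of the entire project.

```python
# Hypothetical numbers illustrating the keyword pre-filtering trade-off.
collection = 1_000_000
relevant_in_collection = 10_000          # 1% prevalence overall

filtered_set = 200_000                   # documents that hit the keyword filter
relevant_in_filtered = 6_500             # i.e., the filter's recall is 65%

prevalence_after_filter = relevant_in_filtered / filtered_set       # 3.25%
recall_ceiling = relevant_in_filtered / relevant_in_collection      # 65%

print(f"prevalence after filtering: {prevalence_after_filter:.2%}")
print(f"best possible recall for the whole project: {recall_ceiling:.0%}")
```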

Better to limit filtering before machine training to obviously irrelevant material, or ESI not appropriate for training, such as non-text documents like photos, music and voice mail. Use other methods to search for those types of ESI. But do not use keyword filtering on text documents simply to create an artificially high prevalence just because the random-based software you use will only work that way. That is the tail wagging the dog.

For more analysis and criticism of using keywords to create artificially high prevalence, a practice Cormack and Grossman call Collection Enrichment, see another excellent article they wrote: Comments on “The Implications of Rule 26(g) on the Use of Technology-Assisted Review,” 7 Federal Courts Law Review 286 (2014) at pgs. 293-295, 300-301. This article also contains good explanations of the instant study of CAL, SAL and SPL. See especially Table 1 at pg. 297.

The negative impact of random elements on machine training protocols is a “no duh” to experienced searchers. See, e.g., the excellent series of articles by John Tredennick, including his review of the Cormack Grossman study: Pioneering Cormack/Grossman Study Validates Continuous Learning, Judgmental Seeds and Review Team Training for Technology Assisted Review.

It never helps to turn to lady luck, to random chance, to improve search. Once you start relying on dice to decide what to do, you are just spinning your wheels.

Supplemental Findings on Keywords and Random Search

Cormack and Grossman also tested what would happen if keywords were used instead of random selections, even when the keywords were not tested first against the actual data. This poor practice of using unverified keywords is what I call the Go Fish approach to keyword search. Child’s Game of “Go Fish” is a Poor Model for e-Discovery Search (October 2009). Under this naive approach, attorneys simply guess what keywords might be contained in relevant documents without testing how accurate their guesses are. It is a very simplistic approach to keyword search, yet it is still widely employed in the legal profession. This approach has been criticized by many, including Judge Andrew Peck in his excellent Gross Construction opinion, the so-called wake-up call for NY attorneys on search. William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009).

Cormack and Grossman also tested what would happen if such naive keyword selections were used instead of the high- or mid-probability methods (CAL and SAL) for machine training. The naive keywords used in these supplemental comparison tests did fairly well. This is consistent with my multimodal approach, where all kinds of search methods are used in all rounds of training.

The success of naive keyword selection for machine training is discussed by Cormack and Grossman as an unexpected finding (italics and parens added):

Perhaps more surprising is the fact that a simple keyword search, composed without prior knowledge of the collection, almost always yields a more effective seed set than random selection, whether for CAL, SAL, or SPL. Even when keyword search is used to select all training documents, the result is generally superior to that achieved when random selection is used. That said, even if (random) passive learning is enhanced using a keyword-selected seed or training set, it (passive learning) is still dramatically inferior to active learning. It is possible, in theory, that a party could devise keywords that would render passive learning competitive with active learning, but until a formal protocol for constructing such a search can be established, it is impossible to subject the approach to a controlled scientific evaluation. Pending the establishment and scientific validation of such a protocol, reliance on keywords and (random) passive learning remains a questionable practice. On the other hand, the results reported here indicate that it is quite easy for either party (or for the parties together) to construct a keyword search that yields an effective seed set for active learning.

Id. at 8.

Cormack and Grossman summarize their findings on the impact of keywords in the first round of training (seed set) on CAL, SAL and SPL:

In summary, the use of a seed set selected using a simple keyword search, composed prior to the review, contributes to the effectiveness of all of the TAR protocols investigated in this study.

Keywords still have an important place in any multimodal, active, predictive coding protocol. This is, however, completely different from using keywords, especially untested naive keywords, to filter the raw data in a misguided attempt to create high-prevalence collections, all so that the random (passive) method might have some chance of success.

To be continued . . . in Part Four I will conclude with final opinions and analysis and my friendly recommendations for any vendors still using random-only training protocols. 
