Are You The Lit Support Tech?

August 16, 2014

The video below shows what it is like on a Friday afternoon in the Litigation Support Departments of most law firms in the U.S. and U.K. That’s because most lawyers are still clueless about technology in general, much less the intricacies of electronic discovery work and ESI processing. There are some exceptions of course. A few firms have it all together and everyone gets it. They are called sole proprietorships. Then there are also some large law firms like mine where this does not occur. Why? What’s our secret, you may wonder? We outsourced our whole litigation support function to a top vendor. I’m sure you know who they are. Now, as the partner in charge of our litigation support department, my Friday afternoons are relatively stress free. I just approve KO doing it, and they do it well. It is their problem, and their core competence.

I no longer have to supervise a litigation support department. Instead I manage a relationship with a vendor. It is much more pleasant, believe me. When not working on projects and serving clients, I focus my internal e-discovery firm management time on the training and education of my firm’s lawyers and paralegals. IMO this is the way it should be. Law firms should stick to their core competency, practicing law and teaching law, and should not try to run little vendor corps in their midst.

With a smoothly running outsourced deal my Friday afternoons are much nicer. I can now look at this video and laugh. My cringing at the same time has substantially diminished. The anguish and bitter ironies are almost all gone. I know that it is getting better every day, that real learning is happening, and, for me, these scenes have disappeared, well, at least they’ve moved to a better venue. To those of you still living the old dream, you have my sympathies and condolences. Know that there is a way out.

 


Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Four

August 3, 2014

This is the conclusion of my four part blog: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One and Part Two and Part Three.

Cormack and Grossman’s Conclusions

Gordon Cormack and Maura Grossman have obviously put a tremendous amount of time and effort into this study. In their well-written conclusion they explain why they did it, and provide a good summary of their findings:

Because SPL can be ineffective and inefficient, particularly with the low-prevalence collections that are common in ediscovery, disappointment with such tools may lead lawyers to be reluctant to embrace the use of all TAR. Moreover, a number of myths and misconceptions about TAR appear to be closely associated with SPL; notably, that seed and training sets must be randomly selected to avoid “biasing” the learning algorithm.

This study lends no support to the proposition that seed or training sets must be random; to the contrary, keyword seeding, uncertainty sampling, and, in particular, relevance feedback – all non-random methods – improve significantly (P < 0.01) upon random sampling.

While active-learning protocols employing uncertainty sampling are clearly more effective than passive-learning protocols, they tend to focus the reviewer’s attention on marginal rather than legally significant documents. In addition, uncertainty sampling shares a fundamental weakness with passive learning: the need to define and detect when stabilization has occurred, so as to know when to stop training. In the legal context, this decision is fraught with risk, as premature stabilization could result in insufficient recall and undermine an attorney’s certification of having conducted a reasonable search under (U.S.) Federal Rule of Civil Procedure 26(g)(1)(B).

This study highlights an alternative approach – continuous active learning with relevance feedback – that demonstrates superior performance, while avoiding certain problems associated with uncertainty sampling and passive learning. CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 9.

The insights and conclusions of Cormack and Grossman are perfectly in accord with my own experience and practice with predictive coding search efforts, both in messy real-world projects and in the four controlled scientific tests I have done over the last several years (only two of which have been reported to date; the fourth is still in progress). I agree that a relevancy approach that emphasizes high-ranked documents for training is one of the most powerful search tools we now have. So too is uncertainty training (mid-ranked) when used judiciously, as well as keywords and a number of other methods. All the many tools we have to find both relevant and irrelevant documents for training should be used, depending on the circumstances, including even some random searches.

In my view, we should never use just one method to select documents for machine training, and ignore the rest, even when it is a good method like Cormack and Grossman have shown CAL to be. When the one method selected is the worst of all possible methods, as random search has now been shown to be, then the monomodal approach is a recipe for ineffective, over-priced review.

Why All the Foolishness with Random Search?

As shown in Part One of this article, it is only common sense to use what you know to find training documents, and not rely on the so-called easy way of rolling dice. A random chance approach is essentially a fool’s method of search. The search for evidence to do justice is too important to leave to chance. Cormack and Grossman did the legal profession a favor by taking the time to prove the obvious in their study. They showed that even very simplistic multimodal search protocols, CAL and SAL, do better at machine training than a monomodal, random-only protocol.

Information scientists already knew this rather obvious truism, that multimodal is better, that the roulette wheel is not an effective search tool, that random chance just slows things down and is ineffective as a machine training tool. Yet Cormack and Grossman took the time to prove the obvious because the legal profession is being led astray. Many are actually using chance as if that were a valid search method, although perhaps not in the way they describe. As Cormack and Grossman explained in their report:

While it is perhaps no surprise to the information retrieval community that active learning generally outperforms random training [22], this result has not previously been demonstrated for the TAR Problem, and is neither well known nor well accepted within the legal community.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 8.

As this quoted comment suggests, everyone in the information science search community already knew that the random-only approach to search is inartful. So do most lawyers, especially the ones with years of hands-on experience in searching for relevant ESI. So why in the world is random-only search still promoted by some software companies and their customers? Is it really to address the so-called problem of “not knowing what you don’t know”? That is the alleged inherent bias of using knowledge to program the AI. The total-random approach is also supposed to prevent overt, intentional bias, where lawyers might try to mis-train the AI search algorithm on purpose. These may be the reasons stated by vendors, but there must be other reasons, because these excuses do not hold water. This was addressed in Part One of this article.

This bias-avoidance claim must just be an excuse, because there are many better ways to counter the myopic effects of a search driven too narrowly. There are many methods and software enhancements that can be used to avoid overlooking important, not-yet-discovered types of relevant documents. For instance, allow machine selection of uncertain documents, as was done here with the SAL protocol. You could also include some random document selection in the mix, and not just make the whole thing random. It is not all or nothing, not logically at least, but perhaps it is as a practical matter for some software.

My preferred solution to the problem of “not knowing what you don’t know” is to use a combination of all those methods, buttressed by a human searcher who is aware of the limits of knowledge. I mean, really! The whole premise behind using random as the only way to avoid a self-looping trap of “not knowing what you don’t know” assumes that the lawyer searcher is a naive boob or dishonest scoundrel. It assumes lawyers are unaware that they don’t know what they don’t know. Please, we know that perfectly well. All experienced searchers know that. This insight is not the exclusive knowledge of engineers and scientists. Very few attorneys are that arrogant and self-absorbed, or that naive and simplistic in their approach to search.

No, this whole “you must use random-only search to avoid prejudice” line is just a smoke screen to hide the real reason a vendor sells software that only works that way. The real reason is that poor software design decisions were made in a rush to get predictive coding software to market. Software was designed to use only random search because it was easy and quick to build software like that. It allowed for quick implementation of machine training. Such simplistic types of AI software may work better than poorly designed keyword searches, but they are still far inferior to more complex machine training systems, as Cormack and Grossman have now proven. They are inferior to a multimodal approach.

The software vendors with random-only training need to move on. They need to invest in their software to adopt a multimodal approach. In fact, it appears that many have already done so, or are in the process. Yes, such software enhancements take time and money to implement. But we need software search tools for adults. Stop all of the talk about easy buttons. Lawyers are not simpletons. We embrace hard work. We are masters of complexity. Give us choices. Empower the software so that more than one method can be used. Do not force us to use only random selection.

We need software tools that respect the ability of attorneys to perform effective searches for evidence. This is our sand box. That is what we attorneys do, we search for evidence. The software companies are just here to give us tools, not to tell us how to search. Let us stop the arguments and move on to discuss more sophisticated search methods and tools that empower complex methods.

Attorneys want software with the capacity to integrate all search functions, including random, into a multimodal search process. We do not want software with only one type of machine training ability, be it CAL, SAL or SPL. We do not want software that can only do one thing, and then have the vendor build a false ideology around that one capacity that says their method is the best and only way. These are legal issues, not software issues.

Attorneys do not just want one search tool, we want a whole tool chest. The marketplace will sort out whose tools are best, so will science. For vendors to remain competitive they need to sell the biggest tool chest possible, and make sure the tools are well built and perform as advertised. Do not just sell us a screwdriver and tell us we do not need a hammer and pliers too.

Leave the legal arguments as to reasonability and rules to lawyers. Just give us the tools and we lawyers will find the evidence we need. We are experts at evidence detection. It is in our blood. It is part of our proud heritage, our tradition.

Finding evidence is what lawyers do. The law has been doing this for millennia. Think back to the story of the judicial decision of King Solomon. He decided to award the child to the woman he saw cry in response to his sham decision to cut the baby in half. He based his decision on the facts, not ideology. He found the truth in clever ways built around facts, around evidence.

Lawyers always search to find evidence so that justice can be done. The facts matter. It has always been an essential part of what we do. Lawyers always adapt with the times. We always demand and use the best tools available to do our job. Just think of Abraham Lincoln, who readily used the telegraph, the great new high-tech invention of his day. When you want to know the truth of what happened in an event that took place in the recent past, you hire a lawyer, not an engineer or a scientist. That is what we are trained to do. We separate the truth from the lies. With great tools we can and will do an even better job.

Many vendors of multimodal software already understand all of this. They build software that empowers attorneys to leverage their knowledge and skills. That is why we use their tools. Empowerment of attorneys with the latest AI tools empowers our entire system of justice. That is why the latest Cormack Grossman study is so important. That is why I am so passionate about this. Join with us in this. Demand diversity and many capacities in your search software, not just one.

Vendor Wake Up Call and Plea for Change

My basic message to all manufacturers of predictive coding software who use only one type of machine training protocol is to change your ways. I mean no animosity at all. Many of you have great software already; it is just the monomodal method built into your predictive coding features that I challenge. This is a plea for change, for diversity. Sell us a whole tool chest, not just a single, super-simple tool.

Yes, upgrading software takes time and money. But all software companies need to do that anyway to continue to supply tools to lawyers in the Twenty-First Century. Take this message as both a wake up call and a respectful plea for change.

Dear software designers: please stop trying to make the legal profession look only under the random lamp. Treat your attorney customers like mature professionals who are capable of complex analysis and skills. Do not just assume that we do not know how to perform sophisticated searches. I am not the only attorney with multimodal search skills. I am just the only one with a blog who is passionate about it. There are many out there with very sophisticated skills and knowledge. They may not be as old (I prefer to say experienced) and loud mouthed (I prefer to say outspoken) as I am, but they are just as skilled. They are just as talented. More importantly, their numbers are growing rapidly. It is a generation thing too, you know. Your next generation of lawyer customers is just as comfortable with computers and big data as I am, maybe more so. Do you really doubt that Adam Losey and his generation will surpass our accomplishments with legal search? I don’t.

Dear software designers: please upgrade your software and get with the multi-feature program. Then you will have many new customers, and they will be empowered customers. Do not have the money to do that? Show your CEO this article. Lawyers are not stupid. They are catching on, and they are catching on fast. Moreover, these scientific experiments and reports will keep coming too. The truth will come out. Do you want to survive the inevitable vendor closures and consolidation? Then you need to invest in more sophisticated, fully featured software. Your competitors are.

Dear software designers: please abandon the single-feature approach, and then you will be welcome in the legal search sandbox. I know that the limited-functionality software that some of you have created is really very good. It already has many other search capacities. It just needs to be better integrated with predictive coding. Apparently some single-feature software already produces decent results, even with the handicap of random-only training. Continue to enhance and build upon your software. Invest in the improvements needed to allow for full multimodal, active, judgmental search.

Conclusion

A random-only method of selecting predictive coding training documents is ineffective. The same applies to any other training method if it is applied to the exclusion of all others. Any experienced searcher knows this. Software that relies solely on a random-only method should be enhanced and modified to allow attorneys to search where they know. All types of training techniques should be built into AI-based software, not just random. Random may be easy, but it is foolish to search only under the lamp post. It is foolish to turn a blind eye to what you know. Attorneys, insist on having your own flashlight that empowers you to look wherever you want. Shine your light wherever you think appropriate. Use your knowledge. Equip yourself with a full tool chest that allows you to do that.


Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Three

July 27, 2014

This is part three of what has now become a four part blog: Latest Grossman and Cormack Study Proves Folly of Using Random Search for Machine Training – Part One and Part Two.

Professor Gordon Cormack

Yes, my article on this experiment and report by Professor Gordon Cormack and attorney Maura Grossman is rapidly becoming as long as the report itself, and, believe it or not, I am not even going into all of the aspects in this very deep, multifaceted study. I urge you to read the report. It is a difficult read for most, but worth the effort. Serious students will read it several times. I know I have. This is an important scientific work presenting unique experiments that tested common legal search methods.

The Cormack Grossman paper was peer reviewed by other scientists and presented at the major event for information retrieval scientists, called the annual ACM SIGIR conference. ACM is the Association for Computing Machinery, the world’s largest educational and scientific computing society. SIGIR is the Special Interest Group On Information Retrieval section of ACM. Hundreds of scientists and academics served on organizing committees for the 2014 SIGIR conference in Australia. They came from universities and large corporate research labs from all over the world, including Google, Yahoo, and IBM. Here is a list with links to all of the papers presented.

All attorneys who do legal search should at least have a rudimentary understanding of the findings of Cormack and Grossman on the predictive coding training methods analyzed in this report. That is why I am making this sustained effort to provide my take on it, and make their work a little more accessible. Maura and Gordon have, by the way, generously given of their time to try to ensure that my explanations are accurate. Still, any mistakes made on that account are solely my own.

Findings of Cormack Grossman Study

Here is how Cormack and Grossman summarize their findings:

The results presented here do not support the commonly advanced position that seed sets, or entire training sets, must be randomly selected [19, 28] [contra 11]. Our primary implementation of SPL, in which all training documents were randomly selected, yielded dramatically inferior results to our primary implementations of CAL and SAL, in which none of the training documents were randomly selected.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pgs. 7-8.

CAL v. SPL

Now for the details of the results comparing the previously described methods of CAL, SAL and SPL. First, let us examine the comparison between the CAL and SPL machine training methods. To refresh your memory, CAL is a simplistic type of multimodal training method wherein two methods are used. Keyword search results are used in the first round of training. In all following rounds, the highest probability ranked documents are used. SPL is a pure random, monomodal method. With SPL all training documents in all rounds are selected by random sampling.
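To make the two protocols concrete, here is a minimal sketch of the selection loops in Python. It is my own illustration, not the authors’ code or any vendor’s implementation: a generic logistic regression classifier and a synthetic collection stand in for whatever learning engine and documents a real tool would use.

```python
# Hedged sketch of SPL (random-only) vs. CAL (relevance feedback) training.
# Everything here is illustrative: synthetic "documents," a generic classifier,
# and made-up batch sizes. Real tools differ in all of these details.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy collection: 20,000 documents, roughly 1% prevalence of relevant documents.
# In a real review the labels come from attorney coding, not a lookup table.
n_docs, n_features = 20_000, 50
y_true = (rng.random(n_docs) < 0.01).astype(int)
X = rng.normal(size=(n_docs, n_features)) + y_true[:, None] * 0.75

# Stand-in "keyword" seed set: a handful of known relevant and irrelevant docs.
seed = np.concatenate([np.where(y_true == 1)[0][:5], np.where(y_true == 0)[0][:95]])

def run_protocol(select_next, rounds=20, batch=100):
    """Code a batch, retrain, select the next batch; report relevant docs found."""
    labeled = list(seed)
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000)
        model.fit(X[labeled], y_true[labeled])      # the reviewer's coding so far
        scores = model.predict_proba(X)[:, 1]       # predicted relevance ranking
        pool = np.setdiff1d(np.arange(n_docs), labeled)
        labeled.extend(select_next(pool, scores, batch))
    return int(y_true[labeled].sum())

def spl_select(pool, scores, batch):                # SPL: random selection only
    return rng.choice(pool, size=batch, replace=False)

def cal_select(pool, scores, batch):                # CAL: highest-ranked documents
    return pool[np.argsort(scores[pool])[::-1][:batch]]

print("SPL (random only) found:", run_protocol(spl_select))
print("CAL (relevance feedback) found:", run_protocol(cal_select))
```

On a toy collection like this, the relevance-feedback loop typically finds far more of the relevant documents for the same review effort, which mirrors the pattern the study reports.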


Cormack and Grossman found that the “CAL protocol achieves higher recall than SPL, for less effort, for all of the representative training-set sizes.” Id. at pg. 4. This means you can find more relevant documents using CAL than a random method, and you can do so faster and thus with less expense.

To drill down even deeper into their findings it is necessary to look at the graphs in the report that show how the search progressed through all one-hundred rounds of training and review for various document collections. This is shown for CAL v. SPL in Figure 1 of the report. Id. at pg. 5. The line with circle dots at the top of each graph plots the retrieval rate of CAL, the clear winner on each of the eight search tasks tested. The other three lines show the random approach, SPL, using three different training-set sizes. Cormack and Grossman summarize the CAL v. SPL findings as follows:

After the first 1,000 documents (i.e., the seed set), the CAL curve shows a high slope that is sustained until the majority of relevant documents have been identified. At about 70% recall, the slope begins to fall off noticeably, and effectively plateaus between 80% and 100% recall. The SPL curve exhibits a low slope for the training phase, followed by a high slope, falloff, and then a plateau for the review phase. In general, the slope immediately following training is comparable to that of CAL, but the falloff and plateau occur at substantially lower recall levels. While the initial slope of the curve for the SPL review phase is similar for all training-set sizes, the falloff and plateau occur at higher recall levels for larger training sets. This advantage of larger training sets is offset by the greater effort required to review the training set: In general, the curves for different training sets cross, indicating that a larger training set is advantageous when high recall is desired.
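For readers less familiar with gain curves, here is a tiny sketch, mine and not from the report, of how such a curve is computed: cumulative recall after each document reviewed, taken in review order.

```python
# Sketch of computing a gain curve: recall after each document reviewed,
# taken in review order (1 = relevant, 0 = not relevant). Illustrative only.
import numpy as np

def gain_curve(labels_in_review_order):
    labels = np.asarray(labels_in_review_order)
    return np.cumsum(labels) / labels.sum()   # recall after each reviewed doc

# Made-up review order for ten documents, four of them relevant.
print(gain_curve([1, 1, 0, 1, 0, 0, 1, 0, 0, 0]))
# -> [0.25 0.5  0.5  0.75 0.75 0.75 1.   1.   1.   1.  ]
```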

CAL v. SAL

The Cormack Grossman experiment also compared the CAL and SAL methods. Recall that the SAL method is another simple multimodal method where only two methods are used to select training documents. Keywords are again used in the first round only, just like the CAL protocol. Thereafter, in all subsequent rounds of training, machine-selected documents are used based on the machine’s uncertainty of classification. That means the training is focused on the mid-range ranked documents about which the machine is most uncertain.
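The uncertainty selection rule itself is easy to state in code. A minimal sketch, mine and not the authors’: pick the documents whose predicted probability of relevance sits closest to 0.5.

```python
# Sketch of SAL's selection rule: the documents the classifier is least sure
# about are those with predicted relevance probabilities nearest to 0.5.
import numpy as np

def sal_select(pool, scores, batch):
    uncertainty = np.abs(scores[pool] - 0.5)         # distance from "no idea"
    return pool[np.argsort(uncertainty)[:batch]]

# Usage with made-up scores for six candidate documents.
scores = np.array([0.95, 0.52, 0.10, 0.47, 0.88, 0.03])
print(sal_select(np.arange(6), scores, batch=2))     # -> [1 3]
```

Swapping a rule like this in for the top-ranked selection in the earlier CAL sketch would turn that loop into a SAL-style protocol, apart from the separate question of when to stop training.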


Cormack and Grossman found that “the CAL protocol generally achieves higher recall than SAL,” but the results were closer and more complex. Id. At one point in the training SAL became as good as CAL; it achieved a specific recall value with nearly the same effort as CAL from that point forward. The authors found that this was because many high probability documents began to be selected by the machine as uncertainty-selected documents. This happened after all of the mid-scoring documents had been used up. In other words, at some point the distinction between the two methods decreased, and more high probability documents were used in SAL, in almost the same way they were used in CAL. That allowed SAL to catch up with CAL and, in effect, become almost as good.

This catch up point is different in each project. As Cormack and Grossman explain:

Once stabilization occurs, the review set will include few documents with intermediate scores, because they will have previously been selected for training. Instead, the review set will include primarily high-scoring and low-scoring documents. The high-scoring documents account for the high slope before the inflection point; the low-scoring documents account for the low slope after the inflection point; the absence of documents with intermediate scores accounts for the sharp transition. The net effect is that SAL achieves effort as low as CAL only for a specific recall value, which is easy to see in hindsight, but difficult to predict at the time of stabilization.

This inflection point and other comparisons can be easily seen in Figure 2 of the report (shown below). Id. at pg. 6. Again, the line with circle dots at the top of each graph, the one that always starts off fastest, plots the retrieval rate of CAL. Again, it does better in each of the eight search tasks tested. The other three lines show the uncertainty approach, SAL, using three different training-set sizes. CAL does better than SAL in all eight of the matters, but the differences are not nearly as great as in the comparison between CAL and SPL.

Cormack and Grossman summarize the CAL v. SAL findings as follows:

Figure 2 shows that the CAL protocol generally achieves higher recall than SAL. However, the SAL gain curves, unlike the SPL gain curves, often touch the CAL curves at one specific inflection point. The strong inflection of the SAL curve at this point is explained by the nature of uncertainty sampling: Once stabilization occurs, the review set … (see quote above for the rest of this sentence.)

This experiment compared one type of simple multimodal machine training method with another. It found that with the data sets tested, and other standard procedures set forth in the experiment, the method which used high ranking documents for training, what William Webber calls the Relevance method, performed somewhat better than the method that used mid-ranked documents, what Webber calls the Uncertainty method.

This does not mean that the uncertainty method should be excluded from a full multimodal approach in real world applications. It just means that here, in this one experiment, albeit a very complex and multifaceted experiment, the relevance method outperformed the uncertainty method.

I have found that in the real world of very complex (messy even) legal searches, it is good to use both high and mid-ranked documents for training, what Cormack and Grossman call CAL and SAL, and what Webber calls Relevance and Uncertainty training. It all depends on the circumstances, including the all-important cost component. In the real world you use every method you can think of to help you find what you are looking for, not just one or two, but dozens.
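As a rough illustration of what that mixing can look like in code, here is a hedged sketch of composing a single training batch from several sources at once. The proportions and function names are my own assumptions for illustration, not a prescription and not any vendor’s feature.

```python
# Hedged sketch of a multimodal training batch: mix top-ranked, uncertain,
# keyword-hit, and a few random documents. Proportions are illustrative only.
import numpy as np

def multimodal_batch(pool, scores, keyword_hits, batch=100, rng=None):
    rng = rng or np.random.default_rng()
    by_rank = pool[np.argsort(scores[pool])[::-1]]
    top = by_rank[: batch // 2]                                   # relevance feedback
    uncertain = pool[np.argsort(np.abs(scores[pool] - 0.5))][: batch // 4]
    keywords = np.intersect1d(keyword_hits, pool)[: batch // 8]   # keyword hits
    lucky = rng.choice(pool, size=batch // 8, replace=False)      # a dash of chance
    return np.unique(np.concatenate([top, uncertain, keywords, lucky]))

# Usage with a made-up 1,000-document pool and random scores.
rng = np.random.default_rng(1)
pool, scores = np.arange(1_000), rng.random(1_000)
keyword_hits = rng.choice(1_000, size=50, replace=False)
print(len(multimodal_batch(pool, scores, keyword_hits, rng=rng)))
```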

Grossman and Cormack know this very well too, which I know from private conversations with them on this, and also from the conclusion to their report:

There is no reason to presume that the CAL results described here represent the best that can be achieved. Any number of feature engineering methods, learning algorithms, training protocols, and search strategies might yield substantive improvements in the future. The effect of review order and other human factors on training accuracy, and thus overall review effectiveness, may also be substantial.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 9.


My practical takeaway from the Cormack Grossman experiment is that focusing on high ranking documents is a powerful search method. It should be given significant weight in any multimodal approach, especially when the goal is to quickly find as many relevant documents as possible. The “continuous” training aspect of the CAL approach is also intriguing, that is, you keep doing machine training throughout the review project and batch the reviews accordingly. This could become a project management issue, but, if you can pull it off within proportionality and requesting-party constraints, it just makes common sense to do so. You might as well get as much help from the machine as possible and keep getting its probability predictions for as long as you are still doing reviews and can make last minute batch assignments accordingly.

I have done several reviews in such a continuous training manner without really thinking about the fact that the machine training was continuous, including my first Enron experiment. Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron. But this causes me to rethink the flow chart shown below that I usually use to explain the predictive coding process. The work flow shown is not a CAL approach, but rather a SAL type of approach where there is a distinct stop in training after step five, and the review work in step seven is based on the last rankings established in step five.

[Figure: standard predictive coding workflow diagram]

The continuous work flow is slightly more difficult to show in a diagram, and to implement, but it does make good common sense if you are in a position to pull it off. Below is the revised workflow to update the language and show how the training continues throughout the review.

[Figure: revised predictive coding workflow with continuous training]

Machine training is still done in steps four and five, but then continues in steps four, five and seven. There are other ways it could be implemented of course, but this is the CAL approach I would use in a review project where such complex batching and continuous training otherwise makes sense. Of course, it is not necessary in any project where the review in steps four and five effectively finds all of the relevant documents required. This is what happened in my Enron experiment. Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron. There was no need to do a proportional final review, step seven, because all the relevant documents had already been reviewed as part of the machine training review in steps four and five. In the Enron experiment I skipped step seven and went right from step six to step eight, production. I have been able to do this in other projects as well.

Strengths of a Relevancy Weighted Type of CAL

The findings in this experiment as to the strengths of using Relevancy training confirm what I have seen in most of my search projects. I usually start with the high end documents to quickly help me to teach the machine what I am looking for. I find that this is a good way to start training. Again, it just makes common sense to do so. It is somewhat like teaching a human, or a dog for that matter. You teach the machine relevance classification by telling it when it is right (positive reinforcement), and when it is wrong. This kind of feedback is critical in all learning. In most projects this kind of feedback on predictions of highly probable relevance is the fastest way to get to the most important documents. For those reasons I agree with Cormack and Grossman’s conclusion that CAL is a superior method to quickly find the most relevant documents:

CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

Id. But then again, I would never rely on Relevancy CAL type searches alone. They get results fast, but also tend to lead to a somewhat myopic focus on the high end, where you may miss new, different types of relevant documents. For that reason, I also use SAL types of searches to include the mid-range documents from the Uncertainty method. That is an important method to help the machine better understand what documents I am looking for. As Cormack and Grossman put it:

The underlying objective of CAL is to find and review as many of the responsive documents as possible, as quickly as possible. The underlying objective of SAL, on the other hand, is to induce the best classifier possible, considering the level of training effort. Generally, the classifier is applied to the collection to produce a review set, which is then subject to manual review.

Id. at 8.

Similarity and other concept type search methods are also a good way to quickly find as many responsive documents as possible. So too are keyword searches, and not just in the first round, but for any round. Further, this experiment, which is already very complex (to me at least), does not include the important real world component of highly relevant versus merely relevant documents. I never just train on relevancy alone, but always include a hunt for the hot documents. I want to try to train the machine to understand the difference between the two classifications. Cormack and Grossman do not disagree. As they put it, “any number of feature engineering methods, learning algorithms, training protocols, and search strategies” could improve upon a CAL only approach.

There are also ways to improve the classifier in addition to focusing on mid-range probability documents, although I have found that the uncertainty method is the best way to improve relevance classifications. It also helps to be sure your training on the low end is right, meaning review of some of the high probability irrelevant documents. Both relevant and irrelevant training documents are helpful. Personally, I also like to include some random selections, especially at first, to be sure I did not miss any outlier-type documents, and to be sure I have a good feel for the irrelevant documents of these custodians too. Yes, chance has its place too, so long as it does not take over and become the whole show.

Supplemental Findings on Random Search

In addition to comparing CAL with SAL and SPL, Cormack and Grossman experimented with what would happen to the effectiveness of both the CAL and SAL protocols if more random elements were added to the methods. They experimented with a number of different variables, including substituting random selection, instead of keyword, for the initial round of training (seed set).

As you would expect, the general result was to decrease the effectiveness of every search method wherein random was substituted, whether for keyword, high-ranking relevance, or mid-ranking relevance (uncertainty) selection. The negative impact was strongest in datasets where prevalence was low, which is typical in litigation. Cormack and Grossman tested eight datasets where the prevalence of responsive documents varied from 0.25% to 3.92%, which, as they put it, “is typical for the legal matters with which we have been involved.” The size of the sets tested ranged from 293,000 documents to just over 1.1 million. The random-based search of the lowest prevalence dataset tested, matter 203, the one with a 0.25% prevalence rate, was, in their words, a spectacular failure. Conversely, the negative impact was lessened with higher prevalence datasets. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 7.
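A quick back-of-the-envelope calculation shows why low prevalence is so punishing for random selection. The prevalence figures are from the study; the 1,000-document seed-set size is my own assumption for illustration.

```python
# Expected number of relevant documents in a random seed set, at the
# prevalence levels reported in the study. Seed-set size is assumed.
seed_size = 1_000
for prevalence in (0.0025, 0.0392):       # 0.25% and 3.92%, from the study
    expected = prevalence * seed_size
    print(f"prevalence {prevalence:.2%}: ~{expected:.1f} relevant documents "
          f"expected in a {seed_size:,}-document random seed set")
```

At 0.25% prevalence, a 1,000-document random seed set would be expected to contain only two or three relevant documents, which leaves the learning algorithm almost nothing to learn from.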

Cormack and Grossman responded to the popular misconception that predictive coding does not work in such low prevalence datasets.

Others assert that these are examples of “low-prevalence” or “low-richness” collections, for which TAR is unsuitable [19]. We suggest that such assertions may presuppose an SPL protocol [11], which is not as effective on low-prevalence datasets. It may be that SPL methods can achieve better results on higher-prevalence collections (i.e., 10% or more responsive documents).

Id. at 9.

In fact, information scientists have been working with low prevalence datasets for decades, which is one reason Professor Cormack had a ready collection of pre-coded documents by which to measure recall, a so-called gold standard of assessments from prior studies. Cormack and Grossman explain that the lack of pre-tested datasets with high prevalence is the reason they did not use such collections for testing. They also speculate that if such high prevalence datasets are tested, then the random only (SPL) method would do much better than it did in the low prevalence datasets they used in their experiment.

However, no such collections were included in this study because, for the few matters with which we have been involved where the prevalence exceeded 10%, the necessary training and gold-standard assessments were not available. We conjecture that the comparative advantage of CAL over SPL would be decreased, but not eliminated, for high-prevalence collections.

Id.

They are probably right: if the datasets have a higher prevalence, then the chances are that random samples will find more relevant documents for training. But that still does not make the blind draw a better way to find things than looking with your eyes wide open. Plus, the typical way to attain high yield datasets is by keyword filtering out large segments of the raw data before beginning a predictive coding search. When you keyword filter like that before beginning machine training, the chances are you will leave behind a significant portion, if not most, of the relevant documents. Keyword filtering often has low recall, or, when broad enough to include most of the relevant documents, it is very imprecise. Then you are back to the same low prevalence situation.
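A simple arithmetic illustration of that point, with assumed numbers rather than figures from the study: whatever recall the keyword cull achieves becomes a hard ceiling on the recall of any review that follows.

```python
# Assumed numbers, for illustration only: recall lost to keyword culling
# caps the overall recall, no matter how good the later TAR review is.
keyword_cull_recall = 0.60   # assumed share of relevant docs the keywords keep
tar_review_recall = 0.95     # assumed recall of the predictive coding review
print(f"Best possible overall recall: {keyword_cull_recall * tar_review_recall:.0%}")
# -> Best possible overall recall: 57%
```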

Better to limit filtering before machine training to obviously irrelevant ESI, or ESI not appropriate for training, such as non-text files like photos, music and voice mail. Use other methods to search for those types of ESI. But do not use keyword filtering on text documents simply to create an artificially high prevalence just because the random-based software you use will only work that way. That is the tail wagging the dog.

For more analysis and criticism on using keywords to create artificially high prevalence, a practice Cormack and Grossman call Collection Enrichment, see another excellent article they wrote: Comments on “The Implications of Rule 26(g) on the Use of Technology-Assisted Review,” 7 Federal Courts Law Review 286 (2014) at pgs. 293-295, 300-301. This article also contains good explanations of the instant study with CAL, SAL and SPL. See especially Table 1 at pg. 297.

The negative impact of random elements on machine training protocols is a no duh to experienced searchers. See, e.g., the excellent series of articles by John Tredennick, including his review of the Cormack Grossman study: Pioneering Cormack/Grossman Study Validates Continuous Learning, Judgmental Seeds and Review Team Training for Technology Assisted Review.

It never helps to turn to lady luck, to random chance, to improve search. Once you start relying on dice to decide what to do, you are just spinning your wheels.

Supplemental Findings on Keywords and Random Search

Cormack and Grossman also tested what would happen if keywords were used instead of random selections, even when the keywords were not tested first against the actual data. This poor practice of using unverified keywords is what I call the Go Fish approach to keyword search. Child’s Game of “Go Fish” is a Poor Model for e-Discovery Search (October 2009). Under this naive approach attorneys simply guess what keywords might be contained in relevant documents without testing how accurate their guesses are. It is a very simplistic approach to keyword search, yet it is still widely employed in the legal profession. This approach has been criticized by many, including Judge Andrew Peck in his excellent Gross Construction opinion, the so-called wake-up call for NY attorneys on search. William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009).

Cormack and Grossman also tested what would happen if such naive keyword selections were used instead of the high or mid probability methods (CAL and SAL) for machine training. The naive keywords used in these supplemental comparison tests did fairly well. This is consistent with my multimodal approach, where all kinds of search methods are used in all rounds of training.

The success of naive keyword selection for machine training is discussed by Cormack and Grossman as an unexpected finding (italics and parens added):

Perhaps more surprising is the fact that a simple keyword search, composed without prior knowledge of the collection, almost always yields a more effective seed set than random selection, whether for CAL, SAL, or SPL. Even when keyword search is used to select all training documents, the result is generally superior to that achieved when random selection is used. That said, even if (random) passive learning is enhanced using a keyword-selected seed or training set, it (passive learning) is still dramatically inferior to active learning. It is possible, in theory, that a party could devise keywords that would render passive learning competitive with active learning, but until a formal protocol for constructing such a search can be established, it is impossible to subject the approach to a controlled scientific evaluation. Pending the establishment and scientific validation of such a protocol, reliance on keywords and (random) passive learning remains a questionable practice. On the other hand, the results reported here indicate that it is quite easy for either party (or for the parties together) to construct a keyword search that yields an effective seed set for active learning.

Id. at 8.

Cormack and Grossman summarize their findings on the impact of keywords in the first round of training (seed set) on CAL, SAL and SPL:

In summary, the use of a seed set selected using a simple keyword search, composed prior to the review, contributes to the effectiveness of all of the TAR protocols investigated in this study.

Keywords still have an important place in any multimodal, active, predictive coding protocol. This is, however, completely different from using keywords, especially untested naive keywords, to filter out the raw data in a misguided attempt to create high prevalence collections, all so that the random method (passive) might have some chance of success.
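For completeness, here is a toy sketch of what a keyword-selected seed set amounts to in code. The keywords and documents are invented for illustration only.

```python
# Toy sketch of building a keyword-selected seed set (hypothetical terms and
# documents); this is the kind of seed the study found beats a random one.
keywords = {"invoice", "kickback", "offshore"}
docs = {
    101: "Please wire the offshore payment today.",
    102: "Lunch on Friday?",
    103: "Attached is the invoice for the consulting fee.",
}
seed_set = [doc_id for doc_id, text in docs.items()
            if any(k in text.lower() for k in keywords)]
print(seed_set)   # -> [101, 103]
```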

To be continued . . . in Part Four I will conclude with final opinions and analysis and my friendly recommendations for any vendors still using random-only training protocols. 

