Reinventing the Wheel: My Discovery of Scientific Support for “Hybrid Multimodal” Search

April 21, 2013

Getting predictive coding software is just part of the answer to the high cost of legal review. Much more important is how you use it, which in turn depends, at least in part, on which software you get. That is why I have been focusing on methods for using the new technologies. I have been advocating for what I call the hybrid multimodal method. I created this method on my own over many years of legal discovery. As it turns out, I was merely reinventing the wheel. These methods are already well-established in the scientific information retrieval community. (Thanks to information scientist Jeremy Pickens, an expert in collaborative search, who helped me to find the prior art.)

In this blog I will share some of the classic information science research that supports hybrid multimodal. It includes the work of  Gary Marchionini, Professor and Dean of the School of Information and Library Sciences of U.N.C. at Chapel Hill, and UCLA Professor Marcia J. Bates who has advocated for a multimodal approach to search since 1989. Study of their writings has enabled me to better understand and refine my methods. I hope you will also explore with me the literature in this field. I provide links to some of the books and literature in this area for your further study.

Advanced CARs Require Completely New Driving Methods

First I need to set the stage for this discussion by use of the eight-step diagram shown below. This is one of the charts I created to teach the workflow I use in a typical computer assisted review (CAR) project. You have seen it here many times before. For a full description of the eight steps see the Electronic Discovery Best Practices page on predictive coding.

predictive coding work flow

The iterated steps four and five in this work-flow are unique to predictive coding review. They are where active learning takes place. The Grossman-Cormack Glossary defines active learning as:

An Iterative Training regimen in which the Training Set is repeatedly augmented by additional Documents chosen by the Machine Learning Algorithm, and coded by one or more Subject Matter Expert(s).

The Grossman-Cormack Glossary of Technology-Assisted Review, 2013 Fed. Cts. L. Rev. 7 (2013).

Beware of any so-called advanced review software that does not include these steps; it is not a bona fide predictive coding search engine. My preferred active learning process is threefold (a rough code sketch follows the three steps below):

1.  The computer selects documents for review where the software classifier is uncertain of the correct classification. This helps the classifier algorithms to learn by adding diversity to the documents presented for review. This in turn helps to locate outliers of a type your initial judgmental searches in step two (and  five) of the above diagram have missed. This is machine-selected sampling, and, according to a basic text in information retrieval engineering, a process is not a bona fide active learning search without this ability. Manning, Raghavan and Schutze, Introduction to Information Retrieval, (Cambridge, 2008) at pg. 309.

2.  Some reasonable percentage of the documents presented for human review in step five are selected at random. This again helps maximize recall and avoid premature focus on the relevant documents initially retrieved.

3.  Other relevant documents are found by a skilled reviewer using a variety of search techniques. This is called judgmental sampling. After the first round of training, a/k/a the seed set, judgmental sampling by a variety of search methods is used, based on the machine-selected or randomly selected documents presented for review. Sometimes the subject matter expert (“SME”) human reviewer may follow a new search idea unrelated to the documents presented. Any kind of search can be used for judgmental sampling, which is why I call it a multimodal search. This may include some linear review of selected custodians or dates, parametric Boolean keyword searches, similarity searches of all kinds, concept searches, as well as several unique predictive coding probability searches.
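To make the threefold selection concrete, here is a minimal sketch, in Python, of how one training batch might be assembled from these three sources. It is a generic illustration of the idea only, not the logic of any particular predictive coding product; the function, the proportions, and the variable names are all my own assumptions.

```python
import random

def build_training_batch(ranked_docs, judgmental_hits, batch_size=100,
                         uncertain_share=0.7, random_share=0.1):
    """Assemble one active learning training batch from three sources:
    1. machine-selected documents the classifier is least certain about,
    2. a small random draw to guard against tunnel vision,
    3. documents found by the reviewer's own judgmental searches.
    ranked_docs is a list of (doc_id, probability_of_relevance) pairs."""
    # 1. Uncertainty sampling: documents closest to the 50% decision boundary.
    by_uncertainty = sorted(ranked_docs, key=lambda d: abs(d[1] - 0.5))
    uncertain = [doc_id for doc_id, _ in by_uncertainty[:int(batch_size * uncertain_share)]]
    uncertain_set = set(uncertain)

    # 2. Random sampling: a reasonable percentage drawn blindly from the rest.
    remaining = [doc_id for doc_id, _ in ranked_docs if doc_id not in uncertain_set]
    random_pick = random.sample(remaining, min(int(batch_size * random_share), len(remaining)))

    # 3. Judgmental sampling: whatever the human reviewer found by keyword,
    #    concept, similarity, or other multimodal searches.
    judgmental = list(judgmental_hits)[: batch_size - len(uncertain) - len(random_pick)]

    return uncertain + random_pick + judgmental
```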

The initial seed set generation, step two in the chart, should also use some random samples, plus judgmental multimodal searches. Steps three and six in the chart always use pure random samples and rely on statistical analysis. For more on the three types of sampling see my blog, Three-Cylinder Multimodal Approach To Predictive Coding.

My insistence on the use of multimodal judgmental sampling in steps two and five to locate relevant documents follows the consensus view of information scientists specializing in information retrieval, but is not followed by several prominent predictive coding vendors. They instead rely entirely on machine-selected documents for training, or even worse, rely entirely on randomly selected documents to train the software. In my writings I call these processes the Borg approach, after the infamous villains in Star Trek, the Borg, an alien race that assimilates people. (I further differentiate between three types of Borg in Three-Cylinder Multimodal Approach To Predictive Coding.) Like the Borg, these approaches unnecessarily minimize the role of individuals, the SMEs. They exclude other types of search to supplement an active learning process. I advocate the use of all types of search, not just predictive coding.

Hybrid Human Computer Information Retrieval


In contradistinction to Borg approaches, where the machine controls the learning process, I advocate a hybrid approach where Man and Machine work together. In my hybrid CARs the expert reviewer remains in control of the process, and their expertise is leveraged for greater accuracy and speed. The human intelligence of the SME is a key part of the search process. In the scholarly literature of information science this hybrid approach is known as Human–computer information retrieval (HCIR).

The classic text in the area of HCIR, which I endorse, is Information Seeking in Electronic Environments (Cambridge 1995) by Gary Marchionini, Professor and Dean of the School of Information and Library Sciences of U.N.C. at Chapel Hill. Professor Marchionini speaks of three types of expertise needed for a successful information seeker:

1.  Domain Expertise. This is equivalent to what we now call SME, subject matter expertise. It refers to a domain of knowledge. In the context of law the domain would refer to particular types of lawsuits or legal investigations, such as antitrust, patent, ERISA, discrimination, trade-secrets, breach of contract, Qui Tam, etc. The knowledge of the SME on the particular search goal is extrapolated by the software algorithms to guide the search. If the SME also has System Expertise, and Information Seeking Expertise, they can drive the CAR themselves.   Otherwise, they will need a chauffeur with such expertise, one who is capable of learning enough from the SME to recognize the relevant documents.

2.  System Expertise. This refers to expertise in the technology system used for the search. A system expert in predictive coding would have a deep and detailed knowledge of the software they are using, including the ability to customize the software and use all of its features. In computer circles a person with such skills is often called a power-user. Ideally a power-user would have expertise in several different software systems.

3.  Information Seeking Expertise. This is a skill that is often overlooked in legal search. It refers to a general cognitive skill related to information seeking. It is based on both experience and innate talents. For instance, “capabilities such as superior memory and visual scanning abilities interact to support broader and more purposive examination of text.” Professor Marchionini goes on to say that: “One goal of human-computer interaction research is to apply computing power to amplify and augment these human abilities.” Some lawyers seem to have a gift for search, which they refine with experience, broaden with knowledge of different tools, and enhance with technologies. Others do not, or the gift is limited to interviews and depositions.

Id. at pgs. 66-69, with the quotes from pg. 69.

All three of these skills are required for an attorney to attain expertise in legal search today, which is one reason I find this new area of legal practice so challenging. It is difficult, but not impossible, unlike this Penrose triangle.

Penrose_triangle_Expertise

It is not enough to be an SME, or a power-user, or have a special knack for search. You have to be able to do it all. However, studies have shown that of the three skill-sets, System Expertise, which in legal search primarily means mastery of the particular software used, is the least important. Id. at 67. More important are the SMEs, those who have mastered a domain of knowledge. In Professor Marchionini’s words:

Thus, experts in a domain have greater facility and experience related to information-seeking factors specific to the domain and are able to execute the subprocesses of information seeking with speed, confidence, and accuracy.

Id. That is one reason that the Grossman Cormack glossary builds in the role of SMEs as part of their base definition of computer assisted review:

A process for Prioritizing or Coding a Collection of electronic Documents using a computerized system that harnesses human judgments of one or more Subject Matter Expert(s) on a smaller set of Documents and then extrapolates those judgments to the remaining Document Collection.

Grossman-Cormack Glossary at pg. 21 defining TAR.

According to Marchionini, Information Seeking Expertise, much like Subject Matter Expertise, is also more important than specific software mastery. Id. This may seem counter-intuitive in the age of Google, where an illusion of simplicity is created by typing in words to find websites. But legal search of user-created data is a completely different type of search task than looking for information from popular websites. In the search for evidence in a litigation, or as part of a legal investigation, special expertise in information seeking is critical, including especially knowledge of multiple search techniques and methods. Again quoting Professor Marchionini:

Expert information seekers possess substantial knowledge related to the factors of information seeking, have developed distinct patterns of searching, and use a variety of strategies, tactics and moves.

Id. at 70.

In the field of law this kind of information seeking expertise includes the ability to understand and clarify what the information need is, in other words, to know what you are looking for, and articulate the need into specific search topics. This important step precedes the actual search, but should thereafter continue as an integral part of the process. As one of the basic texts on information retrieval, written by Gordon Cormack et al., explains:

Before conducting a search, a user has an information need, which underlies and drives the search process. We sometimes refer to this information need as a topic …

Buttcher, Clarke & Cormack, Information Retrieval: Implementation and Evaluation of Search Engines (MIT Press, 2010) at pg. 5.

The importance of pre-search refining of the information need is stressed in the first step of the above diagram of my methods, ESI Discovery Communications. It seems very basic, but is often underappreciated, or overlooked entirely, in the litigation context. In legal discovery information needs are often vague and ill-defined, lost in overly long requests for production and adversarial hostility. In addition to concerted activity up front to define relevance, the issue of information need should be kept in mind throughout the project. Typically our understanding of relevance evolves as our understanding of what really happened in a dispute emerges and grows.

At the start of an e-discovery project we are almost never searching for specific known documents. We never know for sure what information we will discover. That is why the phrase information seeking is actually more appropriate for legal search than information retrieval. Retrieval implies that particular facts exist and are already known; we just need to look them up. Legal search is not like that at all. It is a process of seeking and discovery. Again quoting Professor Marchionini:

The term information seeking is preferred to information retrieval because it is more human oriented and open ended. Retrieval implies that the object must have been “known” at some point; most often, those people who “knew” it organized it for later “knowing” by themselves or someone else. Seeking connotes the process of acquiring knowledge; it is more problem oriented as the solution may or may not be found.

Information Seeking in Electronic Environments, supra at 5-6.

Legal search is a process of seeking information, not retrieving information. It is a process of discovery, not simple look-up of known facts. More often than not in legal search you find the unexpected, and your search evolves as it progresses. Concept shift happens. Or you find nothing at all. You discover that the requesting party has sent you hunting for Unicorns, for evidence that simply does not exist. For example, the plaintiff alleges discrimination, but a search through tens of thousands of defendant’s emails shows no signs of it.

Information scientists have been talking about the distinction between machine oriented retrieval and human oriented seeking for decades. The type of discovery search that lawyers do is referred to in the literature (without any specific mention of law or legal search) as exploratory search. See: White & Roth, Exploratory Search: Beyond the Query-Response Paradigm (Morgan & Claypool, 2009). Ryen W. White, Ph.D., a senior researcher at Microsoft Research, builds on the work of Marchionini and gives this formal definition of exploratory search:

Exploratory search can be used to describe an information-seeking problem context that is open-ended, persistent, and multi-faceted; and to describe information-seeking processes that are opportunistic, iterative, and multi-tactical. In the first sense, exploratory search is commonly used in scientific discovery, learning, and decision-making contexts. In the second sense, exploratory tactics are used in all manner of information seeking and reflect seeker preferences and experience as much as the goal.

Id. at 6. He could easily have added legal discovery to this list, but like most information scientists, seems unacquainted with the law and legal search.

White and Roth point out that exploratory search typically uses a multimodal (berrypicking) approach to information needs that begin as vague notions. A many-methods-approach helps the information need to evolve and become more distinct and meaningful over time. They contend that the information-seeking strategies need to be supported by system features and user interface designs, bringing humans more actively into the search process. Id. at 15. That is exactly what I mean by a hybrid process where lawyers are actively involved in the search process.

Proponents of the fully Borg approach have it all wrong. They use a look-up approach to legal search that relies as much as possible on fully automated systems. The user interface for this type of information retrieval software is designed to keep humans out of the search, all in the name of ease of use and impartiality. The software designers of these programs, typically engineers working without adequate input from lawyers, erroneously assume that e-discovery is just a retrieval task. They erroneously assume that predictive coding always starts with well-defined information needs that do not evolve with time. Some engineers and lit-support techs may fall for this myth, but all practicing lawyers know better. They know that legal discovery is an open-ended, persistent, and multi-faceted process of seeking.

Hybrid Multimodal Computer Assisted Review

Professor Marchionini notes that information seeking experts develop their own search strategies, tactics and moves. The descriptive name for the strategies, tactics and moves that I have developed for legal search is Hybrid Multimodal Computer Assisted Review Bottom Line Driven Proportional Strategy. See, e.g., Bottom Line Driven Proportional Review (2013). For a recent federal opinion approving this type of hybrid multimodal search and review, see In Re: Biomet M2a Magnum Hip Implant Products Liability Litigation (MDL 2391), Case No. 3:12-MD-2391 (N.D. Ind., April 18, 2013); also see Indiana District Court Approves Multimodal Computer Assisted Review.

I refer to this method as multimodal because, although the predictive coding type of searches predominate (shown on the below diagram as Intelligent Review or IR), other modes of search are also employed. As described, I do not rely entirely on random documents, or computer-selected documents. The other types of methods used in a multimodal process are shown in this search pyramid.

Pyramid Search diagram

Most information scientists I have spoken to agree that it makes sense to use multiple methods in legal search and not just rely on any single method. UCLA Professor Marcia J. Bates first advocated for using multiple search methods back in 1989, which she called berrypicking. Bates, Marcia J., The Design of Browsing and Berrypicking Techniques for the Online Search Interface, Online Review 13 (October 1989): 407-424. As Professor Bates explained in 2011 in Quora:

An important thing we learned early on is that successful searching requires what I called “berrypicking.” … Berrypicking involves 1) searching many different places/sources, 2) using different search techniques in different places, and 3) changing your search goal as you go along and learn things along the way. This may seem fairly obvious when stated this way, but, in fact, many searchers erroneously think they will find everything they want in just one place, and second, many information systems have been designed to permit only one kind of searching, and inhibit the searcher from using the more effective berrypicking technique.

This berrypicking approach, combined with HCIR exploratory search, is what I have found from practical experience works best with legal search. They are the Hybrid Multimodal aspects of my Computer Assisted Review Bottom Line Driven Method.

Conclusion

Now that we have shown that courts are very open to predictive coding, we need to move on to a different, more sophisticated discussion. We need to focus on analysis of different predictive coding search methods, the strategies, tactics and moves. We also need to understand and discuss what skill-sets and personnel are required to do it properly. Finally, we need to begin to discuss the different types of predictive coding software.

There is much more to discuss concerning the use of predictive coding than whether or not to make disclosure of seed sets or irrelevant training documents, although that, and court approval, are the only things most expert panels have talked about so far. The discussion on disclosure and work-product should continue, but let us also discuss the methods and skills, and, yes, even the competing software.

We cannot look to vendors alone for the discussion and analysis of predictive coding software and competing methods of use. Obviously they must focus on their own software. This is where independent practitioners have an important role to play in the advancement of this powerful new technology.

Join with me in this discussion by your comments below or send me ideas for proposed guest blogs. Vendors are of course welcome to join in the discussion, and they make great hosts for search SME forums. Vendors are an important part of any successful e-discovery team. You cannot do predictive coding review without their predictive coding software, and, as with any other IT product, some software is much better than others.


An Elusive Dialogue on Legal Search: Part Two – Hunger Games and Hybrid Multimodal Quality Controls

September 3, 2012

This is a continuation of last week’s blog, An Elusive Dialogue on Legal Search: Part One where the Search Quadrant is Explained. The quadrant and random sampling are not as elusive as Peeta Mellark in The Hunger Games, but almost. Indeed, as most of us lawyers did not major in math or information science, these new techniques can be hard to grasp. Still, to survive in the vicious games often played these days in litigation, we need to find a way. If we do, we can not only survive, we can win, even if we are from District 12 and the whole world is watching our every motion.

The emphasis in the second part of this essay is on quality controls and how such efforts, like search itself, must be multimodal and hybrid. We must use a variety of quality assurance methods – we must be multimodal. To use the Hunger Games analogy, we must use both bow and rope, and camouflage too. And we must employ both our skilled human legal intelligence and our computer intelligence – we must be hybrid; Man and machine, working together in perfect harmony, but with Man in charge. That is the only way to survive the Hunger Games of litigation in the 21st Century. The only way the odds will be ever in your favor.

Recall and Elusion

But enough fun with Hunger Games, Search Quadrant terminology, nothingness, and math, and back to Herb Roitblat’s long comment on my earlier blog, Day Nine of a Predictive Coding Narrative.

Recall and Precision are the two most commonly used measures, but they are not the only ones. The right measure to use is determined by the question that you are trying to answer and by the ease of asking that question.

Recall and Elusion are both designed to answer the question of how complete we were at retrieving all of the responsive documents. Recall explicitly asks “of all of the responsive documents in the collection, what proportion (percentage) did we retrieve?” Elusion explicitly asks “What proportion (percentage) of the rejected documents were truly responsive?” As recall goes up, we find more of the responsive documents, elusion, then, necessarily goes down; there are fewer responsive documents to find in the reject pile. For a given prevalence or richness as the YY count goes up (raising Recall), the YN count has to go down (lowering Elusion). As the conversation around Ralph’s report of his efforts shows, it is often a challenge to measure recall.

This last comment was referring to prior comments made in my same Day Nine Narrative blog by two other information scientists William Webber and Gordon Cormack. I am flattered that they all seem to read my blog, and make so many comments, although I suspect they may be master game-makers of sorts like we saw in Hunger Games.

The earlier comments of Webber and Cormack pertained to point projection of yield and the lower and upper intervals derived from random samples. All things I was discussing in Day Nine. Gordon’s comments focused on the high-end of possible interval error and said you cannot know anything for sure about recall unless you assume the worst case scenario high-end of the confidence interval. This is true mathematically and scientifically, I suppose (to be honest, I do not really know if it is true or not, but I learned long ago not to argue science with a scientist, and they do not seem to be quibbling amongst themselves, yet.) But it certainly is not true legally, where reasonability and acceptable doubt (a kind of level of confidence), such as a preponderance of the evidence, are always the standard, not perfection and certainty. It is not true in manufacturing quality controls either.

But back to Herb’s comment, where he picks up on their math points and elaborates concerning the Elusion test that I used for quality control.

Measuring recall requires you to know or estimate the total number of responsive documents. In the situation that Ralph describes, responsive documents were quite rare, estimated at around 0.13% prevalence. One method that Ralph used was to relate the number of documents his process retrieved with his estimated prevalence. He would take as his estimate of Recall, the proportion of the estimated number of responsive documents in the collection as determined by an initial random sample.

Unfortunately, there is considerable variability around that prevalence estimate. I’ll return to that in a minute. He also used Elusion when he examined the frequency of responsive documents among those rejected by his process. As I argued above, Elusion and Recall are closely related, so knowing one tells us a lot about the other.

One way to use Elusion is as an accept-on-zero quality assurance test. You specify the maximum acceptable level of Elusion, as perhaps some reasonable proportion of prevalence. Then you feed that value into a simple formula to calculate the sample size you need (published in my article the Sedona Conference Journal, 2007). If none of the documents in that sample comes up responsive, then you can say with a specified level of confidence that responsive documents did not occur in the reject set at a higher rate than was specified. As Gordon noted, the absence of a responsive document does not prove the absence of responsive documents in the collection.

The Sedona Conference Journal article Herb referenced here is called Search & Information Retrieval Science. Also, please recall that my narrative states, without using the exact same language, that my accept-on-zero quality assurance test pertained to Highly Relevant documents, not relevant documents. I decided in advance that if my random sample of excluded documents included any that were Highly Relevant documents, then I would consider the test a failure and initiate another round of predictive coding. My standard for merely relevant documents was secondary and more malleable, depending on the probative value and uniqueness of any such false negatives. False negatives are what Herb calls YN, and, as we also now know, they are called D in the Search Quadrant, with totals shown again below.
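For readers who want a feel for the arithmetic behind an accept-on-zero test, here is a minimal sketch. It uses the standard zero-defect acceptance sampling relation, where finding zero responsive documents in a random sample of n rejected documents supports a claim, at the stated confidence, that the true rate is below p whenever (1 - p)^n is no larger than the allowed error. This is my own illustration, not the formula from Herb's Sedona Conference Journal article.

```python
import math

def accept_on_zero_sample_size(max_elusion, confidence=0.95):
    """Smallest sample size n such that finding zero responsive documents in a
    random sample of n rejected documents supports the claim, at the given
    confidence, that the true rate of responsive documents in the reject pile
    is below max_elusion. Derived from (1 - max_elusion) ** n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_elusion))

# Example: to support a claim that no more than 0.1% of the rejected documents
# are responsive, at 95% confidence, you would need to sample about 2,995 of
# them and find none responsive.
print(accept_on_zero_sample_size(0.001))
```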

Back to Herb, who, by the way, looks a bit like President Snow, don’t you think? In his comment he now turns to Recall, which, as we now know, is A/G, a measurement of accuracy that I did not directly make or claim.

If you want to directly calculate the recall rate after your process, then you need to draw a large enough random sample of documents to get a statistically useful sample of responsive documents. Recall is the proportion of responsive documents that have been identified by the process. The 95% confidence range around an estimate is determined by the size of the sample set. For example, you need about 400 responsive documents to know that you have measured recall with a 95% confidence level and a 5% confidence interval. If only 1% of the documents are responsive, then you need to work pretty hard to find the required number of responsive documents. The difficulty of doing consistent review only adds to the problem. You can avoid that problem by using Elusion to indirectly estimate Recall.
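As a rough check on the figures Herb gives, here is a back-of-the-envelope sketch assuming the standard sample size formula for a proportion at worst-case 50% variability; his exact numbers may rest on slightly different assumptions.

```python
import math

z = 1.96          # 95% confidence level
interval = 0.05   # +/- 5% confidence interval
n = (z ** 2) * 0.25 / interval ** 2   # worst-case 50% variability
print(round(n))   # ~384 documents, the source of the "about 400" figure

# But the 400 above is 400 responsive documents. At 1% prevalence a random
# sample only yields about 1 responsive document per 100 sampled, so netting
# roughly 400 responsive documents means reviewing on the order of 40,000.
print(round(400 / 0.01))
```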

The Fuzzy Lens Problem Again

The reference to the difficulty of doing consistent review refers to the well documented inconsistency of classification among human reviewers. That is what I called, in Secrets of Search, Part One, the fuzzy lens problem that makes recall such an ambiguous measure in legal search. It is ambiguous because, when large data sets are involved, the value for G (total relevant) is dependent upon human reviewers. The inconsistency studies show that the gold standard of measurement by human review is actually just dull lead.

Let me explain again in shorthand, and please feel free to refer to the Secrets of Search trilogy and original studies for the full story. Roitblat’s own well-known study of a large-scale document review showed that human reviewers only agreed with each other on average 28% of the time. Roitblat, Kershaw, and Oot, Document categorization in legal electronic discovery: computer classification vs. manual review. Journal of the American Society for Information Science and Technology, 61(1):70–80, 2010. An earlier study by one of the leading information scientists in the world, Ellen M. Voorhees, found a 40% agreement rate between human reviewers. Variations in relevance judgments and the measurement of retrieval effectiveness, 36:5 Information Processing & Management 697, 701 (2000). Voorhees concluded that with 40% agreement rates it was not possible to measure recall any higher than 65%. Information scientist William Webber calculated that with a 28% agreement rate a recall rate cannot be reliably measured above 44%. Herb Roitblat and I have dialogued about this issue before, the last time in Reply to an Information Scientist’s Critique of My “Secrets of Search” Article.

I prepared the graphics below to illustrate this problem of measurement and the futility of recall calculations when the measurements are made by inconsistent reviewers.

Until we can crack the inconsistent reviewer problem, we can only measure recall vaguely, as we see on the left, or at best the center, and can only make educated guesses as to the reality on the right. The existence of the error has been proven, but as Maura Grossman and Gordon Cormack point out, there is a dispute as to the cause of the error. In one analysis that they did of TREC results they concluded that the inconsistencies were caused by human error, not a difference of opinion on what was relevant or not. Inconsistent Responsiveness Determination in Document Review: Difference of Opinion or Human Error? But, regardless of the cause, the error remains.

Back to Herb’s Comment.

One way to assess what Ralph did is to compare the prevalence of responsive documents in the set before doing predictive coding with their prevalence after using predictive coding to remove as many of the responsive documents as possible. Is there a difference? An ideal process will have removed all of the responsive documents, so there will be none left to find in the reject pile.

That question of whether there is a difference leads me to my second point. When we use a sample to estimate a value, the size of the sample dictates the size of the confidence interval. We can say with 95% confidence that the true score lies within the range specified by the confidence interval, but not all values are equally likely. A casual reader might be led to believe that there is complete uncertainty about scores within the range, but values very near to the observed score are much more likely than values near the end of the confidence interval. The most likely value, in fact, is the center of that range, the value we estimated in the first place. The likelihood of scores within the confidence interval corresponds to a bell shaped curve.

This is a critical point. It means that the point projections, a/k/a, the spot projections, can be reliably used. It means  that even though you must always qualify any findings that are based upon random sampling by stating the applicable confidence interval, the possible range of error, you may still reliably use the observed score of the sample in most data sets, if a large enough sample size is used to create low confidence interval ranges. Back to Herb’s Comment.

Moreover, we have two proportions to compare, which affects how we use the confidence interval. We have the proportion of responsive documents before doing predictive coding. The confidence interval around that score depends on the sample size (1507) from which it was estimated. We have the proportion of responsive documents after predictive coding. The confidence interval around that score depends on its sample size (1065). Assuming that these are independent random samples, we can combine the confidence intervals (consult a basic statistics book for a two sample z or t test or http://facstaff.unca.edu/dohse/Online/Stat185e/Unit3/St3_7_TestTwoP_L.htm), and determine whether these two proportions are different from one another (0.133% vs. 0.095%). When we do this test, even with the improved confidence interval, we find that the two scores are not significantly different at the 95% confidence level. (try it for yourself here: http://www.mccallum-layton.co.uk/stats/ZTestTwoTailSampleValues.aspx.). In other words, the predictive coding done here did not significantly reduce the number of responsive documents remaining in the collection. The initial proportion 2/1507 was not significantly higher than 1/1065. The number of responsive documents we are dealing with in our estimates is so small, however, that a failure to find a significant difference is hardly surprising.
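As an aside, the two-proportion comparison described in the quoted paragraph above can be reproduced in a few lines. This is a minimal sketch of the standard pooled two-proportion z-test applied to the same counts (2 of 1,507 before, 1 of 1,065 after); it is my own arithmetic check, not the online calculators Herb links to.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test: is the difference between the two
    sample proportions statistically significant?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Responsive documents found by random sample: 2 of 1,507 before predictive
# coding, 1 of 1,065 after.
z = two_proportion_z(2, 1507, 1, 1065)
print(round(z, 2))   # roughly 0.28, far below the ~1.96 needed for 95% significance
```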

The last quoted paragraph from Herb appears to me to have assumed that my final quality control test was a test for Recall and uses the upper limit, the worst case scenario, as the defining measurement. Again, as I said in the narrative and replies to other comments, I was testing for Elusion, not Recall. Further, the Elusion test (D/F) here was for Highly Relevant documents, not relevant, and none were found, 0%. None were found in the first random sample at the beginning of the project, and none were found in the second random sample at the end. The yields referred to by Herb are for relevant documents, not Highly Relevant. The value of D, the False Negatives, in the Elusion test was thus zero. As we have discussed, when that happens, where the numerator in a fraction is zero, the result of the division is also always zero, which, in an Elusion test, is exactly what you are looking for. You are looking for nothing and happy to find it.

The final sentence in Herb’s last paragraph is key to understanding his comment: The number of responsive documents we are dealing with in our estimates is so small, however, that a failure to find a significant difference is hardly surprising. It points to the inherent difficulty of using random sampling measurements of recall in low yield document sets where the prevalence is low. But there is still some usefulness for random sampling in these situations as the conclusion of his Comment shows.

Still, there is other information that we can glean from this result. The difference in the two proportions is approximately 28%. Predictive coding reduced by 28% the number of responsive documents unidentified in the collection. Recall, therefore, is also estimated to be 28%. Further, we can use the information we have to compute the precision of this process as approximately 22%. We can use the total number of documents in the collection, prevalence estimates, and elusion to estimate the entire 2 x 2 decision matrix.

For eDiscovery to be considered successful we do not have to guarantee that there are no unidentified responsive documents, only that we have done a reasonable job searching for them. The observed proportions do have some confidence interval around them, but they remain as our best estimate of the true percentage of responsive documents both before predictive coding and after. We can use this information and a little basic algebra to estimate Precision and Recall without the huge burden of measuring Recall directly.

These are great points made by Herb Roitblat in the last paragraph regarding reasonability. They show how lawyer-like he has become after working with our kind for so many years, rather than with professor types like my brother in the first half of his career. Herb now well understands the difference between law and science and what this means to legal search.

Law is not a Science, and Neither Is Legal Search

To understand the numbers and the need for reasonable efforts that accept high margins of error, we must understand the futility of increasing sample sizes to try to cure the upper limit of confidence. William Webber in his Comment of August 6, 2012 at 10:28 pm said that “it is, unfortunately, very difficult to place a reassuring upper bound on a very rare event using random sampling.” (emphasis added) Dr. Webber goes on to explain that to attain even a 50% confidence interval would require a final quality control sample of 100,000 documents. Remember, there were only 699,082 documents to begin with, so that is obviously no solution at all. It is about as reassuring as the Hunger Games slogan, may the odds be ever in your favor, when we all know that all but 1 of the 24 gamers must die.

Aside from the practical cost and time issues, the fuzzy lens problem of poor human judgments also makes the quest for reassuring bounds of error a fool’s errand. The perfection is illusory. It cannot be attained, or more correctly put, if you do attain high recall in a large data set, you will never be able to prove it. Do not be fooled by the slogans and the flashy, facile analysis.

Fortunately, the law has long recognized the frailty of all human endeavors. The law necessarily has different standards for acceptable error and risks than does math and science. The less-than-divine standards also apply to manufacturing quality control where small sample sizes have long been employed for acceptable risks. There too, like in a legal search for relevance, the prevalence of defective items sampled for is typically very low.

Math and science demand perfection. But the law does not. We demand reasonability and good faith, not perfection. Some scientists may think that we are settling, but it is more like practical realism, and, is certainly far better than unreasonable and bad faith. Unlike science and math, the law is used to uncertainties. Lawyers and judges are comfortable with that. For example, we are reassured enough  to allow civil convictions when a judge or jury decides that it is more likely than not that the defendant is at fault, a 51% standard of doubt. Law and justice demand reasonable efforts, not perfection.

I know Herb Roitblat agrees with me because this is the fundamental thesis of the fine paper he wrote with two lawyers, Patrick Oot and Anne Kershaw, entitled: Mandating Reasonableness in a Reasonable Inquiry. At pages 557-558 they sum up saying (footnote omitted):

We do not suggest limiting the court system’s ability to discover truth. We simply anticipate that judges will deploy more reasonable and efficient standards to determine whether a litigant met his Rule 26(g) reasonable inquiry obligations. Indeed, both the Victor Stanley and William A. Gross Construction decisions provide a primer for the multi-factor analysis that litigants should invoke to determine the reasonableness of a selected search and review process to meet the reasonable inquiry standard of Rule 26(f): 1. Explain how what was done was sufficient; 2. Show that it was reasonable and why; 3. Set forth the qualifications of the persons selected to design the search; 4. Carefully craft the appropriate keywords with input from the ESI’s custodians as to the words and abbreviations they use; and 5. Use quality control tests on the methodology to assure accuracy in retrieval and the elimination of false positives.

As to the fifth criterion, which we are discussing here, of quality control tests, Roitblat, Oot and Kershaw assert in their article at page 551 that: “A litigant should sample at least 400 results of both responsive and non-responsive data.” This is the approximate sample size when using a 95% confidence level and a 5% confidence interval. (Note in my sampling I used less than a 3% confidence interval with a much larger sample size of 1,065 documents.) To support this assertion that a sample size of 400 documents is reasonable, the authors in footnote 77 refer to an email they have on file from Maura Grossman regarding legal search of data sets in excess of 100,000 documents, which concluded with the statement:

Therefore, it seemed to me that, for the average matter with a large amount of ESI, and one which did not warrant hiring a statistician for a more careful analysis, a sample size of 400 to 600 documents should give you a reasonable view into your data collection, assuming the sample is truly randomly drawn.

Personally, I think a larger sample size than 400-600 documents is needed for quality control tests in large cases. The efficacy of this small calculated sample size using a 5% confidence interval assumes a prevalence of 50%, in other words, that half of the documents sampled are relevant. This is an obvious fiction in all legal search, just as it is in all sampling for defective manufacturing goods. That is why I sampled 1,065 documents using 3%. Still, in smaller cases, it may be very appropriate to just sample 400-600 documents using a 5% interval. It all depends, as I will elaborate further in the conclusion.

But regardless, all of these scholars of legal search make the valid point that only reasonable efforts are required in quality control sampling, not perfection. We have to accept the limited usefulness of random sampling alone as a quality assurance tool because of the margins of error inherent in sampling of the low prevalence data sets common in legal search. Fortunately, random sampling is not our only quality assurance tool. We have many other methods to show reasonable search efforts.

Going Beyond Reliance on Random Sampling Alone to a Multimodal Approach

Random sampling is not a magic cure-all that guarantees quality, or definitively establishes the reasonability of a search, but it helps. In low yield datasets, where there is a low percentage of relevant documents in the total collection, the value of random sampling for Recall is especially suspect. The comments of our scientist friends have shown that. There are inherent limitations to random sampling.

Ever increasing sample sizes are not the solution, even if that was affordable and proportionate. Confidence intervals in sampling of less than two or three percent are generally a waste of time and money. (Remember the sampling statistics rule of thumb of 2=4 that I have explained before wherein a halving of confidence interval error rate, say from 3% to 1.5%, requires a quadrupling of sample size.) Three or four percent confidence interval levels are more appropriate in most legal search projects, perhaps even the 5% interval used in the Mandating Reasonableness article by Roitblat, Oot and Kershaw. Depending on the data set itself, prevalence, other quality control measures, complexity of the case, and the amount at issue, say less than $1,000,000, the five percent based small sample size of approximately 400 documents could well be adequate and reasonable. As usual in the law, it all depends on many circumstances and variables.
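To put numbers on the 2=4 rule of thumb, here is a minimal sketch assuming the conventional sample size formula (95% confidence level, worst-case 50% prevalence) with a finite population correction for a 699,082 document collection. The figures it produces are consistent with the roughly 400 document sample at a 5% interval and the 1,065 document sample at 3% discussed above, and they show the near quadrupling when the interval is halved.

```python
import math

def sample_size(interval, population, z=1.96, prevalence=0.5):
    """Sample size needed to estimate a proportion to +/- interval at the
    given confidence level, with a finite population correction."""
    n0 = (z ** 2) * prevalence * (1 - prevalence) / interval ** 2
    return round(n0 / (1 + (n0 - 1) / population))

N = 699_082
for ci in (0.05, 0.03, 0.015):
    print(f"+/-{ci:.1%}: {sample_size(ci, N):,} documents")
# +/-5.0%: 384 documents
# +/-3.0%: 1,065 documents
# +/-1.5%: 4,243 documents  (halving the interval roughly quadruples the sample)
```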

The issue of inconsistent reviews between reviewers, the fuzzy lens problem, necessarily limits the effectiveness of all large-scale human reviews. The sample sizes required to make a difference are extremely large. No such reviews can be practically done without multiple reviewers and thus low agreement rates. The gold standard for review of large samples like this is made of lead, not gold. Therefore, even if cost was not a factor, large sample sizes would still be a waste of time.

Moreover, in the real world of legal review projects, there is always a strong component of vagary in relevance. Maybe that was not true in the 2009 TREC experiment as Grossman and Cormack’s study suggests, but it has been true in the thousands of messy real-world lawsuits that I have handled in the past 32 years. All trial lawyers I have spoken with on the subject agree.

Relevance can be, and usually is, a fluid and variable target depending on a host of factors, including changing legal theories, changing strategies, changing demands, new data, and court rulings. The only real gold standard in law is a judge ruling on specific documents. Even then, they can change their mind, or make mistakes. A single person, even a judge, can be inconsistent from one document to another. See Grossman & Cormack, Inconsistent Responsiveness Determination at pgs. 17-18 where a 2009 TREC Topic Authority contradicted herself 50% of the time when re-examining the same ten documents.

We must realize that random sampling is just one tool among many. We must also realize that even when random sampling is used, Recall is just one measure of accuracy among many. We must utilize the entire 2 x 2 decision matrix.

We must consider the possible applicability of all of the measurements that the search quadrant makes possible, not just recall (a short computational sketch follows this list).

  • Recall = A/G
  • Precision = A/C
  • Elusion = D/F
  • Fallout = B/H
  • Agreement = (A+E)/(D+B)
  • Prevalence = G/I
  • Miss Rate = D/G
  • False Alarm Rate = B/C
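To keep the arithmetic straight, here is a minimal sketch that computes each of these measures from the four cells of the 2 x 2 decision matrix. The mapping of letters to cells (A true positives, B false positives, D false negatives, E true negatives, with C, F, G, H and I as marginal totals) is inferred from the formulas above and from the earlier discussion of D as the false negatives; treat it as my reading of the search quadrant, not an official key.

```python
def quadrant_measures(A, B, D, E):
    """A = true positives, B = false positives,
    D = false negatives, E = true negatives (my reading of the quadrant)."""
    C = A + B   # all documents retrieved (coded responsive)
    F = D + E   # all documents not retrieved (coded non-responsive)
    G = A + D   # all truly relevant documents
    H = B + E   # all truly irrelevant documents
    I = C + F   # the entire collection
    return {
        "recall": A / G,
        "precision": A / C,
        "elusion": D / F,
        "fallout": B / H,
        "agreement": (A + E) / (D + B),   # as given in the list above
        "prevalence": G / I,
        "miss rate": D / G,
        "false alarm rate": B / C,
    }

# Example with made-up counts: 80 true positives, 20 false positives,
# 20 false negatives, 880 true negatives.
print(quadrant_measures(A=80, B=20, D=20, E=880))
```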

No doubt we will develop other quality control tests, for instance using Prevalence as a guide or target for relevant search as I described in my seven part Search Narrative. Just as we must use multimodal search efforts for effective search of large-scale data sets, so too must we use multiple quality control methods when evaluating the reasonability of search efforts. Random sampling is just one tool among many, and, based on the math, maybe not the best method at that, regardless of whether it is for recall, or elusion, or any other binary search quadrant measure.

Just as keyword search must be supplemented by the computer intelligence of predictive coding, so too must random based quality analysis be supplemented by skilled legal intelligence. That is what I call a Hybrid approach. The best measure of quality is to be found in the process itself, coupled with the people and software involved. A judge called upon to review reasonability of search should look at a variety of factors, such as:

  • What was done and by whom?
  • What were their qualifications?
  • What rules and disciplined procedures were followed?
  • What measures were taken to avoid inconsistent calls?
  • What training was involved?
  • What happened during the review?
  • Which search methods were used?
  • Was it multimodal?
  • Was it hybrid, using both human and artificial intelligence?
  • How long did it take?
  • What did it cost?
  • What software was used?
  • Who developed the software?
  • How long has the software been used?

Conclusion

These are just a few questions that occur to me off the top of my head. There are surely more. Last year in Part Two of Secrets of Search I suggested nine characteristics of what I hope would become an accepted best practice for legal review. I invited peer review and comments on what I may have left out, or any challenges to what I put in, but so far this list of nine remains unchallenged. We need to build on this to create standards so that quality control is not subject to so many uncertainties.

Jason R. Baron, William Webber, myself, and others keep saying this over and over, and yet the Hunger Games of standardless discovery goes on. Without these standards we may all fall prey at any time to a vicious sneak attack by another contestant in the litigation games. A contest that all too often feels like a fight to the death, rather than a cooperative pursuit of truth and justice. It has become so bad now that many lawyers snicker just to read such a phrase.

The point here is, you have to look at the entire process, and not just focus on taking random samples, especially ones that claim to measure recall in low yield collections. By the way, I submit that almost all legal search is of low yield collections, not just employment law related searches, as some have suggested. Those who think the contrary have too broad a concept of relevance, and little or no understanding of actual trials, cumulative evidence, and the modern data koan of big data that “relevant is irrelevant.” Even though random sampling is not The Answer we once thought, it should be part of the process. For instance, a random sample elusion test that finds no Highly Relevant documents should remain an important component of that process.

The no-holds-barred Hunger Games approach to litigation must end now. If we all join together, this will end in victory, not defeat. It will end with alliances and standards. Whatever district you hail from, join us in this noble quest. Turn away from the commercial greed of winning-at-all-costs. Keep your integrity. Keep the faith. Renounce the vicious games; both hide-the-ball and extortion. The world is watching. But we are up for it. We are prepared. We are trained. The odds are ever in our favor. Salute all your colleagues who turn from the games and the leadership of greed and oppression. Salute all who join with us in the rebellion for truth and justice.



Days Seven and Eight of a Predictive Coding Narrative: Where I have another hybrid mind-meld and discover that the computer does not know God

July 29, 2012

This is my fifth in a series of narrative descriptions of a search of 699,082 Enron emails to find evidence on involuntary employee terminations. The preceding narratives are:

In this fifth installment I will continue my description, this time covering days seven and eight of the project. As the title indicates, progress continues and I have another hybrid mind-meld moment. I also discover that the computer does not recognize the significance of references to God in an email. This makes sense logically, but is unexpected and kind of funny when encountered in a document review.

Seventh Day of Review (7 Hours)

On this seventh day I followed Joe White’s advice, as described at the end of the last narrative. It was essentially a three-step process:

One: I ran another learning session for the dozen or so documents I’d marked since the last session, to be sure I was caught up, and then made sure all of the prior Training documents were checked back in. This only took a few minutes.

Two: I ran two more focus document trainings of 100 docs each, total 200. The focus documents are generated automatically by the computer. It only took about an hour to review these 200 documents because most were obviously irrelevant to me, even if the computer was somewhat confused.

I received more of an explanation from Joe White on the focus documents, as Inview calls them. He explains that, at the current time at least (KO is coming out with a new version of the Inview software soon, and they are in a state of constant analysis and improvement), 90% of each focus group consists of grey area type documents, and 10% are pure random under IRT ranking. For documents drawn via workflow (in the demo database they are drawn from the System Trainers group in the Document Training Stage) they are selected as 90% focus and 10% random, with the 90% focus selection drawn evenly across each category set for iC training.

The focus documents come from the areas of least certainty for the algorithm. A similar effect can be achieved by searching for a given iC category for documents between 49 – 51%, etc., as I had done before for relevance. But the automated focus document system makes it a little easier because it knows when you do not have enough documents in the 49 – 51% probability range and then increases the draw to reach your specified number, here 100, to the next least-certain documents. This reduces the manual work in finding the grey area documents for review and training.
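Here is a minimal sketch of the general idea behind such a focus draw: take most of the batch from the probability band where the classifier is least certain, widen the draw to the next least-certain documents if the band is too thin, and top it off with a small pure random selection. This is only an illustration of the concept as Joe described it, not Kroll Ontrack's actual Inview code, and all of the names in it are my own.

```python
import random

def draw_focus_documents(ranked_docs, batch_size=100, focus_share=0.9,
                         low=0.49, high=0.51):
    """ranked_docs is a list of (doc_id, probability_of_relevance) pairs.
    Returns roughly 90% least-certain (grey area) documents plus 10% pure
    random picks, widening the grey area if the 49-51% band is too thin."""
    focus_quota = int(batch_size * focus_share)

    # Grey-area documents inside the 49-51% band first; if that is not enough,
    # fall back to the documents next closest to 50% until the quota is filled.
    by_uncertainty = sorted(ranked_docs, key=lambda d: abs(d[1] - 0.5))
    in_band = [d for d in by_uncertainty if low <= d[1] <= high]
    fallback = [d for d in by_uncertainty if not (low <= d[1] <= high)]
    focus = (in_band + fallback)[:focus_quota]

    # Pure random draw from everything not already selected.
    focus_ids = {doc_id for doc_id, _ in focus}
    rest = [d for d in ranked_docs if d[0] not in focus_ids]
    random_pick = random.sample(rest, min(batch_size - len(focus), len(rest)))

    return [doc_id for doc_id, _ in focus + random_pick]
```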

Three: I looked for more documents to evaluate/train the system. I had noticed that “severance” was a key word in relevant documents, and so went back and ran a search for this term for the first time. There were 3,222 hits, so, as per my standard procedure, I added this document count to the name of the folder that automatically saved the search.

I found many more relevant documents that way. Some were of a new type I had not seen before (having to do with the mass lay-offs when Enron was going under), so I knew I was expanding the scope of relevancy training, as was my intent. I did the judgmental review by using various sort-type judgment searches in that folder, i.e. by ordering the documents by subject line, file type, search term hits (the star symbols), etc., and did not review all 3,222 docs. I did not find that necessary. Instead, I homed in on the relevant docs, but also marked some irrelevant ones here that were close. Below is a screen shot of the first page of the documents sorted by putting those selected for training at the top.

I had also noticed that “lay off,” “lay offs,” and “laid off” were common terms found in relevant docs, and I had not searched for those particular terms before either. There were 973 documents with hits on one of these search terms. I did the same kind of judgmental search of the folder I created with these documents and found more relevant documents to train. Again, I was finding new documents and knew that I was expanding the scope of relevancy. Below is one new relevant document found in this selection; note how the search terms are highlighted for easy location.

I also took the time to mark some irrelevant documents in these new search folders, especially the documents in the last folder, and told them to train too, since they were otherwise close from a similar keywords perspective. So I thought I should go ahead and train them to try to teach the fine distinctions.

The above third step took another five hours (six hours total). I knew I had added hundreds of new docs for training in the past five hours, both relevant and irrelevant.

Fourth Round

I decided it was time to run a training session again and force the software to analyze and rank all of the documents again. This was essentially the Fourth Round (not counting the little training I did at the beginning today to make sure I was served with the right (updated) Focus documents).

After the Training session completed, I asked for a report. It showed that 2,663 total documents (19,731 pages) had now been categorized and marked for Training as of this last session. There were now 1,156 Trainer (me) identified documents, plus the original 1,507 System ID’ed docs. (Previously, in Round 3, there were the same 1,507 System ID’ed docs, and only 534 Trainer ID’ed docs.)

Then I ran a report to see how many docs had been categorized by me as Relevant (whether also marked for Training or not). Note I could have done this before the training session too, and it would not make any difference in results. All the training session does is change the predictions on coding, not the actual prior human coding. This relevancy search was saved in another search folder called “All Docs Marked Relevant after 4th Round – 355 Docs.” After the third round I had only ID’ed 137 relevant documents. So progress in recall was being made.

Prevalence Quality Control Check

As explained in detail in Day Two of a Predictive Coding Narrative: More Than A Random Stroll Down Memory Lane, my first random sample search allowed me to determine prevalence and get an idea of the total number of relevant documents likely contained in the database. The number was 928 documents. That was the spot or point projection of the total yield in the corpus. (Yield is another information science and statistics term that is useful to know. It means in this context the expected number of relevant documents in the total database. See, e.g., Webber, W., Approximate Recall Confidence Intervals, ACM Transactions on Information Systems, Vol. V, No. N, Article A (2012 draft) at A2.)

My yield calculation here of 928 is based on my earlier finding of 2 relevant documents in the initial random sample of 1,507 documents (2/1507 = 0.00132714, or about 0.13%; 0.13% of 699,082 ≈ 928 relevant documents). So based on this I knew that I was correct to have gone ahead with the fourth round, and would next check to see how many documents the IRT now predicted would be relevant. My hope was the number would now be closer to the 928 goal of the projected yield of the 699,082 document corpus.
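For anyone who wants to reproduce the arithmetic, here is a minimal sketch that computes the point projection of yield from the initial sample, together with an exact (Clopper-Pearson) 95% binomial interval around it. The interval method is my choice for illustration; however the narrative's own range was calculated, the exact binomial interval lands in the same neighborhood as the 112 to 3,345 document range reported a little later.

```python
from scipy.stats import beta

def yield_estimate(hits, sample_size, collection_size, confidence=0.95):
    """Point projection of total relevant documents (yield) in the collection,
    plus an exact Clopper-Pearson binomial interval on the prevalence."""
    alpha = 1 - confidence
    point = hits / sample_size
    lower = beta.ppf(alpha / 2, hits, sample_size - hits + 1) if hits > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, hits + 1, sample_size - hits)
    return (round(point * collection_size),
            round(lower * collection_size),
            round(upper * collection_size))

# 2 relevant documents in a random sample of 1,507, collection of 699,082:
# a point projection of about 928 relevant documents, inside a wide 95%
# interval of very roughly 110 to 3,350.
print(yield_estimate(2, 1507, 699_082))
```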

This last part had taken another hour, so I’ll end Day Seven with a total of 7 hours of search and review work.

Eighth Day of Review (9 Hours)

First I ran a probability search as before for all 51%+ probable relevant docs and saved them in a folder by that name. After the fourth round the IRT now predicted a total of 423 relevant documents. Remember I had already actually reviewed and categorized 355 docs as relevant, so this was only a potential max net gain of 68 docs. As it turned out, I disagreed with 8 of the predictions, so the actual net gain was only 60 docs, for a total of 415 confirmed relevant documents.

I had hoped for more after broadening the scope of documents marked relevant in the last seeding. So I was a little disappointed that my last seed set had not led to more predicted relevant documents. Since the “recall goal” for this project was 928 documents, I knew I still had some work to do to expand the scope. Either that, or the confidence interval was at work, and there were actually fewer relevant documents in this collection than the random sample predicted as a point projection. The probability statistics showed that the actual range was between 112 and 3,345 documents, due to the 95% confidence level and +/-3% confidence interval.

51%+ Probable Relevant Documents

Next I looked at the 51%+ probable relevant docs folder and sorted by whether the documents had been categorized or not. You do that by clicking on the symbol for categorization, a check mark, which is by default located in the upper left. That puts all of the categorized docs together, either on top or bottom. Then I reviewed the 68 new documents, the ones the computer predicted to be relevant that I had not previously marked relevant.

This is always the part of the review that is the most informative for me as to whether the computer is actually “getting it” or not. You look to see which documents it gets wrong, in other words, where it makes a wrong prediction of probable relevance, and try to determine why. In this way you can be alert for additional documents to use to correct the errors in future seeds. You learn from the computer’s mistakes where additional training is required.

I then had some moderately good news in my review. I only disagreed with eight of the 68 new predictions. One of these documents had only a 52.6% probability of relevance, another 53.6%, another 54.5%, another 54%, another 57.9%, and another only 61%. The other two were 79.2% and 76.7%, having to do with “voluntary” severance again, a mistake I had seen before. So even when the computer and I disagreed, it was not by much.

Computer Finds New Hard-to-Detect Relevant Documents

A couple of the documents that Inview predicted to be relevant were long, many pages, so my study and analysis of them took a while. Even though these long documents at first seemed irrelevant to me, as I kept reading and analyzing them, I ultimately agreed with the computer on all of them. A careful reading of the documents showed that they did in fact include discussion related to termination and terminated employees. I was surprised to see that, but pleased, as it showed the software mojo was kicking in. The predictive coding training was allowing the computer to find documents I would likely never have caught on my own. The mind-meld was working and hybrid power was again manifest.

These hard to detect issues (for me) mainly arose from the unusual situation of the mass terminations that came at the end of Enron, especially at the time of its bankruptcy. To be honest, I had forgotten about those events. My recollection of Enron history was pretty rusty when I started this project. I had not been searching for bankruptcy related terminations before. That was entirely the computer’s contribution and it was a good one.

From this study of the 68 new docs I realized that although there were still some issues with the software making an accurate distinction between voluntary and involuntary severance, overall I felt pretty confident that Inview was now well-trained. I based that on the 60 other predictions that were spot on.

Note that I marked most of the newly confirmed relevant documents for training, but not all. I did not want to excessively weight the training with some that were redundant, or odd for one reason or another, and thus not particularly instructive.

This work was fairly time-consuming. It took three long hours on a Sunday to complete.

Fifth Round

Returning to work in the evening I started another training session, the Fifth. This would allow the new teaching (document training instructions) to take effect.

My plan was to then have the computer serve me up the 100 close calls (Focus Documents) by using the document training Checkout feature. Remember this feature selects and serves up for review the grey area docs designed to improve the IRT training, plus random samples.

But before I reviewed the next training set, I did a quick search to see how many new relevant documents (51%+) the last training (fifth round) had predicted. I found a total of 545 documents predicted 51%+ relevant. Remember I left the last session with 415 relevant docs (the goal is 928). So progress was still being made. The computer had added 130 documents.

Review of Focus Documents

Before I looked at these new ones to see how many I agreed with, I stuck to my plan and took a Checkout feed of 100 Focus documents. My guess was that most of the newly predicted 51%+ relevant docs would be in the grey area anyway, so I would be reviewing some of them when I reviewed the Focus documents.

First, I noticed right away that it served up 35 junk files that were obviously irrelevant and had previously been marked as such, such as PST placeholder files and a few others like that, which clutter this ENRON dataset. Obviously, they were part of the random selection portion of the Focus document selections. I told them all to train in one bulk command, hit the completed-review button for them, and then focused on the remaining 65 documents. None had been reviewed before. Next I found some more obviously irrelevant docs, which were not close at all, i.e., 91% irrelevant and only 1% likely relevant. I suspect this is part of the general database random selection that makes up 10% of the Focus documents (the other 90% are close calls).
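
To make that selection logic concrete, here is a hypothetical sketch of how a Focus batch like this could be assembled: mostly grey-area documents the classifier is least certain about, topped up with a small random slice drawn from the whole collection. The 90/10 split comes from this narrative; the function and field names are my own illustration, not Inview’s actual API.

```python
# Hypothetical sketch of assembling a Focus (Checkout) batch: uncertainty
# sampling plus a random slice. Field names and the build_focus_batch helper
# are illustrative assumptions, not any vendor's real interface.
import random

def build_focus_batch(docs, batch_size=100, random_share=0.10):
    """docs: list of dicts like {"id": ..., "p_relevant": 0.0-1.0, "reviewed": bool}."""
    # Uncertainty sampling: unreviewed docs whose predictions sit closest to 50/50.
    unreviewed = [d for d in docs if not d["reviewed"]]
    grey = sorted(unreviewed, key=lambda d: abs(d["p_relevant"] - 0.5))

    n_random = int(batch_size * random_share)
    batch = grey[: batch_size - n_random]

    # Random slice drawn from the whole collection, which is why previously
    # reviewed junk files can show up in a Focus batch, as described above.
    chosen = {d["id"] for d in batch}
    pool = [d for d in docs if d["id"] not in chosen]
    batch += random.sample(pool, min(n_random, len(pool)))
    return batch
```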

Next I did a file type sort to see if any more of the unreviewed documents in this batch of 100 were obviously irrelevant based on file type. I found 8 more such files, mass categorized them, mass trained them and quickly completed review for these 8.

Now there were 57 docs left, 9 of which were Word docs, and the rest emails. So I checked the 9 Word docs next. Six of these were essentially the same document, called “11 15 01 CALL.doc.” The computer gave each approximately a 32.3% probability of irrelevance and a 33.7% probability of relevance. Very close indeed. Some of the other docs had very slight prediction numbers (less than 1%). The documents proved to be very close calls. Most of them I found to be irrelevant, but in one document I found a comment about mass employee layoffs, so I decided to call it relevant to our issue of employee terminations. I trained those eight and checked them back in. I then reviewed the remaining Word docs, found that they were also very close, but marked these as irrelevant and checked them in, leaving 48 docs to review in the Training set of 100.

Next I noticed a junk kind of mass email from a sender called “Black.” I sorted by “From,” found six by Black, and a quick look showed they were all irrelevant, as the computer had predicted for each. I am not sure why they were picked as Focus docs, but regardless, I trained them and checked them back in, now leaving 42 docs to review.

Next I sorted the remaining docs by “Subject” to look for some more that I might be able to quickly bulk code (mass categorize). It did not help much, as there were only a couple of strings with the same subject. But I kept that subject order and slogged through the remaining 42 docs.

I found most of the remaining docs were very close calls, all in the 30% range for both relevant and irrelevant. So they were all uncertain, i.e., a split choice, but none were actually predicted relevant; that is, none were in the over-50% likely relevant range. I found that most of them were indeed irrelevant, but not all. A few in this uncertain range were relevant. They were barely relevant, but of the new type recently marked, having to do with the bankruptcy. Others that I found relevant were of a type I had seen before, yet the computer was still unsure, with basically an even split of prediction in the 30% range. They were apparently different from the obviously relevant documents, but in a subtle way. I was not sure why. See eg. control number 12509498.

It was 32.8% relevant and 30.9% irrelevant, even though I had marked an identical version of this email before as relevant in the last training. The computer was apparently suspicious of my prior call and was making sure. I know I’m anthropomorphizing a machine, but I don’t know how else to describe it.

Computer’s Focus Was Too Myopic To See God

One of the focus documents that the computer found a close call in the 30% range was email with control number 10910388. It was obviously just an inspirational message being forwarded around about God. You know the type I’m sure.

It was kind of funny to see that this email confused the computer, whereas any human could immediately recognize that this was a message about God, not employee terminations. It was obvious that the computer did not know God.

Suddenly My Prayers Are Answered

Right after the funny God mistake email, I reviewed another email with control number 6004505. It was about wanting to fire a particular employee. Although the computer was uncertain about the relevancy of this document, I knew right away that it rocked. It was just the kind of evidence I had been looking for. I marked it as Highly Relevant, the first hot document found in several sessions. Here is the email.

I took this discovery of a hot doc as a good sign. I was finding both the original documents I had been looking for and the new outliers. It looked to me like I had succeeded in training and in broadening the scope of relevancy to its proper breadth. I might not be travelling a divine road to redemption, but it was clearly leading to better recall.

Since most of these last 42 documents were close questions (some were part of the 10% random selection and were obvious), the review took longer than usual. The above tasks took over 1.5 hours in all (not including machine search time or the time to write this memo).

Good Job Robot!

My next task was to review the 51%+ predicted relevant set of 545 docs. One document was particularly interesting, control number 12004849, which was predicted to be 54.7% likely relevant. I had previously marked it Irrelevant based on my close-call decision that it only pertained to voluntary terminations, not involuntary terminations. It was an ERISA document, a Summary Plan Description of the Enron Metals Voluntary Separation Program.

Since the document on its face obviously pertained to voluntary separations, it was not relevant. That was my original thinking, and why I at first called it Irrelevant. But my views on document characterizations on that fuzzy line between voluntary and involuntary employee terminations had changed somewhat over the course of the review project. I now had a better understanding of the underlying facts. The document necessarily defined both eligibility and ineligibility for this benefit, money paid when an employee left. It specifically stated that employees of certain Enron entities were ineligible for the benefit. It stated that acceptance of an application was strictly within the company’s discretion. What happened if even an eligible employee decided not to voluntarily quit and take this money? Would they not then be terminated involuntarily? What happened if they applied for this severance and the company said no? For all these reasons, and more, I decided that this document was in fact relevant to both voluntary and involuntary terminations. The relevance to involuntary terminations was indirect, and perhaps a bit of a stretch, but in my mind it was within the scope of a relevant document.

Bottom line, I had changed my mind, and I now agreed with the computer and considered it Relevant. So I changed the coding to relevant and trained on it. Good call, Inview. It had noticed an inconsistency with some of my other document codings and suggested a correction. I agreed. That was impressive. Good robot!

Looking at the New 51%+

Another one of the new documents in the 51%+ predicted relevant group was a document with 42 versions of itself. It was the Ken Lay email where he announced that he was not accepting his sixty-million-dollar golden parachute. (Can you imagine how many lawsuits would have ensued if he had taken that money?) Here is one of the many copies of this email.

I had previously marked a version of this email as relevant in past rounds. Obviously the corpus (the 699,082 Enron emails) had more copies of that particular email that I had not found before. It was widely circulated. I confirmed the predictions of Relevance.  (Remember that this database was deduplicated only on the individual custodian basis, vertical deduplication. It was not globally deduplicated against all custodians, horizontal deduplication. I recommend full horizontal deduplication as a default protocol.)
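
For readers unfamiliar with the distinction, here is a minimal sketch of what horizontal (cross-custodian) deduplication involves: hash each document the same way regardless of custodian and keep one master copy, recording the other custodians as duplicate holders. The field names and the simple whole-text hash are illustrative assumptions, not a description of any particular vendor’s implementation; real email deduplication typically hashes a normalized set of metadata fields plus the body and attachments.

```python
# Sketch of "horizontal" (global, cross-custodian) deduplication. Illustrative
# only; assumes each doc is a dict with "id", "custodian" and "text" fields.
import hashlib

def dedupe_horizontally(docs):
    seen = {}  # content hash -> master document
    for doc in docs:
        digest = hashlib.sha256(doc["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            # Same content already kept for another custodian; just note it.
            seen[digest].setdefault("duplicate_custodians", []).append(doc["custodian"])
        else:
            seen[digest] = doc
    return list(seen.values())
```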

I disagreed with many of the other predicted relevant docs, but did not consider any of them important. The documents now presenting as possibly relevant were, in my view, cumulative, not really new, and not really important. All were fetched by the outer limits of relevance: triggered by my previously allowing in, as barely relevant, the final-day comments on Ken Lay’s not taking a sixty-million-dollar payment, and by also allowing in as relevant general talk during the bankruptcy that might mention layoffs.

Also, I was now allowing in as relevant new documents and emails that concerned the ERISA plan revisions related to general severance. The SPD of the Enron Metals Voluntary Separation Program was an example of that. These were all fairly far afield of my original concept of relevance, which had grown as I saw all of the final-days emails regarding layoffs and better understood the bankruptcy and ERISA setup.

Bottom line, I did not see much training value in these newly added docs, both predicted and confirmed. The new documents were not really new. They were very close to documents already found in the prior rounds. I was thinking it might be time to bring this search to an end.

Latest Relevancy Metrics

I ran one final search to determine my total of relevant-coded documents. The count was 659. That was a good increase over the last measured count of 545 relevant, but still short of my initial goal of 928, the point projection of yield. That is a 71% recall (659/928) of my target, which is pretty good, especially if the remaining relevant documents were just cumulative or otherwise not important. Considering the 3% confidence interval, and the range it put around the 928 yield point projection, from 112 to 3,345 documents, it could in fact already be 100% recall, although I doubted that based on the process to date. See references to point projection, intervals, and William Webber’s work on confidence intervals in Day Two of a Predictive Coding Narrative: More Than A Random Stroll Down Memory Lane and in Webber, W., Approximate Recall Confidence Intervals, ACM Transactions on Information Systems, Vol. V, No. N, Article A (2012 draft).
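
Here is the same arithmetic spelled out, using the point projection and the interval bounds as alternate denominators. The numbers are from this narrative; the “optimistic” figure is capped at 100% because confirmed relevant documents cannot exceed the true yield.

```python
# Sketch of the recall arithmetic described above. Figures from the narrative.
confirmed_relevant = 659
point_projection = 928
interval_low, interval_high = 112, 3_345

recall_point = confirmed_relevant / point_projection              # ~71%
recall_optimistic = min(confirmed_relevant / interval_low, 1.0)   # capped at 100%
recall_pessimistic = confirmed_relevant / interval_high           # ~20%

print(f"Recall vs point projection: {recall_point:.0%}")
print(f"Recall range given the interval: {recall_pessimistic:.0%} to {recall_optimistic:.0%}")
```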

Enough Is Enough

I was pretty sure that further rounds of search would lead to the discovery of more relevant documents, but thought it very unlikely that any more significant relevant documents would be found. Although I had found one hot doc in this round, the quality of the rest of the documents found convinced me that was unlikely to occur again. I had the same reaction to the grey area documents. The quality had changed. Based on what I had been seeing in the last two rounds, the relevant documents left were, in my opinion, likely cumulative and of no real probative value to the case.

In other words, I did not see value in continuing the search and review process further, except for a final null-set quality control check. I decided to bring the search to an end. Enough is enough already. Reasonable efforts are required, not perfection. Besides, I knew there was a final quality control test to be passed, and that it would likely reveal any serious mistakes on my part.

Moving On to the Perhaps-Final Quality Control Check

After declaring the search to be over, the next step in the project was to take a random sample of the documents not reviewed or categorized, to see if any significant false negatives turned up. If none did, then I would consider the project a success and conclude that more rounds of search were not required. If some did turn up, then I would have to keep the project going for at least another round, maybe more, depending on exactly what false negatives were found. That would have to wait for the next day.

But before ending this long day I ran a quick search to see the size of this null set. There were 698,423 docs not categorized as relevant, and I saved them in a Null Set Folder for easy reference. Now I could exit the program.
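
As a preview of that final check, here is a rough sketch of how such an elusion sample could be drawn from the null set: a simple random sample, sized by the standard formula for estimating a proportion at a chosen confidence level and interval. The parameters shown are only the conventional conservative defaults, not necessarily the ones used in the next session.

```python
# Rough sketch of a null-set (elusion) quality control sample. The sample-size
# formula is the standard one for a proportion; exact parameters are assumed.
import math
import random

def sample_size(z=1.96, interval=0.03, p=0.5):
    # Conservative p = 0.5 maximizes the required sample size.
    return math.ceil((z ** 2) * p * (1 - p) / interval ** 2)

null_set_ids = list(range(698_423))   # stand-in for the 698,423 uncoded docs
n = sample_size()                     # about 1,068 at 95% +/- 3% with p = 0.5
elusion_sample = random.sample(null_set_ids, n)
# Each sampled document would then be reviewed for false negatives.
```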

Total time for this night’s work was 4.5 hours, not including report preparation time and wait time on the computer for the training.

To be continued . . . .           


Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page with the collection of Draft Principles underway by other AI Ethics groups around the world. We added a new one that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). They listed six proposed principles from the talented Nicolas Economou, the CEO of electronic discovery search company, H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles in conferences that it will sponsor in the future. This is explained in the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See eg. Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See eg. AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, the CEO of electronic discovery search company, H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific, legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See eg. TARcourse.com. The principles have obvious applications in all aspects of society, not just the Law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and his mention in the article of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See eg. the work of physicist David Bohm and philosopher Martin Buber. That is one reason that we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____
