Beware of the TAR Pits! – Part Two

This is the conclusion of a two-part blog post. For this to make sense, please read Part One first.

Quality of Subject Matter Experts

The quality of Subject Matter Experts in a TAR project is another key factor in predictive coding. It is one that many would prefer to sweep under the rug. Vendors especially do not like to talk about this (and they sponsor most panel discussions) because it is beyond their control. SMEs come from law firms. Law firms hire vendors. What dog will bite the hand that feeds him? Yet, we all know full well that not all subject matter experts are alike. Some are better than others. Some are far more experienced and knowledgeable than others. Some know exactly what documents they need at trial to win a case. They know what they are looking for. Some do not. Some have done trials, lots of them. Some do not know where the courthouse is. Some have done many large search projects, first paper, now digital. Some are great lawyers; and some, well, you’d be better off with my dog.

The SMEs are the navigators. They tell the drivers where to go. They make the final decisions on what is relevant and what is not. They determine what is hot, and what is not. They determine what is marginally relevant, what is grey area, and what is not. They determine what is just more of the same and unimportant. They know full well that some relevant is irrelevant. They have heard and understand the frequent mantra at trials: Objection, Cumulative. Rule 403 of the Federal Rules of Evidence. Also see The Fourth Secret of Search: Relevant Is Irrelevant found in Secrets of Search – Part III.

Quality of SMEs is important because the quality of input in active machine learning is important. A fundamental law of predictive coding as we now know it is GIGO: garbage in, garbage out. Your active machine learning depends on correct instruction. Good software can mitigate the damage from erroneous training somewhat, but it can never eliminate it. See: Webber & Pickens, Assessor Disagreement and Text Classifier Accuracy, SIGIR 2013 (24% more ranking depth needed to reach equivalent recall when not using SMEs, even in a small data search of news articles with rather simple issues).
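
To make that ranking-depth point concrete, here is a toy sketch of my own, and I stress it is only an illustration, not the method Webber and Pickens used: train a simple text classifier twice, once on clean labels and once with a share of the labels flipped to simulate coding errors, then see how much deeper into the ranking you must review to hit the same recall. The stand-in corpus (scikit-learn’s 20 newsgroups download), the noise rates, and the 75% recall target are all my assumptions.

```python
# Toy sketch only: how label noise deepens the review depth needed for a
# fixed recall target. Corpus, noise rates, and target are all assumptions.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

data = fetch_20newsgroups(subset="all", categories=["sci.med", "rec.autos"])
X = TfidfVectorizer(max_features=20000).fit_transform(data.data)
y = (data.target == 0).astype(int)        # treat one topic as "relevant"

rng = np.random.default_rng(42)
train = rng.random(y.size) < 0.3          # 30% training sample
test = ~train

def depth_for_recall(noise_rate, target=0.75):
    """Rank the test set best-first; return depth needed to hit target recall."""
    y_train = y[train].copy()
    flips = rng.random(y_train.size) < noise_rate   # simulate coding errors
    y_train[flips] = 1 - y_train[flips]
    model = LogisticRegression(max_iter=1000).fit(X[train], y_train)
    order = np.argsort(-model.predict_proba(X[test])[:, 1])
    hits = np.cumsum(y[test][order])
    return int(np.argmax(hits >= target * y[test].sum())) + 1

for noise in (0.0, 0.1, 0.3):
    print(f"label noise {noise:.0%}: review depth {depth_for_recall(noise)}")
```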

Information scientists like Jeremy Pickens are, however, working hard on ways to minimize the impact of erroneous SME document classifications on overall corpus rankings. Good thing too, because even one good SME will not be consistent in classifying the same documents. That inconsistency is what the Jaccard Index scientists like to measure. See: Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” – Part Two, and search for Jaccard on my blog.

In my Enron experiments I was inconsistent in determining the relevance of the same document 23% of the time. That’s right, I contradicted myself on relevancy 23% of the time. (If you include irrelevancy coding, the inconsistency rate was only 2%.) Lest you think I’m a complete idiot (which, by the way, I sometimes am), the 23% rate is actually the best on record for an experiment. It is the best ever measured, by far. Other experimentally measured rates show inconsistencies of from 50% to 90% (with multiple reviewers). Pathetic, huh? Now you know why AI is so promising and why it is so important to enhance our human intelligence with artificial intelligence. When it comes to consistency of document identifications in large scale data reviews, we are all idiots!
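
For readers who have never computed it, the Jaccard Index is just the overlap between two sets of relevance calls divided by their union. A minimal sketch; the document IDs below are made up for illustration:

```python
# Jaccard index of two reviewers' (or two passes') relevant-document sets.
def jaccard(relevant_a: set, relevant_b: set) -> float:
    """1.0 means perfectly consistent coding; lower means disagreement."""
    if not relevant_a and not relevant_b:
        return 1.0
    return len(relevant_a & relevant_b) / len(relevant_a | relevant_b)

first_pass = {"DOC-001", "DOC-002", "DOC-005", "DOC-009"}
second_pass = {"DOC-001", "DOC-002", "DOC-007"}    # same corpus, later review
print(f"Jaccard index: {jaccard(first_pass, second_pass):.2f}")   # 0.40
```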

With these human frailty facts in mind, not only variable quality in subject matter expertise, but also human inconsistencies, it is obvious why scientists like Pickens and Webber are looking for techniques to minimize the impact of errors and, get this, even use these inevitable errors to improve search. Jeremy Pickens and I have been corresponding about this issue at length lately. Here is Jeremy’s later response to this blog: In TAR, Wrong Decisions Can Lead to the Right Documents (A Response to Ralph Losey). Jeremy does at least concede that coding quality is indeed important. He goes on to argue that his study shows that wrong decisions, typically on grey area documents, can indeed be useful.

I do not doubt Dr. Pickens’ findings, but am skeptical of the search methods and the conclusions derived therefrom. In other words, how the training was accomplished, the supervision of the learning. This is what I call here the driver’s role, shown on the triangle as the Power User and Experienced Searcher. In my experience as a driver/SME, much depends on where you are in the training cycle. As the training continues, the algorithms eventually do become able to detect and respond to subtle document distinctions. Yes, it takes a while, and you have to know what and when to train on, which is the driver’s skill (for instance, you never train with giant documents), but it does eventually happen. Thus, while it may not matter if you code grey area documents wrong at first, it eventually will matter, unless you do not really care about the distinctions. (The TREC overturn documents Jeremy tested on, the ones he called wrong documents, were in fact grey area documents, that is, close questions. Attorneys disagreed on whether they were relevant, which is why they were overturned on appeal.) The lack of precision in training, which is inevitable anyway even when one SME is used, may not matter much in early stages of training, and may not matter at all when testing simplistic issues using easy databases, such as news articles. In fact, I have used semi-supervised training myself, as Jeremy describes from old experiments in Pseudo Relevance Feedback. I have seen it work myself, especially in early training.
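
For readers unfamiliar with the term, pseudo relevance feedback simply assumes the top-ranked results are relevant and folds them back into the query. Here is a minimal Rocchio-style sketch of my own; the corpus, query, and weights are illustrative assumptions, not anyone’s production system:

```python
# Minimal sketch of Rocchio-style pseudo relevance feedback: rank once,
# assume the top-k hits are relevant, fold their centroid into the query.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["price fixing in energy markets", "fantasy football scores",
        "energy trading compliance memo", "holiday party schedule"]
vec = TfidfVectorizer()
D = vec.fit_transform(docs)

def prf_rank(query: str, k: int = 2, alpha: float = 1.0, beta: float = 0.75):
    """Rank docs, then re-rank after folding in the top-k pseudo-relevant centroid."""
    q = vec.transform([query]).toarray()
    first_pass = cosine_similarity(q, D).ravel()
    top_k = np.argsort(-first_pass)[:k]              # assume these are relevant
    centroid = np.asarray(D[top_k].mean(axis=0))     # centroid of pseudo-relevant docs
    q_expanded = alpha * q + beta * centroid         # Rocchio update, positives only
    return np.argsort(-cosine_similarity(q_expanded, D).ravel())

print(prf_rank("energy trading"))                    # doc indices, best first
```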

Still, the fact that some errors do not matter in early training does not mean you should not care about consistency and accuracy of training during the whole ride. In my experience, as training progresses and the machine gets smarter, it does matter. But let’s test that, shall we? All I can do is report on what I see; in other words, anecdotal evidence.

Outside of TREC and science experiments, in the messy real world of legal search, the issues are typically maddeningly difficult. Moreover, the cost of reviewing hundreds of thousands of irrelevant documents can mean millions of dollars. The fine points of differentiation in matured training are needed for precision in results to reduce the costs of final review. In other words, both precision and recall matter in legal search, and all are governed by the overarching legal principle of proportionality. That is not part of information science, of course, but we lawyers must govern our search efforts by proportionality.
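
The back-of-the-envelope arithmetic behind that cost point is worth spelling out. Every number below is an assumption for illustration, not a figure from any study: modest precision on a large promoted set wastes serious money fast.

```python
# Back-of-the-envelope arithmetic; all rates are illustrative assumptions.
docs_for_review = 500_000      # documents promoted to final manual review
precision = 0.40               # share of those that are actually relevant
cost_per_doc = 2.00            # assumed per-document contract review cost

irrelevant = docs_for_review * (1 - precision)            # 300,000 docs
wasted = irrelevant * cost_per_doc                        # $600,000
print(f"{irrelevant:,.0f} irrelevant docs reviewed, ${wasted:,.0f} wasted")
```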

Also see William Webber’s response: Can you train a useful model with incorrect labels? I believe that William’s closing statement may be correct; either that, or software differences explain the results:

It may also be, though this is speculation on my part, that a trainer who is not only a subject-matter expert, but an expert in training itself (an expert CAR driver, to adopt Ralph Losey’s terminology) may be better at selecting training examples; for instance, in recognizing when a document, though responsive (or non-responsive), is not a good training example.

I hope Pickens and Webber get there some day. In truth, I am a big supporter of their efforts and experiments. We need more scientific research. But for now, I still do not believe we can turn lead into gold. It is even worse if you have a bunch of SMEs arguing with each other about where they should be going, about what is relevant and what is not. That is a separate issue they do not address, which points to the downside of all trainers, amateurs and SMEs alike. See: Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” – Parts One, Two, and Three.

For additional support on the importance of SMEs, see again Monica’s article, EDI-Oracle Study, where she summarizes the conclusion of Patrick Oot from the study that:

Technology providers using similar underlying technology, but different human resources, performed in both the top-tier and bottom-tier of all categories. Conclusion: Software is only as good as its operators. Human contribution is the most significant element. (emphasis in original)

Also see the recent Xerox blog, Who Prevails in the E-Discovery War of Man vs. Machine? by Gabriela Baron.

Teams that participated in Oracle without a bona fide SME, much less a good driver, well, they were doomed. The software was secondary. How could you possibly replicate the work of the original SME trial lawyers who did the first search without having an SME yourself, one with at least a similar experience and knowledge level?

This means that even with a good driver and good software, if you do not also have a good SME, you can still end up driving in circles. It is even worse when you try to do a project with no SME at all. Remember, the SME in the automobile analogy is the navigation system, or to use the pre-digital reality, the passenger with the map. We have all seen what happens when the navigation system screws up, or the map is wrong, or, more typically, out of date (like many old SMEs). You do not get to the right place. You can have a great driver, and go quite fast, but if you have a poor navigator, you will not like the results.

The Oracle study showed this, but it is hardly new or surprising. In fact, it would be shocking if the contrary were true. How can incorrect information ever create correct information? The best you can hope for is to have enough correct information to smooth out the errors. Put another way, without signal, noise is just noise. Still, Jeremy Pickens claims there is a way. I will be watching and hope he succeeds where the alchemists of old always failed.

Tabula Rasa

There is one way out of the SME frailty conundrum that I have high hopes for and can already understand. It has to do with teaching the machine about relevance for all projects, not just one. The way predictive coding works now, the machine is a tabula rasa, a blank slate. The machine knows nothing to begin with. It only knows what you teach it as the search begins. No matter how good the AI software is at learning, it still does not know anything on its own. It is just good at learning.

That approach is obviously not too bright. Yet, it is all we can manage now in legal search at the beginning of the Second Machine Age. Someday soon it will change. The machine will not have its memory wiped after every project. It will remember. The training from one search project will carry over to the next, similar project. The machine will remember the training of past SMEs.

That is the essential core of my PreSuit proposal: to retain the key components of past SME training so that you do not have to start afresh on each search project. PreSuit: How Corporate Counsel Could Use “Smart Data” to Predict and Prevent Litigation. When that happens (I don’t say if, because this will start happening soon; some say it already has), the machine could start smart.

That is what we all want. That is the holy grail of AI-enhanced search — a smart machine. (For the ultimate implications of this, see the movie Her, which is about an AI-enhanced future that is still quite a few years down the road.) But do not kid yourself, that is not what we have now. Now we only have baby robots, ones that are eager and ready to learn, but do not know anything. It is kind of like 1-Ls in law school, except that when they finish a class they do not retain a thing!

When my PreSuit idea is implemented, the next SME will not have to start afresh. The machine will not be a tabula rasa. It will be able to see litigation brewing. It will help general counsel stop lawsuits before they are filed. The SMEs will then build on the work of prior SMEs, or maybe build on their own previous work in another similar project. Then the GIGO principle will be much easier to mitigate. Then the computer will not be completely dumb; it will have some intelligence from the last guy. There will be some smart data, not just big dumb data. The software will know stuff, know the law and relevance, not just know how to learn stuff.
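
In software terms the core of the idea is mundane: stop throwing the trained model away. A minimal sketch of my own, under stated assumptions (toy data, a made-up file name, and scikit-learn as a stand-in for a real review platform):

```python
# Sketch of retaining training across projects: persist the vectorizer and
# model from project one, load them to pre-rank project two, keep training.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier

project_one_docs = ["merger price terms", "lunch plans",
                    "merger negotiation email", "gym schedule"]
project_one_labels = [1, 0, 1, 0]                 # SME coding, project one

# End of project one: save what the machine has learned.
vec = TfidfVectorizer().fit(project_one_docs)
model = SGDClassifier(loss="log_loss").fit(
    vec.transform(project_one_docs), project_one_labels)
joblib.dump((vec, model), "presuit_model.joblib")   # illustrative file name

# Start of a similar project two: no blank slate, pre-rank on day one.
vec, model = joblib.load("presuit_model.joblib")
project_two_docs = ["draft merger agreement", "parking reminder"]
print(model.predict_proba(vec.transform(project_two_docs))[:, 1])

# The new SME refines, rather than restarts, the training.
model.partial_fit(vec.transform(project_two_docs), [1, 0])
```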

When that happens, the SME in a particular project will not be as important. But for now, when working from scratch with dumb data, the SME is still critical. The smarter and more consistent the better. Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” – Parts One, Two, and Three.

Professor Marchionini, like all other search experts, recognizes the importance of SMEs to successful search. As he puts it:

Thus, experts in a domain have greater facility and experience related to information-seeking factors specific to the domain and are able to execute the subprocesses of information seeking with speed, confidence, and accuracy.

That is one reason that the Grossman-Cormack glossary builds the role of SMEs into its base definition of technology-assisted review:

A process for Prioritizing or Coding a Collection of electronic Documents using a computerized system that harnesses human judgments of one or more Subject Matter Expert(s) on a smaller set of Documents and then extrapolates those judgments to the remaining Document Collection.

Glossary at pg. 21 defining TAR.
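
That definition, stripped to code, is a small train-then-extrapolate loop. Here is a minimal sketch under my own assumptions (toy documents, toy labels, and scikit-learn as the learning engine; real TAR platforms do far more):

```python
# Sketch of the extrapolation step in the TAR definition above: train on a
# small SME-coded seed set, then score every document in the collection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

collection = ["q3 revenue forecast", "softball signup",
              "revenue restatement memo", "cafeteria menu",
              "audit committee notes"]
seed_docs = ["q3 revenue forecast", "softball signup"]   # SME-coded sample
seed_labels = [1, 0]                                     # 1 = relevant

vec = TfidfVectorizer().fit(collection)
model = LogisticRegression().fit(vec.transform(seed_docs), seed_labels)

# Extrapolate the SME's judgments to the remaining collection.
for doc, p in zip(collection,
                  model.predict_proba(vec.transform(collection))[:, 1]):
    print(f"{p:.2f}  {doc}")
```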

Most SMEs Today Hate CARs
(And They Don’t Much Like High-Tech Drivers Either)

This is an inconvenient truth for vendors. Predictive coding is defined by SMEs. Yet vendors cannot make good SMEs step up to the plate and work with the trainers, the drivers, to teach the machine. All the vendors can do is supply the car and maybe help with the driver. The driver and navigator have to be supplied by the law firm or corporate client. There is no shortage of good SMEs, but almost all of them have never even seen a CAR. They do not like them. They can barely even speak the language of the driver. They don’t much like most of the drivers either. They are damn straight not going to spend two weeks of their lives riding around in one of those newfangled horseless carriages.

That is the reality of where we are now. Also see: Does Technology Leap While Law Creeps? by Brian Dalton, Above the Law. Of course this will change with the generations. But for now, that is the way it is. So vendors work on error minimization. They try to minimize the role of SMEs. That is a good idea anyway because, as mentioned, all human SMEs are inconsistent. I was lucky to be inconsistent only 23% of the time on relevance. But still, there is another obvious solution.

There is another way to deal today with the reluctant SME problem, a way that works right now with today’s predictive coding software. It is a kind of non-robotic surrogate system that I have developed, and I’m sure several other professional drivers have as well. See my CAR page for more information on this. But, in reality, it is one of those things I would just have to show you in a driver education school type setting. I do it frequently. It involves acting on behalf of an SME and dealing with the driver for them. It places the SMEs in their comfort zone, where they just make yes-or-no decisions on the close question documents, although there is obviously more to it than that. It is not nearly as good as the surrogate system in the movie Her, and of course, I’m no movie star, but it works.
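
The mechanical part of that surrogate workflow is easy to show in miniature. A sketch of my own, with made-up scores and a made-up grey-zone band: the driver trains the machine, and only the documents scored near the decision boundary get routed to the SME for those yes-or-no calls.

```python
# Sketch of routing only the "close question" documents to the SME:
# documents whose relevance scores sit near the 0.5 decision boundary.
import numpy as np

def close_questions(scores, band=0.15):
    """Indices of grey-zone docs, nearest the 0.5 boundary first."""
    scores = np.asarray(scores)
    grey = np.flatnonzero(np.abs(scores - 0.5) < band)
    return grey[np.argsort(np.abs(scores[grey] - 0.5))]

scores = [0.97, 0.48, 0.03, 0.55, 0.91, 0.42]     # illustrative model scores
for i in close_questions(scores):
    print(f"doc {i} (score {scores[i]:.2f}) -> SME for a yes/no decision")
```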

My own legal subject matter expertise is, like most lawyers’, fairly limited. I know a lot about a few things, and am a stand-alone SME in those fields. I know a fair amount about many more legal fields, enough to understand the real experts, enough to serve as their surrogate or right hand. Those are the CAR trips I will take.

If I do not know enough about a field of law to understand what the experts are saying, then I cannot serve as a surrogate. I could still drive, of course, but I would refuse to do that out of principle, unless I had a navigator, an SME, who knew what they were doing and where they wanted to go. I would need an SME willing to spend the time in the CAR needed to tell me where to go. I hate a TAR pit as much as the next guy. Plus, at my age and experience, I can drive anywhere I want, in pretty much any CAR I want. That brings us to the final corner of the triangle: the variance in the quality of predictive coding software.

Quality of the CAR Software

I am not going to spend a lot of time on this. No lawyer could be naive enough to think that all of the software is equally good. That is never how it works. It takes time and money to make sophisticated software like this. Anybody can simply bolt open source machine learning code onto their review platform. That does not take much, but the result is a Model-T.

To make active machine learning work really well, to take it to the next level, requires thousands of programming hours. It takes large teams of programmers. It takes years. It takes money. It takes scientists. It takes engineers. It takes legal experts too. It takes many versions and continuous improvements of search and review software. That is how you tell the difference between okay, good, and great software. I am not going to name names, but I will say that Gartner’s so-called Magic Quadrant evaluation of e-discovery software is not too bad. Still, be aware that evaluation of predictive coding is not really their thing, or even a primary factor in rating review software.

It is kind of funny how pretty much everybody wins in the Gartner evaluation. Do you think that’s an accident? I am privately much more critical. Many well-known programs are very late to the predictive coding party. They are way behind. Time will tell if they are ever able to catch up.

Still, these things do change from year to year, as new versions of software are continually released. For some companies you can see real improvements, real investments being made. For others, not so much, and what you do see is often just skin deep. Always be skeptical. And remember, the software CAR is only as good as your driver and navigator.

When it comes to software evaluation, what counts is whether the algorithms can find the documents needed or not. Even the best driver-navigator team in the world can only go so far in a clunker. But give them a great CAR, and they will fly. The software will more than pay for itself in saved reviewer time and the added security of a job well done.

Deja Vu All Over Again

Predictive coding is a great leap forward in search technology. In the long term, predictive coding and other AI-based software will have a bigger impact on the legal profession than the original introduction of computers into the law office did. No large changes like this are without problems. When computers were first brought into law offices they too caused all sorts of problems and had their pitfalls and naysayers. It was a rocky road at first.

I was there and remember it all very well. The Fonz was cool. Disco was still in. I can remember the secretaries yelling many times a day that they needed to reboot. Reboot! Better save. It became a joke, a maddening one. The network was especially problematic. The partner in charge threw up his hands in frustration. The other partners turned the whole project over to me, even though I was a young associate fresh out of law school. They had no choice. I was the only one who could make the damn systems work.

It was a big investment for the firm at the time. Failure was not an option. So I worked late and led my firm’s transition from electric typewriters and carbon paper to personal computers, IBM System 36 minicomputers, word processing, printers, hardwired networks, and incredibly elaborate time and billing software. Remember Manac time and billing in Canada? Remember Displaywriter? How about the eight-inch floppy? It was all new and exciting. Computers in a law office! We were written up in IBM’s small business magazine.

For years I knew what every DOS operating system file was on every computer in the firm. The IBM repairman became a good friend. Yes, it was a lot simpler then. An attorney could practice law and run his firm’s IT department at the same time.

Hey, I was the firm’s IT department for the first decade. Computers, especially word processing and time and billing software, eventually made a huge difference in efficiency and productivity. But at first there were many pitfalls. It took us years to create new systems that worked smoothly in law offices. Business methods always lag way behind new technology. This is clearly shown by MIT’s Erik Brynjolfsson and Andrew McAfee in their bestseller, The Second Machine Age. It typically takes a generation to adjust to major technology breakthroughs. Also see Brynjolfsson’s TED Talk on the subject.

I see parallels between the 1980s and now. The main difference is that legal tech pioneers were very isolated then. The world is much more connected now. We can observe together how, like in the eighties, a whole new level of technology is starting to make its way into the law office. AI-enhanced software, starting with legal search and predictive coding, is something new and revolutionary. It is like the first computers and word processing software of the late 1970s and early 80s.

It will not stop there. Predictive coding will soon expand into information governance. This is the PreSuit project idea that I, and others, are starting to talk about. See, e.g.: Information Governance Initiative. Moreover, many think AI software will soon revolutionize legal practice in a number of other ways, including contract generation and other types of repetitive legal work and analysis. See, e.g.: Rohit Talwar, Rethinking Law Firm Strategies for an Era of Smart Technology (ABA LPT, 2014). The potential impact of supervised learning and other cognitive analytics tools on all industries is vast. See, e.g.: Deloitte’s 2014 paper, Cognitive Analytics (“For the first time in computing history, it’s possible for machines to learn from experience and penetrate the complexity of data to identify associations.”); also see Digital Reasoning software and Paragon Science software. Who knows where it will lead the world, much less the legal profession? Back in the 1980s I could never have imagined the online, Internet-based legal practice that most of us have now.

The only thing we know for sure is that it will not come easy. There will be problems, and the problems will be overcome. It will take creativity and hard work, but it will be done. Easy buttons have always been a myth, especially when dealing with the latest advancements of technology. The benefits are great. The improvements from predictive coding in document review quality and speed are truly astonishing. And it lowers cost too, especially if you avoid the pits. Of course there are issues. Of course there are TAR pits. But they can be avoided and the results are well worth the effort. The truth is we have no choice.

Conclusion

If you want to remain relevant and continue to practice law in the coming decades, then you will have to learn how to use the new AI-enhanced technologies. There is really no choice, other than retirement. Keep up, learn the new ways, or move on. Many lawyers my age are retiring now for just this reason. They have no desire to learn e-discovery, much less predictive coding. That’s fine. That is the honest thing to do. The next generation will learn to do it, just like a few lawyers learned to use computers in the 1980s and 1990s. Stagnation and more of the same is not an option in today’s world. Constant change and education is the new normal. I think that is a good thing. Do you?

Leave a comment. Especially feel free to point out a TAR pit not mentioned here. There are many, I know, and you cannot avoid something you cannot see.

13 Responses to Beware of the TAR Pits! – Part Two

  1. […] Beware of the TAR Pits! Part One and Part Two. […]

  2. Jeremy Pickens says:

    Will be working on a post in the next day or two that addresses some of your concerns, a small step at a time.

  3. […] a recent blog post, Ralph Losey tackles the issue of expertise and TAR algorithm training.  The post, as is […]

  4. […] re-iterated his characterization of using non-experts to train a predictive coding system as garbage in, garbage out, a position which he regards Jeremy Pickens and myself as disagreeing with. Jeremy Pickens, […]

  5. Jeremy Pickens says:

    In my experience as a driver/SME, much depends on where you are in the training cycle. As the training continues, the algorithms eventually do become able to detect and respond to subtle document distinctions. Yes, it takes a while, and you have to know what and when to train on, which is the driver’s skill (for instance, you never train with giant documents), but it does eventually happen. Thus, while it may not matter if you code grey area documents wrong at first, it eventually will matter, unless you do not really care about the distinctions.

    Well, we’ve seen in actual TAR matters, real data, not just on TREC, the case where fine-grained expert distinctions have led to worse outcomes than more broad, inclusive judgments. A point that Webber also makes: http://blog.codalism.com/?p=2084

    But what you seem to be suggesting here is that you actually might be on board with a hybrid model, one in which both experts and non-experts each contribute effort toward training a TAR system. And I do mean training, as in non-expert judgments affecting the output of the TAR itself.

    If that’s the case, then I consider my contributions a success, because we’ve broken through the notion that an expert and ONLY an expert may be used to drive the TAR, at all times. We’ve at least allowed the possibility that non-experts may be employed not just to check the static output of a TAR system after it has finished being driven, but to have a hand in the driving of that system, while it is still moving.

    And there are a lot of possibilities when it comes to workflows that integrate both experts and non-experts. I explicitly said in my blog post that I wasn’t going to get into those workflows, but perhaps now I should write a followup. Because expert-nonexpert hybrid workflows are something that we’ve been working on for quite some time now. And what we’ve found is that when you do have an intelligent workflow around this stuff, then you can get equal, if not better, precision and recall by combining experts and non-experts, but at a fraction of the cost. Not to mention make the whole process much faster, because non-experts can work in parallel with the expert during the training itself (not just after training is done!), and drastically cut down elapsed clock time as well.

    The TREC overturn documents Jeremy tested on, the ones he called wrong documents, were in fact grey area documents, that is, close questions. Attorneys disagreed on whether they were relevant, which is why they were overturned on appeal.)

    So again, I think we’re making progress. Earlier in this discussion, the only acknowledgement made was “GIGO”, and that it didn’t seem possible that wrong decisions could lead to right documents. What the expert said was right, and if there was disagreement, it was garbage in, and would only be garbage out. Now, I think we’ve reached the understanding that it’s a spectrum. A spectrum not only on the output of a TAR system, but also a spectrum on the input.

    The acknowledgment of the spectrum-ness of the problem gets us away from the binary thinking of “GIGO”, and is what allows for the possibility of the hybrid expert-nonexpert training.. hybrid DRIVING.. workflows that I’ve been talking about.

    • Ralph Losey says:

      One important quibble. I don’t see it as between experts and non-experts. In TREC that year we were always talking about lawyer review. They were experts and had training on the issues. We are talking about degree of expertise.

      Under all ethical rules in every state (as far as I know) an attorney of record, who should be a bona fide SME on the issues in the case, is required to supervise the work of the contract lawyers. They are trained and supervised by an SME and thus become little experts themselves. They just have less expertise.

      With that distinction, I am on board with a hybrid model where lesser experts can be used at first. Still, when this happens, the real SME is deprived of the opportunity to do email psychoanalysis. By looking at thousands of emails, most of them relevant, you can really get inside the heads of the key players. You start to know who they are. That is helpful for an SME and power user (driver) to know. It helps in the overall trial preparation and document search. So you lose that, unless you have a good way of transferring such insights from the contract lawyers (assuming they had the skill for such email analysis).

      Final point: the tendency of experts to draw tight lines on relevance can be tempered by experienced searchers, in other words, the drivers.

  6. Jeremy Pickens says:

    And I suppose I should mention as well that it’s not just a matter of creating an expert-nonexpert hybrid workflow; it’s a matter of having underlying technology that can support that workflow. Not all CAR/TAR systems are so designed. So maybe when others say that you absolutely have to have an expert to train that system, they’re right.. for that system.

    I’ve found more total cost savings, though, more overall value, in the possibilities enabled by hybrid driver systems. Not just hybrid CARs, but hybrid DRIVERS.

  7. […] There is an ongoing debate about whether training requires the examples to be provided by subject matter experts (SMEs) to be effective. Our research suggests that review teams assisted by SMEs are just as effective as SMEs alone. See: Subject Matter Experts: What Role Should They Play in TAR 2.0 Training?  Others disagree. See, for example, Ralph Losey’s posts about the need for SME’s to make the process effective. […]
