TAR Course: 14th Class

Fourteenth Class: Steps Four, Five and Six – Iterate

At this point we have already covered the nine insights and the first three steps of our workflow. In this Class we cover steps four, five and six of our eight-step method of electronic document review. These are the steps that iterate and are the essence of predictive coding.

Introductory Video on Steps 4-5-6 of Predictive Coding 4.0

___

___

History of Steps Four, Five and Six: Training Select, AI Document Ranking and Multimodal Review

These are the three iterated steps that are the heart of our active machine learning process. The description of steps four, five and six constitutes the most significant change from version 3.0. We have changed the order of the iterated steps by making a new step four – Training Select. We have also somewhat changed the descriptions in Predictive Coding Version 4.0. This was all done to better clarify and simplify our standard workflow; our old version 3.0 description now seems somewhat confusing. As Steve Jobs famously said:

You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.

In our case it can help you to move mountains of data by proper use of active machine learning.


In version 3.0 we called these three iterated steps: AI Predictive Ranking (step 4), Document Review (step 5), and Hybrid Active Training (step 6). The AI Predictive Ranking step, now called AI Document Ranking, was moved from step four to step five. This is to clarify that the task of selecting documents for training always comes before the training itself. We also made Training Selection a separate step to emphasize the importance of this task. This is something that we have come to appreciate more fully since our TREC experiments.

The AI Document Ranking step is where the artificial intelligence does its work. It is where the software ranks all of the documents according to the training documents selected by the human trainers. It is the unique AI step. The black box. There are no human efforts in step five at all. All we do is wait on the machine analysis. When it is done, all documents have been ranked (first time) or re-ranked (all training rounds after the first).

We slightly tweaked the name here to be AI Document Ranking, instead of AI Predictive Ranking, as that is, we think, a clearer description of what the machine is doing. It is ranking all documents according to probability of relevance, or whatever other binary training you are doing. For instance, we usually also rank all documents according to probable privilege and according to probable high relevance.

Our biggest change in version 4.0 was to make this AI step number five, instead of four, and, as mentioned, to add a new step four called Training Select. The new step four – Training Select – is the human function of deciding what documents to use to train the machine. (This used to be included in iterated step six, which was, we now see, somewhat confusing.)

Step Four – Training Select

Unlike other predictive coding methods, we empower humans to make this selection in step four, Training Select. We do not, like some methods, create automatic rules for selection of training documents. For example, the Grossman Cormack CAL method (their trademark) only uses a predetermined number of the top ranked documents for training. In our method, we could also select these top ranked documents, or we could include other documents we have found to be relevant through other methods.

The freedom and choices that our method provides to the humans in charge are another reason our method is called Hybrid, in that it features natural human intelligence. It is not all machine controlled. In Predictive Coding 4.0 we use artificial intelligence to enhance and augment our own natural intelligence. The machine is our partner, our friend, not our competitor or enemy. We tell our tool, our computer algorithm, what documents to train on in step four, and when, and the machine implements that instruction in step five.

Typically in step four, Training Select, we will include as training documents all documents that we have previously coded as relevant, but not always. Sometimes, for instance, we may defer including very long relevant documents in the training, especially large spreadsheets, until the AI has a better grasp of our relevance intent. Skilled searchers rarely use every coded document for training, but sometimes do. The same reasoning may apply to excluding a very short message, such as a one-word message saying “call,” although we are more likely to leave that in. This selection process is where the art and experience of search come in. The concern is to avoid over-training on any one document type and thus lowering recall and missing a key black-swan document.

Also, we now rarely include all irrelevant documents in training, but instead try to have a more balanced approach. We do not want to use a vastly higher number of irrelevant documents for training than relevant documents. Otherwise, we tend to see incorrectly low rankings across the board. The 50% probable relevant dividing line can be an inaccurate indicator unless we take such balancing steps. We also find the balanced approach allows the machine to learn faster. Information scientists we have spoken with on this topic say this is typical with most types of active machine learning algorithms. It is not unique to our Mr. EDR, an active machine learning algorithm that uses a logistic regression method.
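The balancing idea is easy to illustrate. Below is a minimal sketch in Python, not Mr. EDR’s actual implementation, of one way to defer very long documents and cap the number of irrelevant training examples so they never vastly outnumber the relevant ones. The function name, field names and default ratio are all hypothetical, invented for this example.

```python
# Minimal illustrative sketch of balanced training selection (step four).
# Not Mr. EDR's actual code. Each coded document is assumed to be a dict
# like {"id": ..., "text": ..., "relevant": True or False}.
import random


def select_balanced_training(coded_docs, max_irrelevant_ratio=2.0,
                             defer_over_tokens=None, seed=42):
    """Keep the relevant documents (optionally deferring very long ones, such
    as large spreadsheets, until later rounds), and cap the irrelevant
    documents so they do not vastly outnumber the relevant ones."""
    relevant = [d for d in coded_docs if d["relevant"]]
    if defer_over_tokens is not None:
        relevant = [d for d in relevant
                    if len(d["text"].split()) <= defer_over_tokens]
    irrelevant = [d for d in coded_docs if not d["relevant"]]
    cap = int(len(relevant) * max_irrelevant_ratio)
    random.Random(seed).shuffle(irrelevant)
    return relevant + irrelevant[:cap]
```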

Step Six: Multimodal Review

The sixth step of Multimodal Review is where we find new relevant or irrelevant documents for the next round of training. This is the step where most of the actual document review is done, where the documents are seen and classified by human reviewers. It is like step two, multimodal ECA. But now in step six we can also perform ranking searches, such as find all documents ranked 90% probable relevant or higher. We usually rely heavily on such high-ranking searches.

We then human-review all of the documents, which can often include very fast skimming and bulk coding. Some reviewers can become exceptionally fast. Losey is pretty fast too, but errs on the side of completeness and caution. Just another quality control measure.

In addition to these ranked searches for new documents to review and code, we can use any other type of search we deem appropriate. This is the multimodal approach. Typically keyword and concept searches are used less often after the first round of training, but similarity searches of all kinds are often used throughout a project to supplement ranking based searches. Sometimes we may even use a linear search, expert manual review at the base of the search pyramid, if a new hot document is found. For instance, it might be helpful to see all communications that a key witness had on a certain day. The two-word, stand-alone “call me” email, when seen in context, can sometimes be invaluable to proving your case.
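As a small illustration of that kind of supplemental, non-ranking search, here is a hedged Python sketch that pulls every message a named custodian sent or received on a given day so it can be read linearly, in context. The field names (from, to, sent) are assumptions for this example, not any review platform’s actual schema.

```python
# Illustrative sketch only; the field names ("from", "to", "sent") are
# assumptions, not any particular review platform's schema.
from datetime import date


def communications_on_day(docs, custodian, day):
    """Return all messages to or from a custodian on a given day,
    sorted by time, for a quick linear read in context."""
    hits = [d for d in docs
            if d["sent"].date() == day
            and (custodian in d["from"] or custodian in d["to"])]
    return sorted(hits, key=lambda d: d["sent"])


# Example: everything a key witness sent or received on a hypothetical day.
# day_docs = communications_on_day(corpus, "Key Witness", date(2015, 6, 11))
```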


Step six is much like step two, Multimodal ECA, except that now new types of document ranking search are possible. Since the documents are now all probability ranked in step five, you can use this ranking to select documents for review in step six. We like to focus on the documents with a high ranking of probable relevance. The research of Professors Cormack and Grossman has shown that this can be a very effective method to continuously find and train relevant documents. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 9. Also see Latest Grossman and Cormack Study Proves Folly of Using Random Search for Machine Training – Parts One, Two, Three and Four. Another popular method, also tested and reported on by Grossman and Cormack, is to select mid-ranked documents, the ones the computer is uncertain about. They are less fond of that method. We agree, but we will sometimes use it too.

Hybrid Multimodal

The e-Discovery team’s preferred active learning process in Predictive Coding 4.0 remains a four-fold multimodal process, just as it was in version 3.0. How you mix and match the various search methods is a matter of personal preference and educated response to the data searched. Here are my team’s current preferences for most projects; a minimal code sketch illustrating these selection modes follows the list. Again, the weight for each depends upon the project. The only constant is that more than one method is always used.

1. High Ranked Documents. My team will almost always look to see what the highest unreviewed ranked documents are after AI Ranking, step five. We agree with Cormack and Grossman that this is a very effective search. We may review them on a document by document basis, or only by spot-checking some of them. In the latter spot-checking scenario, a quick review of a certain probable relevant range, say all documents ranked between 95% and 99.9% (Mr. EDR has no 100%), may show that they all seem obviously relevant. We may then bulk code all documents in that range as relevant without actually reviewing them. This is a very powerful and effective method with Mr. EDR, and other software, so long as care is used not to over-extend the probability range. In other situations, we may only select the 99%+ probable relevant set for checking and bulk coding with limited review. The safe range typically changes as the review evolves and your latest conception of relevance is successfully imprinted on the computer.

Note that when we say a document is selected without individual review – meaning no human actually read the document – that is only for purposes of training selection and identifying relevant documents for production. We sometimes call that first pass review. In real world projects for clients we always review each document found in steps four, five and six, that has not been previously reviewed by a human, before we produce the document. (This is not true in our academic or scientific studies for TREC or EDI/Oracle.) That takes place in the last step – step eight, Productions. To be clear, in legal practice we do not produce without human verification and review of each and every document produced. The stakes if an error is made are simply too high.

In our cases the most enjoyable part of the review project comes when we see from this search method that Mr. EDR has understood our training and has started to go beyond us. He starts to see patterns that we cannot. He amazingly unearths documents that our team never thought to look for. The relevant documents he finds are sometimes dissimilar to any others found. They do not have the same key words, or even the same known concepts. Still, Mr. EDR sees patterns in these documents that we do not. He finds the hidden gems of relevance, even outliers and black swans. That is when we think of Mr. EDR as going into superhero mode. At least that is the way my e-Discovery Team likes to talk about him.

By the end of most projects Mr. EDR attains a much higher intelligence and skill level than our own (at least on the task of finding the relevant evidence in the document collection). He is always lightning fast and inexhaustible, even untrained, but by the end of his education, he becomes a genius. Definitely smarter and faster than any human as to this one production review task. Mr. EDR in that kind of superhero mode is what makes Predictive Coding so much fun. See Why I Love Predictive Coding.

Watching AI with higher intelligence than your own, intelligence which you created by your training, is exciting. More than that, the AI you created empowers you to do things that would have been impossible before, absurd even. For instance, using Mr. EDR, my e-Discovery Team of three attorneys was able to do 30 review projects and classify 16,576,820 documents in 45 days. See TREC 2015 experiment summary at Mr. EDR. This was a very gratifying feeling of empowerment, speed and augmentation of our own abilities. The high-AI experience comes through very clearly in the ranking of Mr. EDR near the end of the project, or really anytime before that, when he catches on to what you want and starts to find the hidden gems. I urge you all to give Predictive Coding a try so you can have this same kind of advanced AI hybrid excitement.

2. Mid-Ranked Uncertain Documents. We sometimes choose to allow the machine, in our case Mr. EDR, to select the documents for review in the sense that we review some of the mid-range ranked documents. These are documents where the software classifier is uncertain of the correct classification. They are usually in the 40% to 60% probable relevant range. Human guidance on these documents as to their relevance will sometimes help the machine to learn by adding diversity to the documents presented for review. This in turn also helps to locate outliers of a type the initial judgmental searches in steps two and six may have missed. If a project is going well, we may not need to use this type of search at all.

3. Random and Judgmental Sampling. We may also select some documents at random, either by proper computer random sampling or, more often, by informal random selection, including spot-checking. The latter is sometimes called judgmental sampling. These sampling techniques can help maximize recall by avoiding a premature focus on the relevant documents initially retrieved. Random samples taken in steps three and six are typically also all included for training, and, of course, are always very carefully reviewed. The use of random selection for training purposes alone was minimized in Predictive Coding 3.0 and remains of lower importance in version 4.0. With today’s software, and using the multimodal method, it is not necessary. We did all of our TREC research without random sampling. We very rarely see the high-ranking searches become myopic without it. Plus, our multimodal approach guards against such over-training throughout the process.

4. Ad Hoc Searches Not Based on Document Ranking. Most of the time we supplement the machine’s ranking-based searches with additional search methods using non-AI based analytics. The particular search supplements we use depend on the relevant documents we find in the ranked document searches. The searches may include some linear review of selected custodians or dates, parametric Boolean keyword searches, similarity searches of all kinds, and concept searches. We use every search tool available to us. Again, we call that a multimodal approach.
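As promised above, here is a minimal Python sketch of how these selection modes can be combined into one candidate-gathering routine. It is only an illustration of the idea under assumed data structures (a list of (doc_id, probability) pairs coming out of step five), not how Mr. EDR or any other platform actually implements it, and the batch sizes are arbitrary.

```python
# Hedged sketch of multimodal selection of the next review batch (step six).
# Assumes `ranked` is a list of (doc_id, probability) pairs from step five.
import random


def next_review_batch(ranked, reviewed_ids, n_high=200, n_uncertain=50,
                      n_random=50, seed=1):
    unreviewed = [(doc_id, p) for doc_id, p in ranked
                  if doc_id not in reviewed_ids]

    # 1. High ranked documents: most probable relevant first.
    high = sorted(unreviewed, key=lambda x: x[1], reverse=True)[:n_high]

    # 2. Mid-ranked uncertain documents (roughly 40% to 60% probable relevant).
    uncertain = [x for x in unreviewed if 0.40 <= x[1] <= 0.60][:n_uncertain]

    # 3. A small random or judgmental sample to guard against tunnel vision.
    rng = random.Random(seed)
    rand = rng.sample(unreviewed, min(n_random, len(unreviewed)))

    # 4. Ad hoc searches (keyword, similarity, custodian/date) would be
    #    merged in here as well; omitted for brevity.
    return {doc_id for doc_id, _ in high + uncertain + rand}
```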

More on Step Six – Multimodal Review

As seen, all types of search may be conducted in step six to find and batch out documents for human review and machine training. This step thus parallels step two, ECA, except that documents are also found by ranking of probable relevance. This is not yet possible in step two because step five, AI Document Ranking, has not yet occurred. Most of our time in a review project is spent in step six.
It is important to emphasize that although we do searches in step six, steps six and eight are the steps where most of the actual document review is also done, where the documents are seen and classified by human reviewers. Search is used in step six to find the documents that human reviewers should review next. In my experience (and timed tests) the human document review can take as little as one second per document, assuming your software is good and fast, and it is an obvious document, to as long as a half-hour. The lengthy time to review a document is very rare and only occurs where you have to fast-read a long document to be sure of its classification.

Step six is the human time intensive part of Predictive Coding 4.0. It can take most of the time in a project, although when our top team members do a review, such as in TREC, we often spend more than half of the time in the other steps, sometimes considerably more. This is especially true when there are only a few target documents to be found, in other words, when the prevalence is low and the production numbers are too. In real world document reviews, where tens of thousands of documents may be relevant and produced, step six necessarily takes the longest time and involves batches of documents in the thousands, not hundreds. The data size and prevalence of a project has a great impact on the batch size and overall duration of this step.

Depending on the classifications during step six Multimodal Review, a document is either set for production, if relevant and not-privileged, or, if coded irrelevant, it is not set for production. If relevant and privileged, then it is logged but not produced. If relevant, not privileged, but confidential for some reason, then it is either redacted and/or specially labeled before production. The special labeling performed is typically to prominently affix the word CONFIDENTIAL on the TIFF image production, or the phrase CONFIDENTIAL – ATTORNEYS EYES ONLY. The actual wording of the legends depends upon the parties’ confidentiality agreement or court order. This is all considered second pass work, after determination of relevance in the first pass, and is part of step eight, not six.
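The routing logic described in the last paragraph reduces to a few conditionals. Here is a hedged sketch; the field names and return labels are invented for this example, and in practice the decision is made by humans in second pass review, not by code.

```python
# Illustrative sketch of second-pass disposition routing (step eight).
# Field names and labels are assumptions for this example only.
def disposition(doc):
    """Decide what happens to a document after step six coding."""
    if not doc["relevant"]:
        return "withhold"              # irrelevant: not produced
    if doc["privileged"]:
        return "log_on_privilege_log"  # relevant but privileged: logged, not produced
    if doc.get("confidential"):
        # Redact and/or stamp the production image per the governing
        # confidentiality agreement or court order.
        return "produce_with_confidential_legend"
    return "produce"
```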

When many redactions are required, the time to review a relevant document again in step eight can sometimes go way up. The same goes for double and triple checking of privileged documents that are sometimes found in document collections in large numbers. In our TREC and Oracle experiments redactions and privilege double-checking were not required. Again, the time-consuming redactions are usually deferred to step eight – Productions, as part of second pass review to quality control productions. The equally time-consuming privilege double-checking efforts can also be deferred to step seven – Quality Assurance, and again for a third check in step eight.

When reviewing a document not already manually classified, the reviewer is usually presented with a document that the expert searcher running the project has determined is probably relevant. Typically this means that it has higher than a 50% probable relevance ranking. The reviewer may, or may not know the ranking. Whether you disclose that to a reviewer depends on a number of factors. Since we usually only use highly skilled reviewers, we usually trust them with disclosure. But sometimes you may not want to disclose the ranking.

During the review many documents predicted to be relevant will not be. The reviewers will code them correctly, as they see them. Our reviewers can and do disagree with and overrule the computer’s predictions. The “Sorry Dave” phrase of the HAL 9000 computer in 2001: A Space Odyssey is not possible.

If a reviewer is in doubt, they consult the SME team. Furthermore, special quality controls in the form of second reviews may be imposed on Man-Machine disagreements (the computer says a document should be relevant, but the human reviewer disagrees, and vice versa – which is what we call overturns, although the phrase is used differently by others). The overturn documents typically involve close questions and the ultimate results of the resolved conflicts are then used in the next round of training.

Sometimes the Machine will predict that a document is relevant, maybe even with 99.9% certainty, even though you have already coded the document as irrelevant and told the Machine to train on it as irrelevant. The Machine does not care about your feelings! Or your authority as chief SME. It considers all of the input, all of the documents you input in step four. If the cold, hard logic of its algorithms tells it that a document should be relevant, that is what it will report, in spite of how the document has already been coded. This is an excellent quality control tool and one reason I like to see overturns from time to time, although they usually only appear when training is just getting started.

I cannot tell you how impressed I was when an overturn from irrelevant to relevant first happened to me. I was skeptical when I saw the machine had said a document was relevant that I had already read and marked irrelevant, but I went ahead and reread the long document anyway, this time more carefully. Sure enough, I had missed a paragraph near the end that made the document relevant. That was a Eureka moment for me. I have been a strong proponent of predictive coding ever since.

Software does not get tired like we do. If the software is good, it reads the whole document and is not front-loaded like we humans usually are. That does not mean Mr. EDR is always right. He is not. Most of the time we reaffirm the original coding, but not without a careful double-check. Usually we can see where the algorithm went wrong. Sometimes that influences our next iteration of step four, selection of training documents. That is how double-loop training works and why we advocate IST instead of simple Continuous Training.

Prediction error type corrections can be the focus of special searches in step six that look for possible overturns. Most quality version 4.0 software, such as Mr. EDR, has built-in search functions designed to locate all such conflicts between document ranking and classification. Reviewers then review and correct the computer errors by a variety of methods, or change their own prior decisions. This often requires SME team involvement, but only very rarely the senior level SME.
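Conceptually, these conflict searches just compare the human coding with the machine’s latest ranking. A minimal sketch under assumed data structures follows; it is not Mr. EDR’s built-in conflict search, just an illustration of the idea.

```python
# Hedged sketch of an "overturn" search: documents where the human coding and
# the machine's latest probable relevance ranking strongly disagree.
def find_overturn_candidates(coded, probabilities, threshold=0.90):
    """coded: dict of doc_id -> True/False human relevance call.
    probabilities: dict of doc_id -> latest probable relevance (0.0 to 1.0)."""
    conflicts = []
    for doc_id, is_relevant in coded.items():
        p = probabilities.get(doc_id)
        if p is None:
            continue
        if is_relevant and p <= 1.0 - threshold:
            conflicts.append((doc_id, p, "human_relevant_machine_low"))
        elif not is_relevant and p >= threshold:
            conflicts.append((doc_id, p, "human_irrelevant_machine_high"))
    return conflicts
```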

The predictive coding software learns from all of the corrections to its prior predictive rankings. Steps 4, 5 and 6 then repeat as shown in the diagram. This iterative process is a positive feedback loop that continues until the computer predictions are accurate enough to satisfy the proportional demands of the case. In almost all cases that means you have found more than enough of the relevant documents needed to fairly decide the case. In many cases it is far better than that. It is routine for us to attain recall levels of 90% or higher, if not time constrained by proportionality, or dealing with extremely complicated data, or working with poorly defined relevance.
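The shape of that feedback loop can be sketched in a few lines of Python. Every helper passed in below (select_training, rank_corpus, review_batch, stop_decision) is a placeholder for the human or machine work in steps four, five and six and for the Stop decision, not a real API.

```python
# Hedged skeleton of the iterated steps 4-5-6; the callables passed in are
# placeholders for human and machine work, not real APIs.
def iterate_until_stop(corpus, coded, select_training, rank_corpus,
                       review_batch, stop_decision):
    round_no = 0
    probabilities = {}
    while True:
        round_no += 1
        training = select_training(coded)              # step 4: humans pick training docs
        probabilities = rank_corpus(corpus, training)  # step 5: machine ranks everything
        coded.update(review_batch(corpus, probabilities, coded))  # step 6: human review
        if stop_decision(coded, probabilities, round_no):
            break  # proceed to step seven, quality assurance
    return coded, probabilities
```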

General Note on Ease of Version 4.0 Methodology and Attorney Empowerment

The machine training process for document review has become easier over the last few years as we have tinkered with and refined the methods. (Tinkering is the original and still only true meaning of hacking. See: HackerLaw.org)

At this point of the predictive coding life cycle it is, for example, easier to learn how to do predictive coding correctly than to learn how to do a trial – bench or jury – correctly. (Just a few years ago that was not true.) Interestingly, the most effective instruction method for both legal tasks is similar – second chair apprenticeship, watch and learn. It is the way complex legal practices have always been taught. My team can teach it to any smart tech-lawyer by having them second chair a couple of projects.

It is interesting to note that medicine uses the same method to teach surgeons how to do complex robotic surgery, with a da Vinci surgical system, or the like. Whenever a master surgeon operates with robotics, there are always several doctors watching and assisting, more than are needed. In this photo they are the ones around the patient. The master surgeon who is actually controlling the tiny knives in the patient is the guy on the far left sitting down with his head in the machine. He is looking at a magnified video picture of what is happening inside the patient’s body and moving the tiny knives around with a joystick.

The hybrid human-robot system augments the human surgeon’s abilities. The surgeon has his hands on the wheel at all times. The other doctors may watch dozens, and if they are younger, maybe even hundreds of surgeries before they are allowed to take control of the joystick and do the hard stuff themselves. The predictive coding steps four, five and six are far easier than this; besides, if you screw up, nobody dies.

More on Step Five – AI Document Ranking

More discussion on step five may help clarify all three iterated steps. Again, step five is the AI Document Ranking step where the machine takes over and does all of the work. We have also called this the Auto Coding Run because this is where the software’s predictive coding calculations are performed. The software we use is Kroll Ontrack’s Mr. EDR. In the fifth step the software applies all of the training documents we selected in step four to sort the data corpus. In step five the human trainers can take a coffee break while Mr. EDR ranks all of the documents according to probable relevance or other binary choices.

The first time the document ranking algorithm executes is sometimes called the seed set run. The first repetition of the ranking step five is known as the second round of training, the next, the third round, etc. These iterations continue until the training is complete within the proportional constraints of the case. At that point the attorney in charge of the search may declare the search complete and ready for the next quality assurance test in Step Seven. That is called the Stop decision.

It is important to understand that this entire eight-step workflow diagram is just a linear two-dimensional representation of Predictive Coding 4.0 for teaching purposes. These step descriptions are also a simplified explanation. Step Five can take place just as soon as a single document has been coded. You could have continuous, ongoing machine training at any time that the humans in charge decide to do so. That is the meaning of our team’s IST (Intelligently Spaced Training), as opposed to Grossman and Cormack’s trademarked CAL method, where the training always goes on without any human choice. This was discussed at length in Part Two of this article.

We space the training times ourselves to improve our communication and understanding of the software ranking. It helps us to have a better intuitive grasp of the machine processes. It allows us to observe for ourselves how a particular document, or usually a particular group of documents, impacts the overall ranking. This is an important part of the Hybrid aspects of the Predictive Coding 4.0 Hybrid IST Multimodal Method. We like to be in control and to tell the machine exactly when and if to train, not the other way around. We like to understand what is happening and not just delegate everything to the machine. That is one reason we like to say that although we promote a balanced hybrid human-machine process, we are pro-human and tip the scales in our favor.

As stated, step five in the eight-step workflow is a purely algorithmic function. The ranking of a few million documents may take as long as a half hour, maybe more, depending on the complexity, the number of documents, software and other factors. Or it might just take a few minutes. This depends on the circumstances and tasks presented.

All documents selected for training in step four are included in step five computer processing. The software studies the documents marked for training, and then analyzes all of the text data uploaded onto the review platform. It then ranks all of the documents according to probable relevance (and, as mentioned, according to other binary categories too, such as Highly Relevant and Privilege, and does all of these categories at the same time, but for simplicity purposes here we will just speak of the relevance rankings). It essentially assigns a probable relevance value from 0.01% to 99.9% to each document in the corpus. (Note, some software uses different ranking values, but this is essentially what it is doing.) A value of 99.9% represents the highest probability that the document matches the category trained, such as relevant, or highly relevant, or privileged. A value of 0.01% means no likelihood of matching. A probability ranking of 50% represents equal likelihood, unless there has been careless over-training on irrelevant documents or other errors have been made. In the middle probability rankings the machine is said to be uncertain as to the document classification.
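In generic terms, step five is just scoring every document in the corpus with a classifier fitted on the step four training set. Here is a hedged sketch using scikit-learn’s logistic regression as a stand-in for Mr. EDR’s proprietary algorithm; the feature settings are arbitrary choices for illustration.

```python
# Hedged sketch of the step-five ranking run, using scikit-learn's logistic
# regression as a stand-in for Mr. EDR's proprietary algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def rank_corpus(corpus_texts, training_texts, training_labels):
    """Return (index, probability) pairs for every document in the corpus,
    sorted from most to least probable relevant.  training_labels are 1 for
    relevant and 0 for irrelevant."""
    vectorizer = TfidfVectorizer(max_features=50000)
    x_train = vectorizer.fit_transform(training_texts)
    model = LogisticRegression(max_iter=1000).fit(x_train, training_labels)

    x_all = vectorizer.transform(corpus_texts)
    probs = model.predict_proba(x_all)[:, 1]  # probability of the relevant class
    return sorted(enumerate(probs), key=lambda x: x[1], reverse=True)
```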

The first few times the AI-Ranking step is run the software predictions as to document categorization are often wrong, sometimes wildly so. It depends on the kind of search and data involved and on the number of documents already classified and included for training. That is why spot-checking and further training are always needed for predictive coding to work properly. That is why predictive coding is always an iterative process.

Concluding Case Example

We conclude this class with a two-part video where Ralph illustrates the 4-5-6 steps taken on a particular case. The same case was discussed in the last class. Just above his video is the screen shot of the various document directories that he named to record his efforts up to the eleventh round of training. Each directory contains the actual documents referenced and details of the search itself, what you could call the search metadata. (Those are not shown in the screen shot.) Each directory may also contain multiple sub-directories of documents that illustrate other searches. The directories can further nest as per standard computer protocols.
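One hedged way to implement that kind of self-documenting folder protocol is sketched below. The naming pattern is invented for illustration and is not the exact scheme shown in the screen shot.

```python
# Hedged sketch of a self-documenting search-folder protocol; the naming
# pattern is invented for illustration, not the exact scheme in the screen shot.
from pathlib import Path


def make_search_folder(root, round_no, search_description):
    """Create a directory like 'Round-07_HighRank-95-plus-relevant' under the
    project root; the documents found, and any sub-searches, go inside it."""
    safe = search_description.replace(" ", "-")
    folder = Path(root) / f"Round-{round_no:02d}_{safe}"
    folder.mkdir(parents=True, exist_ok=True)
    return folder


# Example: make_search_folder("ProjectX", 7, "HighRank 95 plus relevant")
```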

This kind of directory naming protocol serves to document the key procedures you followed. Having this information can make it far easier for you to provide detailed testimony later. It is always possible that you may need to testify as both an expert and fact witness to defend your actions as reasonable.

The folders and their contents provide an outline of your work-flow and a detailed record. Each directory contains the electronic files you retrieved by each procedure. This makes it easier to refresh your memory and quickly access the underlying documents found from different searches. You can, if need be, also contemporaneously create a detailed memorandum that further documents each step. We sometimes do so in special cases, but here we only used this far-quicker folder naming protocol and did not also create a memorandum to file. Instead of memorandums, which can become tedious and expensive, we are moving to further documentation in the form of email reports to the search team, typically at least one per training round. Frequently a search project is documented by hundreds of emails.

Using this protocol creates a running record of your activities and takes comparatively little time. Technically this is work-product, but you will probably want to make a limited waiver so that you can disclose for evidentiary purposes as required by rules and case law. Again, that is where your testimony or affidavit may be required.

It is important for quality control purposes to keep a history of your work like this, even if it is just in outline form. The folder histories we create are fairly short and abbreviated, as you can see, but they were originally far less detailed than this. We have found that this level of detail of work description, combined with the actual documents, makes it easier to analyze your efforts and decide what to do next. Yes, it takes some time to do this, and slows your work a little bit, but it is well worth the effort for the gains in quality that you will achieve.

As an added benefit, this kind of history also strengthens the defensibility of your efforts. Although we are almost never challenged, if we were, we could easily recreate and explain our actions. That kind of evidence could be required in exceptional circumstances to prove that reasonable efforts were made under Rule 26(g) Federal Rules of Civil Procedure or other rules and law. As you might imagine, Losey can testify for hours, if need be, on predictive coding and the 4.0 methods that were followed in a particular case in which he was involved. He would provide this testimony by relying on his memory, refreshed by contemporaneous notes like you see in the screen shot below. You can do the same.

____

____

Go on to the Fifteenth Class.

Or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: See if you can find any articles on the art of selection of training documents. Do any of them involve legal technology? Last time we checked we were still pioneering new ground. If you find anything, please let us know in a Comment below.

EXERCISES: Aside from the method discussed in the last videos, think of ways that you could train a computer to make binary (yes/no) classifications. Assume that as the project progresses the binary classifications become based on ever more subtle distinctions. How would that impact your training methods? What is the importance of double-loop training?

Think from another perspective — evolution, history, economics, medical, psychological, pedantic, scientific or religious — of situations where concept shift has been observed. Think of examples where concept shift has been accepted as a positive norm. Think of other situations where it has been opposed. Why?

_

e-Discovery Team LLC COPYRIGHT 2018

ALL RIGHTS RESERVED

_

 
