Predictive coding algorithms are designed to leverage the expertise of human input, preferably from attorneys who are subject matter experts on the case at hand. The classification of one document by an attorney results in a recommended classification of hundreds, if not thousands, of other documents that the computer identifies as similar. The computer examines the entire data set, the corpus, and predicts the probability of each document therein fitting within the same classification. The expert then tests and corrects the predictions in an iterative process. The cost savings in this approach are obvious, particularly considering the high expense of attorney review time. Reviewers can break through the current linear review speed barrier of approximately 100 files per hour to 1,000, or even 10,000, files per hour. These supersonic review speeds are what it takes to handle e-discovery today in an effective and economical manner.
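The economics of that speed difference can be sketched with simple arithmetic. All figures below (corpus size, review rates, and billing rate) are hypothetical assumptions chosen only for illustration:

```python
# Illustrative cost comparison of linear vs. computer-assisted review.
# The corpus size, review speeds, and billing rate are all hypothetical.
corpus_size = 1_000_000   # documents to review (assumed)
hourly_rate = 200.0       # assumed attorney billing rate, USD per hour

def review_cost(docs_per_hour: float) -> float:
    """Total attorney cost to review the whole corpus at a given speed."""
    return corpus_size / docs_per_hour * hourly_rate

linear_cost = review_cost(100)       # traditional linear review
assisted_cost = review_cost(10_000)  # computer-assisted review
print(f"Linear:   ${linear_cost:,.0f}")    # $2,000,000
print(f"Assisted: ${assisted_cost:,.0f}")  # $20,000
```

Even if the assumed rates are off by an order of magnitude, the gap between the two approaches remains decisive.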
Predictive coding software tracks and learns from human expert decisions in a feedback loop. This increases accuracy, allowing greater reliance and faster culling of irrelevant (or unresponsive) ESI. Only the ESI predicted to be relevant then needs to be reviewed by attorneys for final quality control verification before production. The extent and degree of the final human review for quality control varies according to two considerations: the proportionality constraints related to the value and importance of the case, and the parties' risk tolerance for disclosure of confidential or privileged information.
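To make the idea of probabilistic classification concrete, here is a toy sketch of a Bayesian relevance scorer. This is not any vendor's actual algorithm, and real predictive coding products are far more sophisticated, but the underlying idea of learning word probabilities from attorney-coded examples is similar:

```python
import math
from collections import Counter

class NaiveBayesRelevance:
    """Toy Bayesian relevance scorer: a sketch of the probabilistic
    classification predictive coding tools perform, not a real product."""

    def __init__(self):
        self.word_counts = {"relevant": Counter(), "irrelevant": Counter()}
        self.doc_counts = {"relevant": 0, "irrelevant": 0}
        self.vocab = set()

    def train(self, text: str, label: str) -> None:
        # One attorney coding decision becomes training data for the model.
        words = text.lower().split()
        self.doc_counts[label] += 1
        self.word_counts[label].update(words)
        self.vocab.update(words)

    def prob_relevant(self, text: str) -> float:
        # Bayes' Theorem with Laplace smoothing, in log space for stability.
        log_scores = {}
        total_docs = sum(self.doc_counts.values())
        for label in ("relevant", "irrelevant"):
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                score += math.log((self.word_counts[label][word] + 1) / denom)
            log_scores[label] = score
        m = max(log_scores.values())
        odds = {k: math.exp(v - m) for k, v in log_scores.items()}
        return odds["relevant"] / sum(odds.values())

clf = NaiveBayesRelevance()
clf.train("merger price fixing agreement", "relevant")
clf.train("lunch menu parking pass", "irrelevant")
print(clf.prob_relevant("price fixing memo"))  # ≈ 0.8
```

Each call to `train` represents one attorney coding decision; `prob_relevant` can then rank every unreviewed document in the corpus by its posterior probability of relevance.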
The software features that I refer to here as predictive coding algorithms use what are known in information science as classification systems based on probability theories. See The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 8 Sedona Conf. J. 189, 218-221 (2007):
Probability theories are used to make decisions regarding relevant documents. The most prominent of these are so-called “Bayesian” systems or methods, based on Bayes’ Theorem, and the related “dimension reduction systems.”
With predictive coding, human review of all files is eliminated. The operative word here is all. As mentioned, review of some files by attorneys is still required for a variety of reasons. Primary among them are training of the artificial intelligence agents and quality control verification before information is transferred to the requesting party. But not all files will be reviewed by human eyes. In fact, in some cases only a small minority may be subjected to human review. The percentage of files not reviewed by experts, but instead only reviewed by artificial intelligence algorithms, will depend on the cost and time factors in the case (the proportionality analysis under Rule 26(b)(2)(C)) and the producing party's risk tolerance.
Experienced, Skilled Attorneys Are Key to Successful Use of Predictive Coding Software
The input of human experts to train the artificial intelligence algorithms is essential and has a direct impact on the quality of the predictive coding's performance. If, for instance, the expert is inconsistent in making initial calls on the relevance of documents, then the computer extrapolations based on these calls will also be inconsistent. It is the perennial computer situation of garbage in, garbage out. The quality of the predictions made by the computer is, in large part, based on the quality of the training, the input, provided by the human experts. Quality is, of course, also impacted by other factors, including the nature of the ESI under review, the number of iterative human expert input cycles, and the quality of the software itself. But any deficiencies in these quality factors can be detected and corrected by statistical random sampling of the final predictions. This is ultimately why the black box itself, i.e., exactly how the software works, is not really that important. The final quality controls of random sampling protect against any errors, including software errors.
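As a rough illustration of that final quality control step, the sketch below estimates the error rate of the computer's predictions from a random sample and attaches a 95% confidence interval using the normal approximation. The data here are synthetic, and the dictionary-based document model is an assumption made only for illustration:

```python
import math
import random

def sample_error_rate(predictions, truth, sample_size, seed=42):
    """Estimate prediction error from a random sample.

    predictions / truth: dicts mapping document id -> bool (relevant?).
    Returns (estimated error rate, 95% confidence margin).
    """
    rng = random.Random(seed)
    sample = rng.sample(list(predictions), sample_size)
    errors = sum(predictions[d] != truth[d] for d in sample)
    p = errors / sample_size
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)  # normal approx.
    return p, margin

# Synthetic corpus: the computer predicted everything relevant,
# but 5% of those predictions are actually wrong.
docs = range(10_000)
predictions = {d: True for d in docs}
truth = {d: (d % 20 != 0) for d in docs}
p, margin = sample_error_rate(predictions, truth, sample_size=400)
print(f"estimated error rate: {p:.3f} ± {margin:.3f}")
```

The point is that a modest random sample lets the reviewing attorneys bound the error rate of millions of machine predictions without reading every document.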
Although random sampling protects against errors, and greater reliance is now placed on these new technologies, the skills, or lack of skills, of the attorneys managing a discovery project remain critical. Unskilled attorneys may, for instance, be inconsistent in their coding of relevance across multiple documents. They may also make poor choices as to which documents should be considered representative of highly relevant documents. Even errors in merely relevant coding can have cascading effects.
Predictive coding software models future choices after the documents submitted to it as representative. If the documents have not been properly evaluated or selected by attorneys to begin with, then the results of the automated analysis will also be flawed. If you show the computer what a smoking gun document actually looks like, then the computer will have a good chance to find it. But if you show it a poor imitation, not a real smoking gun document, then you should not be surprised when the computer only finds duds.
When I speak of skilled attorneys, I am not simply talking about attorneys who know how to operate a particular vendor's software. I am talking about attorneys who really understand the particular issues of a case and the relative probative value of various documents. I am also talking about attorneys who understand the new iterative processes and legal methods needed to properly use the software. The diagram below, which I recently prepared, provides a high-level overview of these new processes. I will go into greater detail about these new iterative predictive coding methods for ESI productions in a future blog.
Do not simply use, in an unthinking manner, the standard workflow methods that vendors set up as software defaults. Also, if a vendor offers advice on legal methods and best practices (and good vendors know better than to do that), take it all with a big grain of salt. Vendors are not lawyers. E-discovery vendors are not allowed to provide legal services or legal advice. They are prohibited from doing so, even if the expert has a law degree and is licensed to practice law. It is even worse when a tech expert, one who has never even gone to law school, provides opinions on legal methods and what is reasonable and what is not. Vendors and techs are permitted to be advisers on technology and software functions, not the law. Legal methods must be determined by lawyers, not techs. Legal advice may only be provided by practicing attorneys in firms, or acting as solo practitioners. Still, unless you are already an expert in a particular kind of software or technology, it is a very good idea to consult with a software expert or bona fide information scientist in order to formulate your legal opinions. The multi-disciplinary team approach is required in e-discovery work.
Bottom line: you must never let the tail (the software) wag the dog (legal search). Attorneys must at all times remain in control and not abdicate their responsibility as legal advisers to non-lawyer tech experts, much less to software. In other words, even the best software is only as good as the lawyers who use it.
Proving a Negative
The inability of attorneys to uncover requested documents does not necessarily mean that the attorneys are unskilled or the software is defective (although it might). The template for the smoking gun document might be accurate, but nothing like it is found for the simple reason that none exists. This is a key point that should never be forgotten. It is also one reason that skill alone is not enough to know whether you have proven a negative or have made a mistake. Art and experience are important in this area of the law, just as in any other. If there is no needle in the haystack to begin with, then no amount of skill, quality control, or repetition in search will find one.
An experienced attorney gets a feel for the data under review and the people involved. After studying a large data set, such as a multi-custodian collection of email, they know when proceeding further is likely futile. The data examined allows for an accurate assessment of the probabilities in the data not yet examined. This may be a kind of intuitive application of Bayes' theorem of conditional probabilities.
It is not uncommon for a requesting party to imagine or hope for the existence of emails or other documents that, like a Unicorn, simply do not exist. The requester is on a mere fishing expedition, and often a delusional one at that. If only a few relevant documents are uncovered by a search, that does not necessarily mean garbage in, garbage out. It could be treasure in, treasure out, as far as the responding party is concerned. The search was successful in proving that what was alleged to have happened in fact did not. It is proof of a negative by failure to discover a positive. This does not require that every stone be turned, as some still think. Probability is quite sufficient in the law in these circumstances for a number of reasons, including cost proportionality restraints and burden of proof requirements.
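This intuition about proving a negative can be made somewhat precise with Bayes' theorem. The sketch below shows how the posterior probability that a smoking gun exists falls as more randomly sampled documents are reviewed without finding one. The prior belief and the assumed prevalence of smoking guns are both hypothetical inputs, not figures from any real case:

```python
# Hedged sketch of "proving a negative" via Bayesian updating.
# All parameters (prior, prevalence) are hypothetical assumptions.
def posterior_gun_exists(prior: float, prevalence: float, n_reviewed: int) -> float:
    """P(smoking gun exists | n_reviewed randomly sampled docs, none found).

    prevalence: assumed fraction of the corpus that would be smoking
    guns *if* any existed at all.
    """
    p_miss_if_exists = (1 - prevalence) ** n_reviewed  # chance of finding none anyway
    numerator = prior * p_miss_if_exists
    # If no guns exist, a fruitless search is certain (probability 1).
    return numerator / (numerator + (1 - prior))

# Start agnostic (50/50); assume that if guns exist, 1 in 1,000 docs is one.
for n in (0, 1000, 5000):
    print(n, round(posterior_gun_exists(0.5, 0.001, n), 4))
```

Each fruitless round of review rationally lowers the belief that the Unicorn document exists, which is exactly the intuition an experienced reviewer develops about a data set.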
The old adage “where there’s smoke there’s fire” is usually true, but the converse is also true. Moreover, sometimes there is just smoke and mirrors without any fire at all. It is just a conjured wrong that doesn’t really exist. Knowing the difference is where experience and art in law come into play. There is no artificial intelligence for that, although Bayesian probability analysis comes close. This is something that most software designers, vendors, and inexperienced attorneys do not fully appreciate.
Positive Learning Feedback Loop
Even the most highly skilled attorneys will make mistakes from time to time. All experienced attorneys know that. That is why the iterative processes of predictive coding are so valuable. When mistakes in input are made by the subject matter expert attorneys who create the first seed set in the predictive coding process, it is not a one-time, all or nothing procedure.
In predictive coding the linear processes are replaced by cyclic ones. A review by attorneys of the results returned by their initial input creates a positive learning feedback loop for both the software and attorney reviewers. They will quickly detect any errors in their analysis and make corrections accordingly. It is a way of repeatedly checking their work. If they get a return of duds, they know to improve the picture of the smoking guns documents they are looking for, or to look for something else. Proper use of predictive coding tools will in effect make the attorney reviewers smarter and more effective. This in turn allows better input from the attorneys and improves the computer’s efficiency in the next round. The iterative nature of the process allows the creation of powerful positive feedback loops.
The more advanced trans-keyword search methods include the following, as described by one of the leading jurists in this area, Judge Paul Grimm:
In addition to keyword searches, other search and information retrieval methodologies include: probabilistic search models, including “Bayesian classifiers” (which searches by creating a formula based on values assigned to particular words based on their interrelationships, proximity, and frequency to establish a relevancy ranking that is applied to each document searched); “Fuzzy Search Models” (which attempt to refine a search beyond specific words, recognizing that words can have multiple forms. By identifying the “core” for a word the fuzzy search can retrieve documents containing all forms of the target word); “Clustering” searches (searches of documents by grouping them by similarity of content, for example, the presence of a series of same or similar words that are found in multiple documents); and “Concept and Categorization Tools” (search systems that rely on a thesaurus to capture documents which use alternative ways to express the same thought).
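Of the methods Judge Grimm lists, clustering is perhaps the easiest to illustrate. The toy sketch below groups documents by simple word overlap (Jaccard similarity). Production clustering tools use far richer similarity measures, and the threshold here is an arbitrary assumption:

```python
# Toy illustration of "clustering" searches: group documents whose
# word overlap (Jaccard similarity) exceeds an assumed threshold.
def jaccard(a: str, b: str) -> float:
    """Fraction of shared words between two documents."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def cluster(documents, threshold=0.5):
    clusters = []
    for doc in documents:
        for group in clusters:
            if jaccard(doc, group[0]) >= threshold:  # compare to cluster seed
                group.append(doc)
                break
        else:
            clusters.append([doc])  # no similar group found: start a new one
    return clusters

docs = [
    "quarterly sales report q1",
    "quarterly sales report q2",
    "holiday party invitation",
]
print(cluster(docs))  # the two sales reports group together
```

Grouping similar documents this way lets one attorney decision about a cluster seed propagate to every near-duplicate in the group.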
Victor Stanley, Inc. v. Creative Pipe, Inc., supra at FN 9 of the opinion. Also see videos of my short-hand explanations of predictive coding at LegalTech interviews: My Impromptu Video Interview at NY LegalTech on Predictive Coding and Some Hopeful Thoughts for the Future; and, Interview at Legaltech 2012 on predictive coding and transparency.
The current linear review and culling process derived from paper discovery, where one step follows after the last one is completed, should be replaced by a non-sequential, four-dimensional approach. Here all steps are pursued at once, to some degree, and are repeated over time (the fourth dimension) within predetermined budgetary constraints. This is a use of iterative cycles, where multiple decisions are made at the same time with computer assistance, but later verified and quality controlled by sampling. This is all done within case specific proportionality limits under Rule 26(b)(2)(C), Federal Rules of Civil Procedure and related case law. This type of iterative project management may be new to the law, but is well established in technology development where diagrams like this are common.
Under the new procedures, the old-style ad hoc quality control methods are replaced by judgmental and statistical sampling and other metrics-based quality control systems. The old-school muddling-through management approach is replaced by proven project management techniques. See The Sedona Conference® Commentary on Achieving Quality in the E-Discovery Process (identifying seven elements critical for successful project management: 1. Leadership; 2. Tailoring; 3. Expertise; 4. Adaptability; 5. Measurement; 6. Documentation; 7. Transparency). The new procedures also require and depend on cooperation and disclosure. See, e.g., Animation Showing How Not To Cooperate In An E-Discovery Conference.
Six Search Ideas
In Secrets of Search Parts One, Two and Three, I outlined the five key characteristics of effective search today, using the rubric of secrets. In Part Three I summarized my ideas on search and review using the symbol of the Pythagoreans, the five-sided polygon, or pentagon:
With my blog Bottom Line Driven Proportional Review, I added the sixth idea, where the process gets real and takes money into consideration. In that blog I shared my method of using estimation, projections, budgets, cooperation, transparency, and the legal doctrine of proportionality to control the costs of search and review. This final piece completed the outline of a new gold standard of search and review:
Predictive coding is just one tool, one part of the six search ideas. It belongs to the “hybrid multimodal” approach: one of many search techniques, including keyword search, that together make up the multimodal method, meaning multiple types of search techniques used in combination. Predictive coding is currently the best search tool, to be sure, but it is just another high-priced hammer without a well-trained carpenter to use it, one who knows the new multidimensional, iterative, largely automated, cooperative methods. (By the way, have I mentioned yet that most software is priced too high? Come on, vendors, go for quantity of sales by lowering the prices.)
Conclusion
The number of documents we have to review seems to double every two to three years, so this new six-fold legal method is imperative. New and better software, especially predictive coding type, is also important. The ranking of relevancy and other categories built into the latest algorithms is, under Bottom Line Driven Proportional Review, an especially helpful new capability. You rank the documents that the computer predicts will be the most relevant to your case and still fit within your budget. You plan an open proportional review, not an over-burdensome one.
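That budget-constrained ranking step can be sketched in a few lines. The relevance scores, budget, and per-document review cost below are hypothetical figures for illustration only:

```python
# Sketch of bottom-line-driven proportional review: review only as many
# top-ranked documents as the budget allows. All figures are hypothetical.
def docs_within_budget(ranked_scores, budget, cost_per_doc):
    """Return the top-ranked documents whose total review cost fits the budget.

    ranked_scores: dict mapping doc id -> predicted relevance score.
    """
    affordable = int(budget // cost_per_doc)  # how many docs we can afford
    ranked = sorted(ranked_scores, key=ranked_scores.get, reverse=True)
    return ranked[:affordable]

scores = {"doc_a": 0.95, "doc_b": 0.40, "doc_c": 0.88, "doc_d": 0.10}
print(docs_within_budget(scores, budget=5.0, cost_per_doc=2.0))  # ['doc_a', 'doc_c']
```

The machine's relevance ranking and the case's proportionality budget together determine the review cutoff, rather than an arbitrary or exhaustive review of everything.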
Predictive coding is a powerful new tool, but technology alone is not enough. We must also have new legal methods. Technology and law have to work together, grounded in science, to create a new gold standard. The law, driven as it is to stop the runaway costs of e-discovery, is now ready to adopt these new ideas and methods. It is ready to change from a linear, confrontational, one-dimensional, mostly-manual, Bates stamp approach to discovery, to a multidimensional, cooperative, iterative, largely-automated, hash value approach.
Change is possible. Just keep working on it, a little bit every day, and then suddenly, a breakthrough. Once you break the ice, and get things started, the pace of change can quicken. New technologies like predictive coding and new legal methods like bottom line driven proportional review can be quickly accepted.
But we need your help to establish and explain these new ideas and methods to the Bench and Bar. We need more lawyers and other experts to implement the new approach. Remember the lessons of Zuckerberg’s success: Impactful, Fast, Bold, Open, Values: Guidance of the “Hacker Way”. Don’t wait any longer. Affirmance is on its way. Be bold, try the new methods, the new software, and don’t be afraid to be open about what you are doing. We need the impact of these values to quickly stem the tide of ESI overload and excessive costs.
As Margaret Mead said:
Never doubt that a small group
of thoughtful, committed, citizens
can change the world.
Indeed,
it is the only thing that ever has.
Mr. Losey,
I notice the inclusion of “7±2” in your graphic. I’m curious if this number refers back to the 1950s cognitive psychology theory that “a given individual can handle 7±2 ‘chunks’ of information at a time”?
I have seen this used in software complexity theory (e.g., Halstead’s Software Science monograph) and in some user interface design. While the ‘chunk’ theory has a weak foundation, it appears to hold true.
Your thoughts?
Terry – yes, that’s what he’s referring to. Mr. Losey refers to this in several earlier blog posts (I recall he thinks the numbers are more like 5 ± 2, but the point is fairly moot).
If you haven’t read the earlier posts, comb back through them. I’m an observer rather than a practitioner, but I have a feeling that in less than 3 years’ time this will fall into place along the lines Mr. Losey advocates, if for no other reasons than (a) things HAVE to change and (b) TAR or whatever-you-want-to-call-it seems the best bet just now.
While experienced attorneys are needed at every step of the EDRM process, it appears that the profession is hesitant to invest in the cultivation of this talent. The work of e-discovery attorneys, review attorneys in particular, continues to provide little to no access to stable employment (never mind partnership). This creates a tension between the need for sophisticated review attorneys, well practiced in the art of iterative search and review, and the value, or lack thereof, placed on those skills by the profession.
How long before this tension causes the profession to collapse due to a fundamental mismatch between what it values and clients’ need for skilled management of their information to meet the legal demands they face?
I have long thought that the marketplace would force a change and now, finally, it looks like it is starting to happen. Hang in there and keep on improving your skills. Constant education is now required of all true professionals.
Improving skills and constant education. Amen.