TAR Course: 3rd Class

Third Class: TREC Total Recall Track

These improvements naturally evolved to a certain degree as part of the e-Discovery Team members' normal work supervising hundreds of document review projects each year. But the new insights that required us to make a complete restatement, a new Version 4.0 of Predictive Coding, arose in the last half of 2016 as part of our research at TREC. The key insights are summarized in the graphic below, which is based on the Basilica of Superga in Turin, Italy. The basilica image seems appropriate since TREC is the high shrine of all scientific research on ESI search.

TAR Concepts - Basilica

In the 2016 Total Recall Track of TREC we conducted intensive, controlled experiments outside of our usual legal practice and document reviews. See e-Discovery Team’s 2016 TREC Report: Once Again Proving the Effectiveness of Our Standard Method of Predictive Coding. We completed our revised Final 2016 Report in April 2017. It is 160 pages.

TREC is sponsored by the National Institute of Standards and Technology. 2016 was the 25th year of the TREC event. We were honored to again participate, as we did in 2015, in the Total Recall Track of TREC. (TREC has dozens of different Tracks, each focusing on a different aspect of text search.) The Total Recall Track was the research Track closest to a real legal review project. It is not a Legal Track, however, and so we necessarily did our own side-experiments and took our own unique approach, which differed from that of the universities that participated. Leadership of the Total Recall Track in 2016 was once again in the capable hands of Maura Grossman, Gordon Cormack and other scientists.

The TREC-related work we did in the summer of 2016 went beyond the thirty-four research topics included in the TREC event. It went well beyond the 9,863,366 documents we reviewed with Mr. EDR's help as part of the formal submittals. Countless more documents were reviewed for relevance if you include our side-experiments. At the same time that we did the formal tests specified by the Total Recall Track, we did multiple side-experiments of our own. Some of these tests are still ongoing. We did so to investigate our own questions, questions unique to legal search and thus beyond the scope of the Total Recall Track. We also performed some experiments to test unique attributes of Kroll Ontrack's (now KL) EDR software. It uses a proprietary type of logistic regression algorithm that was awarded a patent in 2016.
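For readers who want a concrete feel for what a logistic regression document ranker does, here is a minimal sketch in Python. To be clear, this is not the patented EDR algorithm, whose inner workings are proprietary; it is only a generic, textbook classifier, built with the open-source scikit-learn library and a few invented example documents, to illustrate the general idea of scoring unreviewed documents by probability of relevance.

```python
# Minimal sketch of a generic logistic regression relevance ranker.
# NOTE: This is NOT the proprietary, patented EDR algorithm. It is a plain
# textbook classifier, shown only to illustrate how this general family of
# tools scores documents by probability of relevance. The example documents
# and labels below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A few already-coded training documents (1 = relevant, 0 = irrelevant).
train_docs = [
    "email discussing the everglades restoration project budget",
    "lunch menu for the cafeteria next week",
    "memo on everglades water quality litigation strategy",
    "holiday party planning committee notes",
]
train_labels = [1, 0, 1, 0]

# Unreviewed documents to be ranked for the next review round.
unreviewed_docs = [
    "draft settlement terms for the everglades lawsuit",
    "parking garage maintenance schedule",
]

# Turn text into term-weight vectors, then train the classifier.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_docs)
X_unreviewed = vectorizer.transform(unreviewed_docs)

model = LogisticRegression()
model.fit(X_train, train_labels)

# Probability of relevance for each unreviewed document; in a review
# workflow, higher-scoring documents would be queued for reviewers first.
scores = model.predict_proba(X_unreviewed)[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In a typical active learning workflow, the training set is refreshed each round with the latest human coding decisions, and the highest-scoring unreviewed documents are routed to reviewers next.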

We participated in this scientific event in 2015 and 2016 primarily to test the effectiveness of our Hybrid Multimodal IST Predictive Coding 4.0 method. We knew it was good, but wanted to try to measure how good it was, to see how it compared. We knew we were attaining high recall with good precision, but just how high and how precise? We had other research questions, but that was the primary one each year. In 2015 we searched posts from three ESI sources: the Black Hat World Forum, regional news collections and Jeb Bush email. In 2016 all of the Total Recall Track was devoted to the Bush email. Bottom line, we proved the effectiveness of our method at TREC in 2015 and 2016 in almost one hundred test runs. We attained extremely high recall and precision scores in most of the tests. See e-Discovery Team’s 2016 TREC Report: Once Again Proving the Effectiveness of Our Standard Method of Predictive Coding.
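For readers less familiar with these information retrieval metrics, recall and precision have standard definitions, stated below in the usual way. This is general background only, not anything unique to our method or to the TREC scoring protocols:

\[
\text{Recall} = \frac{\text{relevant documents retrieved}}{\text{relevant documents in the collection}}
\qquad
\text{Precision} = \frac{\text{relevant documents retrieved}}{\text{documents retrieved}}
\]

As a purely hypothetical example, if a collection holds 1,000 relevant documents and a review retrieves 900 of them along with 100 irrelevant ones, recall is 900/1,000 = 90% and precision is 900/1,000 = 90%.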

The e-Discovery Team doing this basic research consisted of myself and several of Kroll Ontrack’s top document review specialists, including Jim Sullivan and Tony Reichenberger. They have now fully mastered the e-Discovery Team’s Hybrid Multimodal IST methodologies.

The insights we gained and share here with version 4.0, and the skills we honed, including speed, did not come easily. It took full-time work on client projects for a year, plus three full months of research, often in lieu of real summer vacations. This is hard work, but we love it. See: Why I Love Predictive Coding.

This kind of dedication of time and resources by an e-discovery vendor or law firm is unprecedented. There is a cost to attaining the research benefits realized, both hard out-of-pocket costs and lost time. So I hope you understand why we are going to share most, but not all, of our techniques in this course. Some will be held back and kept as trade secrets.

Although this course presents a complete Predictive Coding 4.0 methodology, it cannot cover everything we know, even if we were willing to share everything. Many things can only be learned by doing, especially methods. Document review is, after all, a part of legal practice. As the scientists like to put it, legal search is essentially ad hoc. It changes and is customized to fit the particular evidence search assignments at hand. But we will try to share all of the basic insights. They have all been discussed on this blog before. The new insights we gained are more a deepening of understanding and a matter of emphasis. They are refinements, not radical departures, although some are surprising.

The vendor we worked with at TREC, Kroll, understood the importance of pure research and approved the expenditures incurred to participate in it. I suggest you ask your vendor, or law firm, how much time they spent last year researching and experimenting with document review methods. The only other law firm we know of doing this kind of research is Maura Grossman’s new solo practice. Shown below is the poster we prepared for the 2015 conference to share with other participants.

The results we attained certainly make this investment worthwhile, even if many in the profession do not realize it, much less appreciate it. They will in time, and so will the consumers. This is a long-term investment. Pure research is necessary for any technology company, including all companies in the e-Discovery field. The same holds true, albeit to a lesser extent, for any law firm claiming to have technological superiority.

Experience from handling live projects alone is too slow an incubator for the kind of AI breakthrough technologies we are now using. It is also too inhibiting. You do not experiment on important client data or review projects. Any expert will improvise somewhat during such projects to match the circumstances, and sometimes do post hoc analysis. But such work on client projects alone is not enough. Pure research is needed to continue to advance in AI-enhanced review.

That is why the e-Discovery Team has now conducted over fifty experiments on the Jeb Bush email collection alone. As a result of the many new things we learned in these experiments and the new methods we perfected, we reached a point in late 2016 where a complete restatement of our method was required. Thus Predictive Coding version 4.0 was born out of research and experiments, just like version 3.0 was before it.

For the full report on our research in 2016, see the E-Discovery Team’s Final 2016 Report and also the short, but incomplete, earlier summary, e-Discovery Team’s 2016 TREC Report: Once Again Proving the Effectiveness of Our Standard Method of Predictive Coding. Although this Predictive Coding 4.0 course will not go into all the details of our TREC experiments, we will share the bottom line, the take-aways from this testing.

We will also not be discussing in this article our future plans and spin-off projects from the TREC research. Let’s just say that we have several in mind. One in particular will, I think, be very exciting for all attorneys and paralegals who do document review. Maybe even fun for those of you who, like us, really enjoy a good computer search. You know who you are! Watch for announcements in the coming year. We are having too much fun here not to share some of the good times.

Go on to Class Four.

Or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: Review the related website, Mr. EDR. It tells the story of the e-Discovery Team’s participation in the 2015 and 2016 TREC Total Recall Track science experiments from the AI’s point of view.

Next go on to read the e-Discovery Team’s revised Final 2016 Report. It is a long read, 160 pages. Still, without the Appendix, which describes all 34 searches, it is only 28 pages. Start by reading the main Report and then go deeper later by studying the Appendix. For the very ambitious, search out and read the reports of some of the other participants in the Total Recall Tracks. To dig deeper still, search out and read a few of the summaries of the prior Legal Tracks.

EXERCISE: See if you can figure out what the highest Recall scores were in the Legal Tracks of TREC. Why do you suppose the scores were so much lower in all of the Legal Tracks than the scores of the e-Discovery Team and other participants in the Total Recall Tracks? There are multiple reasons.

Students are invited to leave a public comment below. Insights that might help other students are especially welcome. Let’s collaborate!

_

Ralph Losey COPYRIGHT 2017, 2023

ALL RIGHTS RESERVED

_

 
