Announcing the e-Discovery Team’s TAR Training Program: 16 Classes, All Online, All Free – The TAR Course

March 19, 2017

We launch today a sixteen-class online training program on predictive coding: the e-Discovery Team TAR Course. This is a "how to" course on predictive coding. We have a long descriptive name for our method: Hybrid Multimodal IST Predictive Coding 4.0. By the end of the course you will know exactly what that means. You will also understand the seventeen key things you need to know to do predictive coding properly, shown in this diagram.


Hands-on hacking of predictive coding document reviews has been my obsession since Da Silva went viral. Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182 (S.D.N.Y. 2012). That is the case where I threw Judge Peck the softball opportunity to approve predictive coding for the first time. See: Judge Peck Calls Upon Lawyers to Use Artificial Intelligence and Jason Baron Warns of a Dark Future of Information Burn-Out If We Don't.

Alas, because of my involvement in Da Silva I could never write about it, but I can tell you that none of the thousands of commentaries on the case have told the whole nasty story, including the outrageous “alternate fact” attacks by plaintiff’s counsel on Judge Andrew Peck and me. I guess I should just take the failed attempts to knock me and the Judge out of the case as flattery, but it still leaves a bad taste in my mouth. A good judge like Andy Peck did not deserve that kind of treatment. 

At the time of Da Silva, 2012, my knowledge of predictive coding was mostly theoretical, informational. But now, after “stepping-in” for five years to actually make the new software work, it is practical. For what “stepping-in” means see the excellent book on artificial intelligence and future employment by Professor Thomas Davenport and Julia Kirby, titled Only Humans Need Apply (HarperBusiness, 2016). Also see: Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and, Gonsowski, A Changing World: Ralph Losey on “Stepping In” for e-Discovery (Relativity Blog). 

If you are looking to craft a specialty in the law that rides the new wave of AI innovations, then electronic document review with TAR is a good place to start. See Part Two of my January 22, 2017 blog, Lawyers' Job Security in a Near Future World of AI. This is where the money will be.

 

Our TAR Course is designed to teach this practical, stepping-in based knowledge. The link to the course will always be shown on this blog at the top of the page. The TAR page next to it has related information.

Since Da Silva we have learned a lot about the actual methods of predictive coding. This is hands-on learning through actual cases and experiments, including sixty-four test runs at TREC in 2015 and 2016.

We have come to understand very well the technical details, the ins and outs of legal document review enhanced by artificial intelligence, AI-enhanced review. That is what TAR and predictive coding really mean, the use of active machine learning, a type of specialized artificial intelligence, to find the key documents needed in an investigation. In the process I have written over sixty articles on the subject of TAR, predictive coding and document review, most of them focused on what we have learned about methods.

The TAR Course is the first time we have put all of this information together in a systematic training program. In sixteen classes we cover all seventeen topics, and much more. The result is an online instruction program that can be completed in one long weekend. After that it can serve as a reference manual. The goal is to help you to step-in and improve your document review projects.

The TAR Course has sixteen classes, listed below. Click on some and check them out. All free. We do not even require registration. No tests either, but someday soon that may change. Stay tuned to the e-Discovery Team. This is just the first step, dear readers, of my latest hack of the profession. Change we must, and not just gradual change, but radical change. That is the only way the Law can keep up with the accelerating advances in technology. Taking the TAR Course is a minimum requirement and will get you ready for the next stage.

  1. First Class: Introduction
  2. Second Class: TREC Total Recall Track
  3. Third Class: Introduction to the Nine Insights Concerning the Use of Predictive Coding in Legal Document Review
  4. Fourth Class: 1st of the Nine Insights – Active Machine Learning
  5. Fifth Class: Balanced Hybrid and Intelligently Spaced Training
  6. Sixth Class: Concept and Similarity Searches
  7. Seventh Class: Keyword and Linear Review
  8. Eighth Class: GIGO, QC, SME, Method, Software
  9. Ninth Class: Introduction to the Eight-Step Work Flow
  10. Tenth Class: Step One – ESI Communications
  11. Eleventh Class: Step Two – Multimodal ECA
  12. Twelfth Class: Step Three – Random Prevalence
  13. Thirteenth Class: Steps Four, Five and Six – Iterate
  14. Fourteenth Class: Step Seven – ZEN Quality Assurance Tests
  15. Fifteenth Class: Step Eight – Phased Production
  16. Sixteenth Class: Conclusion

This course is not about the theory or law of predictive coding. You can easily get that elsewhere. It is about learning the latest methods to do predictive coding. It is about learning how to train an AI to find the ESI evidence you want. The future looks bright for attorneys with both legal knowledge and skills and software knowledge and skills. The best and brightest will also be able to work with various kinds of specialized AI to do a variety of tasks, including AI-enhanced document review. If that is your interest, then jump onto the TAR Course and start your training today. Who knows where it may take you?

________


e-Discovery Team’s 2016 TREC Report: Once Again Proving the Effectiveness of Our Standard Method of Predictive Coding

February 24, 2017

Our Team's Final Report of its participation in the 2016 TREC ESI search Conference has now been published online by NIST and can be found here. TREC stands for Text Retrieval Conference. It is co-sponsored by a group within the National Institute of Standards and Technology (NIST), which in turn is an agency of the U.S. Commerce Department. The stated purpose of the annual TREC conference is to encourage research in information retrieval from large text collections.

The other co-sponsor of TREC is the United States Department of Defense. That’s right, the DOD is the official co-sponsor of this event, although TREC almost never mentions that. Can you guess why the DOD is interested? No one talks about it at TREC, but I have some purely speculative ideas. Recall that the NSA is part of the DOD.

We participated in one of several TREC programs in both 2015 and 2016, the one closest to legal search, called the Total Recall Track. The leaders and administrators of this Track were Professors Gordon Cormack and Maura Grossman. They also participated each year in their own Track.

One of the core purposes of all of the Tracks is to demonstrate the robustness of core retrieval technology. Moreover, one of the primary goals of TREC is:

[T]o speed the transfer of technology from research labs into commercial products by demonstrating substantial improvements in retrieval methodologies on real-world problems.

Our participation in TREC in 2015 and 2016 has demonstrated substantial improvements in retrieval methodologies. That is what we set out to do. That is the whole point of the collaboration between the Department of Commerce and Department of Defense to establish TREC.

The e-Discovery Team has a commercial interest in participation in TREC, not a defense or police interest. Although from what we saw of the FBI's struggles to search email last year, the federal government needs help. We were very unimpressed by the FBI's prolonged efforts to review the Clinton email collection. I was one of the few e-discovery lawyers to correctly call the whole Clinton email server "scandal" a political tempest in a teapot. I still do, and I am still outraged by how her email review was handled by the FBI, especially with the last-minute "revelations."

The executive agencies of the federal government have been conspicuously absent from TREC. They seem incapable of effective search, which may well be a good thing. Still, we have to believe that the NSA and other defense agencies are able to do a far better job at large-scale search than the FBI. Consider their ongoing large-scale metadata and text interception efforts, including the once Top Secret PRISM operation. Maybe it is a good thing the NSA does not share its abilities with the FBI, especially these days. Who knows? We certainly will not.

The e-Discovery Team's commercial interest is to transfer Predictive Coding technology from our research labs into commercial products, namely to transfer our Predictive Coding 4.0 Method using KrolLDiscovery EDR software into commercial products. In our case at the present time "commercial products" means our search methods, time and consultations. But who knows, it may be reduced to a robot product someday, like our Mr. EDR.

The e-Discovery Team method can be used on other document review platforms as well, not just Kroll's, but only if they have strong active machine learning features. Active machine learning is what everyone at TREC was testing, although we appear to have been the only participant to focus on a particular method of operation. And we were the only team led by a practicing attorney, not an academic or software company. (Catalyst also fielded a team in 2015 and 2016, headed by Information Science Ph.D. Jeremy Pickens.)

The e-Discovery Team wanted to test the hybrid multimodal software methods we use in legal search to demonstrate substantial improvements in retrieval methodologies on real-world problems. We have now done so twice, participating in both the 2015 and 2016 Total Recall Tracks. The results in 2016 were even better than in 2015. We obtained remarkable results in document review speed, recall and precision; although, as we admit, the search challenges presented at TREC 2016 were easier than most projects we see in legal discovery. Still, to use the quaint language of TREC, we have demonstrated the robustness of our methods and software.

These demonstrations, and all of the reporting and analysis involved, have taken hundreds of hours of our time, but there was no other venue around to test our retrieval methodologies on real-world problems. The demonstrations are now over. We have proven our case. Our standard Predictive Coding method has been tested and its effectiveness demonstrated. No one else has tested and proven their predictive coding methods as we have done. We have proven that our hybrid multimodal method of AI-Enhanced document review is the gold standard. We will continue to make improvements in our method and software, but we are done with participation in federal government programs to prove our standard, even one run by the National Institute of Standards and Technology.

[Diagram: Predictive Coding 4.0 workflow]

To prove our point that we have now demonstrated substantial improvements in retrieval methodologies, we quote below Section 5.1 of our official TREC report, but we urge you to read the whole thing. It is 164 pages. This section of our report covers our primary research question only. We investigated three additional research questions not included below.

__________

Section 5.1 First and Primary Research Question

What Recall, Precision and Effort levels will the e-Discovery Team attain in TREC test conditions over all thirty-four topics using the Team's Predictive Coding 4.0 hybrid multimodal search methods and Kroll Ontrack's software, eDiscovery.com Review (EDR)?

Again, as in the 2015 Total Recall Track, the Team attained very good results with high levels of Recall and Precision in all topics, including perfect or near perfect results in several topics using the corrected gold standard. The Team did so even though it only used five of the eight steps in its usual methodology, intentionally severely constrained the amount of human effort expended on each topic and worked on a dataset stripped of metadata. The Team’s enthusiasm for the record-setting results, which were significantly better than its 2015 effort, is tempered by the fact that the search challenges presented in most of the topics in 2016 were not difficult and the TREC relevance judgments had to be corrected in most topics.  …

This next chart uses the corrected standard. It is the primary reference chart we use to measure our results. Unfortunately, it is not possible to make any comparisons with the BMI (Baseline Model Implementation) standards because we do not know the order in which the BMI documents were submitted.

[Chart: TREC 2016 results for all thirty-four topics, corrected standard]

The average results obtained across all thirty-four topics at the time of reasonable call using the corrected standard are shown below in bold. The average scores using the uncorrected standard are shown for comparison in parentheses.

  • 88.17% Recall (75.46%)
  • 64.94% Precision (57.12%)
  • 69.15% F1 (57.69%)
  • 124 Docs Reviewed Effort (124)
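For reference, F1 is the harmonic mean of Recall and Precision, the standard text retrieval trade-off measure:

```latex
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```

Note that the average F1 reported above is the average of the per-topic F1 scores. That is why it is not the harmonic mean of the average Recall and Precision shown in the same list; averaging across topics and taking the harmonic mean do not commute.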

At the time of reasonable call the Team had recall scores greater than 90% in twenty-two of the thirty-four topics and greater than 80% in five more topics. Recall of greater than 95% was attained in fourteen topics. These Recall scores under the corrected standard are shown in the below chart. The results are far better than we anticipated, including six topics with total recall – 100%, and two topics with both total recall and perfect precision, topic 417 Movie Gallery and topic 434 Bacardi Trademark.

[Chart: Recall scores, corrected standard, 2016]

At the time of reasonable call the Team had precision scores greater than 90% in thirteen of the thirty-four topics and greater than 75% in three more topics. Precision of greater than 95% was attained in nine topics. These Precision scores under the corrected standard are shown in the below chart. Again, the results were, in our experience, incredibly good, including three topics with perfect precision at the time of the reasonable call.

[Chart: Precision scores, corrected standard, 2016]

At the time of reasonable call the Team had F1 scores greater than 90% in twelve of the thirty-four topics and greater than 75% in two more. F1 of greater than 95% was attained in eight topics. These F1 scores under the corrected standard are shown in the below chart. Note there were two topics with a perfect score, Movie Gallery (100%) and Bacardi Trademark (100%), and three more that were near perfect: Felon Disenfranchisement (98.5%), James V. Crosby (97.57%), and Elian Gonzalez (97.1%).

[Chart: F1 scores, corrected standard, 2016]

We were lucky to attain two perfect scores in 2016 (we attained one in 2015), in topic 417 Movie Gallery and topic 434 Bacardi Trademark. The perfect score of 100% F1 was obtained in topic 417 by locating all 5,945 documents relevant under the corrected standard after reviewing only 66 documents. This topic was filled with form letters and was a fairly simple search.

The perfect score of 100% F1 was obtained in topic 434 Bacardi Trademark by locating all 38 documents relevant under the corrected standard after reviewing only 83 documents. This topic had some legal issues that required analysis, but the reviewing attorney, Ralph Losey, is an SME in trademark law, so this did not pose any problems. The issues were easy and not critical to the determination of relevance. This was a simple search involving distinct language and players. All but one of the 38 relevant documents were found by tested, refined keyword search. One additional relevant document was found by a similarity search. Predictive coding searches were run after the keyword searches and nothing new was uncovered. Here machine learning merely performed a quality assurance role to verify that all relevant documents had indeed been found.

The Team proved once again, as it did in 2015, that perfect recall and perfect precision are possible, albeit rare, using the Team's methods on fairly simple search projects.

The Team's top ten projects attained remarkably high scores, with an average Recall of 95.66%, average Precision of 97.28% and average F-Measure of 96.42%. The top ten are shown in the chart below.

[Chart: Top ten projects]

In addition to Recall, Precision and F1, the Team, per TREC requirements, also measured the effort involved in each topic search. We measured effort by the number of documents that were actually human-reviewed prior to submission and coded relevant or irrelevant. We also measured effort by the total human time expended for each topic. Overall, the Team human-reviewed only 6,957 documents to find all 34,723 relevant documents within the overall corpus of 9,863,366 documents. The total time spent by the Team to review the 6,957 documents, and to do all the search, analysis and other work using our Hybrid Multimodal Predictive Coding 4.0 method, was 234.25 hours.

[Chart: Documents reviewed as a share of the total corpus, 2016]

It is typical in legal search to try to measure the efficiency of a document review by the number of documents classified by an attorney in an hour. For instance, a typical contract review attorney can read and classify an average of 50 documents per hour. The Team classified 9,863,366 documents by review of 6,957 documents taking a total time of 234.25 hours. The Team’s overall review rate for the entire corpus was thus 42,106 files per hour (9,863,366/234.25).

In legal search it is also typical, indeed mandatory, to measure the costs of review and bill clients accordingly. If we assume a high attorney hourly rate of $500 per hour, then the total cost of the review of all 34 topics would be $117,125. That is a cost of just over $0.01 per document. In a traditional legal review, where a lawyer reviews one document at a time, the cost would be far higher. Even if you assume a low attorney rate of $50 per hour and a review speed of 50 files per hour, the total cost to review every document for every issue would be $9,863,366. That is a cost of $1.00 per document, which is actually low by legal search standards.

Analysis of project duration is also very important in legal search. Instead of the 234.25 hours expended by our Team using Predictive Coding 4.0, traditional linear review would have taken 197,267 hours (9,863,366/50). In other words, the review of thirty-four projects, which we did part-time after work over one summer, would have taken a team of two lawyers using traditional methods, working 8 hours a day, every day, over 33 years! These kinds of comparisons are common in legal search.
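For readers who want to check these comparisons, here is a minimal Python sketch that reproduces the arithmetic in this section. The numbers are the report's; the script itself is only an illustration.

```python
# Effort figures reported in the 2016 TREC report excerpt above.
corpus_size = 9_863_366   # total documents across all 34 topics
docs_reviewed = 6_957     # documents actually read by a human
team_hours = 234.25       # total attorney hours for all topics

# Overall review rate across the whole classified corpus.
print(f"{corpus_size / team_hours:,.0f} files/hour")         # ~42,106

# Cost at a high attorney rate of $500 per hour.
cost = 500 * team_hours
print(f"${cost:,.0f} total, ${cost / corpus_size:.4f}/doc")  # $117,125, ~$0.0119

# Traditional linear review at 50 documents per hour.
linear_hours = corpus_size / 50
print(f"{linear_hours:,.0f} hours of linear review")         # 197,267
# Two lawyers, 8 hours a day, every day of the year:
print(f"about {linear_hours / (2 * 8 * 365):.1f} years")     # ~33.8
```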

Detailed descriptions of the searches run in all thirty-four topics are included in the Appendix.

___________

We also reproduce below Section 1.1, Summary of Team's Efforts, from our 2016 TREC Report. For more information on what we learned in the 2016 TREC, see also: Complete Description in 30,114 Words and 10 Videos of the e-Discovery Team's "Predictive Coding 4.0" Method of Electronic Document Review. Nine new insights that we learned in the 2016 research are summarized by the below diagram and more specifically described in that article.

[Diagram: Nine insights from the 2016 research]

_________

Excerpt From Team’s 2016 Report

1.1 Summary of Team’s Efforts. The e-Discovery Team’s 2016 Total Recall Track Athome project started June 3, 2016, and concluded on August 31, 2016. Using a single expert reviewer in each topic the Team classified 9,863,366 documents in thirty-four review projects.

The topics searched in 2016 and their issue names are shown in the chart below. Also included are the first names of the e-Discovery Team member who did the review for that topic, the total time spent by that reviewer and the number of documents manually reviewed to find all of the relevant documents in that topic. The total time of all reviewers on all projects was 234.25 hours. All relevant documents, totaling 34,723 by Team count, were found by manual review of 6,957 documents. The thirteen topics in red were considered mandatory by TREC and the remaining twenty-one were optional. The e-Discovery Team did all topics.

[Chart: TREC 2016 topics, reviewers, hours and documents reviewed]

They were all one-person, solo efforts, although there was coordination and communications between Team members on the Subject Matter Expert (SME) type issues encountered. This pertained to questions of true relevance and errors found in the gold standard for many of these topics. A detailed description of the search for each topic is contained in the Appendix.

In each topic the assigned Team attorney personally read and evaluated for true relevance every email that TREC returned as a relevant document, and every email that TREC unexpectedly returned as irrelevant. Some of these were read and studied multiple times before we made our final calls on true relevance, determinations that took into consideration and gave some deference to the TREC assessor adjudications, but were not bound by them. Many other emails that the Team members considered irrelevant, and TREC agreed, were also personally reviewed as part of their search efforts. As mentioned, there were sometimes consultations and discussions between Team members as to the unexpected TREC opinions on relevance.

This contrasts sharply with participants in the Sandbox division. They never made any effort to determine where their software made errors in predicting relevance, or for any other reason. They accepted as a matter of faith the correctness of all of TREC's prior assessments of relevance. To these participants, who were all academic institutions, the ground truth itself, as to relevance or not, was of no relevance. Apparently, that did not matter to their research.

All thirty-four topics presented search challenges to the Team that were easier, some far easier, than those the Team typically faces as attorneys leading legal document review projects. (If the Bush email had not been altered by omission of metadata, the searches would have been even easier.) The details of the searches performed in each of the thirty-four topics are included in the Appendix. The search challenges presented by these topics were roughly equivalent to the most simplistic challenges that the e-Discovery Team might face in projects involving relatively simple legal disputes. A few of the search topics in 2016 included quasi-legal issues, more than were found in the 2015 Total Recall Track. This is a revision that the Team requested and appreciated because it allowed some, albeit very limited, testing of legal judgment and analysis in the determination of true relevance in these topics. In legal search, relevancy determinations obviously depend on legal analysis skills. In most of the 2016 Total Recall topics, however, no special legal training or analysis was required for a determination of true relevance.

At Home participants were asked to track and report their manual efforts. The e-Discovery Team did this by recording the number of documents that were human-reviewed and classified prior to submission. More were reviewed after submission as part of the Team's TREC relevance checking. Virtually all documents human-reviewed were also classified, although not all documents classified were used for active training of the software classifier. The Team also tracked effort by the number of attorney hours worked, as is traditional in legal services. Although the amount of time varied somewhat by topic, the average time spent per topic was only 6.89 hours. The average review and classification speed for each project was 42,106 files per hour (9,863,366/234.25).

Again, for the full picture and complete details of our work please see the complete 164 page report to TREC of the e-Discovery Team’s Participation in the 2016 Total Recall Track.

________



What Chaos Theory Tells Us About e-Discovery and the Projected 'Information → Knowledge → Wisdom' Transition

May 20, 2016
[Photo illustration: Gleick & Losey meeting sometime in the future]

This article assumes a general, non-technical familiarity with the scientific theory of Chaos. See James Gleick's book, Chaos: Making a New Science (1987). This field of study is not usually discussed in the context of "The Law," although there is a small body of literature outside of e-discovery. See: Chen, Jim, Complexity Theory in Legal Scholarship (Jurisdynamics 2006).

The article begins with a brief, personal recapitulation of the basic scientific theories of Chaos. I buttress my own synopsis with several good instructional videos. My explanation of the Mandelbrot Set and Complex numbers is a little long, I know, but you can skip over that and still understand all of the legal aspects. In this article I also explore the application of the Chaos theories to two areas of my current work:

  1. The search for needles of relevant evidence in large, chaotic, electronic storage systems, such as email servers and email archives, in order to find the truth, the whole truth, and nothing but the truth needed to resolve competing claims of what happened – the facts – in the context of civil and criminal law suits and investigations.
  2. The articulation of a coherent social theory that makes sense of modern technological life, a theory that I summarize with the words/symbols: Information → Knowledge → Wisdom. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers and the more recent, How The 12 Predictions Are Doing That We Made In “Information → Knowledge → Wisdom.”

Introduction to the Science of Chaos

Gleick's book on Chaos provides a good introduction to the science of chaos and, even though written in 1987, is still a must read. For those who read it long ago, like me, here is a good, short (3:53) refresher video, James Gleick on Chaos: Making a New Science (Open Road Media, 2011), below:

A key leader in the Chaos Theory field is the late great French mathematician, Benoit Mandelbrot (1924-2010) (shown right). Benoit, a math genius who never learned the alphabet, spent most of his adult life employed by IBM. He discovered and named the natural phenomena of fractals. He discovered that there is a hidden order to any complex, seemingly chaotic system, including economics and the price of cotton. He also learned that this order was not causal and could not be predicted. He arrived at these insights by the study of geometry, specifically the rough geometric shapes found everywhere in nature and mathematics, which he called fractals. The most famous fractal he discovered now bears his name, the Mandelbrot fractal, shown in the computer image below and explained further in the video that follows.

[Image: The Mandelbrot set]

Look here for thousands of additional videos of fractals with zoom magnifications. You will see the recursive nature of self-similarity over varying scales of magnitude. The patterns repeat with slight variations. The complex patterns at the rough edges continue infinitely without repetition, much like Pi. They show the unpredictable element and the importance of initial conditions played out over time. The scale of the in-between dimensions can be measured. Metadata remains important in all investigations, legal or otherwise.

[Image: The Mandelbrot equation]

The Mandelbrot is based on a simple mathematical formula involving feedback and Complex Numbers: z ⇔ z² + c. The 'c' in the formula stands for any Complex Number. Unlike all other numbers, such as the natural numbers one through nine (1, 2, 3, 4, 5, 6, 7, 8, 9), the Complex Numbers do not exist on a horizontal number line. They exist only on an x-y coordinate plane, where regular numbers on the horizontal grid combine with so-called Imaginary Numbers on the vertical grid. A complex number is shown as c = a + bi, where a and b are real numbers and i is the imaginary number.

[Image: Complex number illustration]

A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary number. And that is all there is to it. Mandelbrot called the formula embarrassingly simple. That is the Occam's razor beauty of it.

To understand the full dynamics of all of this, remember what Imaginary Numbers are. They are a special class of numbers where a negative times a negative creates a negative, not a positive, contrary to the rule for all other numbers. In other words, with imaginary numbers, −2i times −2i equals −4, not +4. Imaginary numbers are formally defined by i² = −1.

Thus, the formula z ⇔ z² + c can be restated as z ⇔ z² + (a + bi).
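You can watch this behavior directly with Python's built-in complex number type. A minimal sketch, for illustration only:

```python
# Python writes the imaginary unit i as 1j.
i = 1j
print(i * i)            # (-1+0j): i squared is -1

# A negative imaginary number times a negative imaginary number stays negative:
print(-2j * -2j)        # (-4+0j), not +4

# One pass of the iteration z <=> z**2 + c, starting from z = 0:
c = complex(-0.5, 0.5)  # c = a + bi, with a = -0.5 and b = 0.5
z = 0
z = z**2 + c
print(z)                # (-0.5+0.5j)
```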

The Complex Numbers when iterated according to this simple formula – subject to constant feedback – produce the Mandelbrot set.


The value for z in the iteration always starts with zero. The ⇔ symbol stands for iteration, meaning the formula is repeated in a feedback loop: the end result of the last calculation becomes the beginning of the next, so z² + c becomes the z in the next repetition. z begins at zero, and different values are tried for c. When you repeat the simple multiplication and addition formula millions of times, and plot the results on a Cartesian grid, the Mandelbrot shape is revealed.

When iteration of a squaring process is applied to non-complex numbers, the results are always known and predictable. For instance, when any non-complex number greater than one is repeatedly squared, it quickly approaches infinity: 1.1 × 1.1 = 1.21; 1.21 × 1.21 = 1.4641; 1.4641 × 1.4641 ≈ 2.14359; and after ten iterations the number created is approximately 2.43 × 10⁴², which written out is 2,430,000,000,000,000,000,000,000,000,000,000,000,000,000. A number so large as to dwarf even the national debt. Mathematicians say of a number this size that it is approaching infinity.

The same is true for any non-complex number which is less than one, but in reverse; it quickly goes to the infinitely small, to zero. For example, with .9: .9 × .9 = .81; .81 × .81 = .6561; .6561 × .6561 ≈ .43047; and after only ten iterations it becomes approximately 1.39 × 10⁻⁴⁷, which written out is .0000000000000000000000000000000000000000000000139, a very small number indeed.
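Both runaway iterations are easy to verify in a few lines; this quick sketch matches the figures above:

```python
x = 1.1
for _ in range(10):   # ten repeated squarings, i.e. 1.1 ** (2 ** 10)
    x = x * x
print(f"{x:.3e}")     # ~2.43e+42, racing toward infinity

x = 0.9
for _ in range(10):
    x = x * x
print(f"{x:.3e}")     # ~1.39e-47, collapsing toward zero
```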

With non-complex numbers, such as real, rational or natural numbers, the squaring iteration must always go to the infinitely large or small unless the starting number is one. No matter how many times you square one, it will still equal one. But just the slightest bit more or less than one, and the iteration of squaring will attract it to the infinitely large or small. The same behavior holds true for complex numbers: numbers just outside of the circle |z| = 1 on the complex plane will jump off into the infinitely large, while complex numbers just inside |z| = 1 will quickly square into zero.

The magic comes by adding the constant c (a complex number) to the squaring process and starting from z at zero: z ⇔ z² + c. Then stable iterations – a set attracted neither to the infinitely small nor to the infinitely large – become possible. The potentially stable Complex Numbers lie both outside and inside the circle |z| = 1; specifically, on the complex plane they lie between −2.4 and .8 on the real number line, the horizontal x grid, and between −1.2 and +1.2 on the imaginary line, the vertical y grid. These numbers are contained within the black of the Mandelbrot fractal.

[Image: The Mandelbrot set plotted on the complex plane]

In the Mandelbrot formula z ⇔ z² + c, where you always start the iterative process with z equal to zero and c equaling any complex number, an endless series of seemingly random or chaotic numbers is produced. Like the weather, the stock market and other chaotic systems, negligible changes in quantities, coupled with feedback, can produce unexpected chaotic effects. The behavior of the complex numbers thus mirrors the behavior of the real world, where Chaos is obvious or lurks behind the most ordered of systems.

With some values of 'c' the iterative process immediately begins to increase exponentially toward infinity. These numbers are completely outside of the Mandelbrot set. With other values of 'c' the iterative process is stable for a number of repetitions, and only later in the dynamic process are they attracted to infinity. These are the unstable, strange-attractor numbers just on the outside edge of the Mandelbrot set. They are shown in computer graphics with colors or shades of grey according to the number of stable iterations. The values of 'c' which remain stable, repeating as a finite number forever, never attracted to infinity, and thus within the Mandelbrot set, are plotted as black.

[Diagram: Behavior of iterations inside, outside and on the edge of the Mandelbrot set]

Some iterations of complex numbers, like 1 − 1i, run off into infinity from the start, just like all of the real numbers. Other complex numbers are always stable, like −1 + 0i. Still other complex numbers stay stable for many iterations, and only further into the process do they unpredictably begin to increase or decrease exponentially (for example, .37 + .4i stays stable for 12 iterations). These are the numbers on the edge of inclusion of the stable numbers shown in black.

Chaos enters into the iteration because, out of the potentially infinite number of complex numbers in the window of −2.4 to .8 along the horizontal real number axis and −1.2 to 1.2 along the vertical imaginary number axis, there is an infinite subset of such numbers on the edge, and they cannot be predicted in advance. All that we know about these edge numbers is that if the z produced by any iteration lies outside of a circle with a radius of 2 on the complex plane, then the subsequent z values will go to infinity, and there is no need to continue the iteration process.

By using a computer you can escape the normal limitations of human time. You can try a very large number of different complex numbers and iterate them to see which kind they may be, finite or infinite. Under the Mandelbrot formula you start with z equal to zero and then try different values for c. When a particular value of c is attracted to infinity – produces a value for z with magnitude greater than 2 – then you stop that iteration, go back to z equals zero, and try another c, and so on, over and over again, millions and millions of times, as only a computer can do.
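That trial-and-error procedure takes only a few lines of code. Here is a minimal escape-time sketch; the iteration cap of 100 and the 0.1 sampling step are my illustrative choices, not anything canonical:

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z <=> z**2 + c from z = 0. Treat c as inside the set if
    |z| never escapes the radius-2 circle within max_iter repetitions."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:   # escaped: this c is attracted to infinity
            return False
    return True          # still stable; plot it as black

# Crude ASCII plot of the window described above:
# real axis -2.4 to 0.8, imaginary axis -1.2 to +1.2.
for im in range(12, -13, -2):
    print("".join(
        "#" if in_mandelbrot(complex(re / 10, im / 10)) else "."
        for re in range(-24, 9)))
```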

Mandelbrot was the first to discover that by using zero as the base z for each iteration, and trying a large number of the possible complex numbers with a computer on a trial-and-error basis, he could define the set of stable complex numbers graphically by plotting their location on the complex plane. This is exactly what the Mandelbrot figure is. Along with this discovery came the surprise realization of the beauty and fractal recursive nature of these numbers when displayed graphically.

The following Numberphile video by Holly Krieger, an NSF postdoctoral fellow and instructor at MIT, gives a fairly accessible, almost cutesy, yet still technically correct explanation of the Mandelbrot set.

Fractals and the Mandelbrot set are key parts of the Chaos theories, but there is much more to it than that. Chaos Theory impacts our basic Newtonian, cause-effect, linear world view of reality as a machine. For a refresher on the big picture of the Chaos insights and how the old linear, Newtonian, machine view of reality is wrong, look at this short summary: Chaos Theory (4:48)

Another Chaos Theory instructional video, applying the insights to psychology, is also worth your time: The Science and Psychology of the Chaos Theory (8:59, 2008). It suggests the importance of spontaneous actions in the moment, the so-called flow state.

Also see High Anxieties – The Mathematics of Chaos (59:00, BBC 2008) concerning Chaos Theories, Economics and the Environment, and Order and Chaos (50:36, New Atlantis, 2015).

Application of Chaos Theories to e-Discovery

The use of feedback, iteration and algorithmic processes is central to work in electronic discovery. For instance, my search methods to find relevant evidence in chaotic systems follow iterative processes, including continuous, interactive, machine learning methods. I use these methods to find hidden patterns in the otherwise chaotic data. An overview of the methods I use in legal search is summarized in the following chart. As you can see, steps four, five and six iterate. These are the steps where human-computer interactions take place.
[Diagram: The eight-step Predictive Coding workflow]

My methods place heavy reliance on these steps and on human-computer interaction, which I call a Hybrid process. Like Maura Grossman and Gordon Cormack, I rely heavily on high-ranking documents in this Hybrid process. The primary difference in our methods is that I do not begin to place a heavy reliance on high-ranking documents until after completing several rounds of other training methods. I call this four-cylinder multimodal training. This is all part of the sixth step in the 8-step workflow chart above. The four cylinders, meaning the four sources of training documents, are: (1) high-ranking documents, (2) midlevel-ranking or uncertain documents, (3) random documents, and (4) multimodal searches (including all types of search, such as keyword) directed by humans. A sketch of one such selection round follows.
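To make the four cylinders concrete, here is a hedged sketch of how one round of training-document selection might look in code. The function and the ranked-document structure are hypothetical stand-ins of my own devising, not any vendor's actual API:

```python
import random

def select_training_batch(ranked, keyword_hits, n=25):
    """Illustrative four-cylinder selection for one training round.
    ranked: list of (doc_id, predicted probability of relevance) pairs.
    keyword_hits: doc ids surfaced by human-directed multimodal searches."""
    by_rank = sorted(ranked, key=lambda d: d[1], reverse=True)
    high = [d for d, _ in by_rank[:n]]                         # (1) high ranking
    uncertain = [d for d, p in ranked if 0.4 <= p <= 0.6][:n]  # (2) midlevel/uncertain
    pool = [d for d, _ in ranked]
    rand = random.sample(pool, min(n, len(pool)))              # (3) random
    human = list(keyword_hits)[:n]                             # (4) multimodal, human-directed

    batch, seen = [], set()
    for d in high + uncertain + rand + human:  # dedupe, preserving order
        if d not in seen:
            seen.add(d)
            batch.append(d)
    return batch  # humans review this batch; the coded results train the machine
```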

Analogous Application of the Mandelbrot Formula to Express the Importance of the Creative Human Component in Hybrid Review

[Diagram: Steps four, five and six of the workflow]

Recall Mandelbrot's formula: z ⇔ z² + c, which is the same as z ⇔ z² + (a + bi). I have something like that going on in my steps four, five and six. If you plugged the numbers of the steps into the Mandelbrot formula it would read something like this: 4 ⇔ 4² + (5 + 6i). The fourth step is the key AI Predictive Ranking step, where the algorithm ranks the probable relevance of all documents. The fourth step of computer ranking is the whole point of the formula, so I will call AI Ranking here 'z'; it represents the left side of the formula. The fifth step is where humans read documents to determine relevance; let's call that 'r'. The sixth step is where humans train the computer, 't'. This is the Hybrid Active Training step, where the four-cylinder multimodal training methods are used to select documents to train the whole set. The documents in steps five and six, r and t, are added together for relevance feedback: (r + ti).

Thus z ⇔ z² + c, which is the same as z ⇔ z² + (a + bi), becomes under my system z ⇔ z + (r + ti). (Note: I took out the squaring, z², because there is no such exponential function in legal search; it's all addition.) What, you might ask, is the i in my version of the formula? This is the critical part in my formula, just as it is in Mandelbrot's. The imaginary number – i – in my version of the formula represents the creativity of the human conducting the training.

The Hybrid Active Training step is not fully automated in my system. I do not simply use the highest-ranking documents to train, especially in the early rounds of training, as do some others. I use a variety of methods in my discretion, especially multimodal search methods such as keywords, concept search, and the like. In text retrieval science this use of human discretion, human creativity and judgment, is called an ad hoc search. It contrasts with fully automated search, where the text retrieval experts try to eliminate the human element. See Mr. EDR for more detail on the 2016 TREC Total Recall Track, which had both ad hoc and fully automated divisions.

My work with legal search engines, especially predictive coding, has shown that new technologies do not work with the old methods and processes, such as linear review or keyword alone. New processes are required that employ new ways of thinking. The new methods link creative human judgments (i) with the computer's amazing abilities at text-reading speed, consistency, analysis, learning and ranking (z).

My latest processes, Predictive Coding 3.0, are variations of Continuous Active Training (CAT), where steps four, five and six iterate until the project is concluded. Grossman & Cormack call this Continuous Active Learning, or CAL, and they claim Trademark rights to CAL. I respect their right to do so (no doubt they grow weary of vendor rip-offs) and will try to avoid that acronym henceforth. My use of the acronym CAT essentially takes the view of the other side: the human side that trains, not the machine side that learns. In both Continuous Active Learning and CAT the machine keeps learning with every document that a human codes. Continuous Active Learning, or Training, makes the linear seed-set method obsolete, along with the control set and random training documents. See Losey, Predictive Coding 3.0.

In my typical implementation of Continuous Active Training I do not automatically include every coded document as a training document. This is the sixth training step ('t' in the prior formula). Instead of automatically using every document that has been coded relevant or irrelevant to train, I select the particular documents that I decide to use to train. This, in addition to the multimodal search in step six, Hybrid Active, is another way in which the equivalent of Imaginary Numbers comes into my formula, the uniquely human element (ti). I typically use almost every relevant document coded in step five, the 'r' in the formula, as a training document, but not all. z ⇔ z + (r + ti)

I exercise my human judgment and experience to withhold certain training documents. (Note: I never withhold hot trainers, meaning highly relevant documents.) I do this if my experience (I am tempted to say 'my imagination') suggests that including them as training documents will likely slow down or confuse the algorithm, even if only temporarily. I have found that this improves efficiency and effectiveness. It is one of the techniques I used to win document review contests. The sketch below shows where these judgment calls fit in the loop.
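Putting steps four, five and six together, the continuous training loop might be sketched as follows. Everything here – the classifier object and the human-judgment callables – is a hypothetical stand-in for work done by real software and real attorneys, not a description of any actual product:

```python
def continuous_active_training(classifier, corpus, select_batch,
                               human_review, keep_as_trainer, done):
    """z <=> z + (r + ti): rank (step 4), review (step 5), train (step 6),
    iterating until the attorney decides the project is complete."""
    while not done(classifier, corpus):
        ranking = classifier.rank(corpus)  # step 4: AI ranks every document
        batch = select_batch(ranking)      # hybrid multimodal picks (the 'i')
        coded = human_review(batch)        # step 5: attorney codes relevance ('r')
        # Step 6 ('t'): the attorney chooses which coded documents actually
        # train the machine -- most relevant ones, but not necessarily all.
        classifier.train([d for d in coded if keep_as_trainer(d)])
    return classifier
```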

This kind of intimate machine communication is possible because I carefully observe the impact of each set of training documents on the classifying algorithm, and carry over lessons – iterate – from one project to the next. I call this keeping a human in the loop and the attorney in charge of relevance scope adjudications. See Losey, Why the 'Google Car' Has No Place in Legal Search. We humans provide experienced observation, new feedback, different approaches, empathy, play and emotion. We also add a whole lot of other things too. The AI-Robot is the Knowledge fountain. We are the Wisdom fountain. That is why we should strive to progress into and through the Knowledge stage as soon as possible. We will thrive in the end-goal Wisdom state.

Application of Chaos Theory to Information→Knowledge→Wisdom

The first Information stage of the post-computer society in which we live is obviously chaotic. It is like the disconnected numbers that lie completely outside of the Mandelbrot set. It is pure information with only haphazard meaning. It is often just misinformation. Just exponential. There is an overwhelming deluge of such raw information, raw data, that spirals off into an infinity of dead-ends. It leads nowhere and is disconnected. The information is useless. You may be informed, but to no end. That is modern life in the post-PC era.

The next stage of society we seek, a Knowledge-based culture, is geometrically similar to the large black blobs that unite most of the figure. This is the finite set of numbers that provides all connectivity in the Mandelbrot set. Analogously, this will be a time when many loose ends will be discarded, false theories abandoned, and consensus will arise.

In the next stage we will not only be informed, we will be knowledgeable. The information will all be processed. The future Knowledge Society will be static, responsible, serious and well fed. People will be brought together by common knowledge. There will be large-scale agreements on most subjects. A tremendous amount of diversity will likely be lost.

After a while a knowledgeable world will become boring. Ask any professor or academic. The danger of the next stage will be stagnation, complacency, self-satisfaction. The smug complacency of a know-it-all world. This may be just as dangerous as the pure-chaos Information world in which we now live.

If society is to continue to evolve after that, we will need to move beyond mere Knowledge. We will need to challenge ourselves to attain new, creative applications of Knowledge. We will need to move beyond Knowledge into Wisdom.

I am inclined to think that if we ever do progress to a Wisdom-based society, it will be a place and time much like the unpredictable fractal edges of the Mandelbrot: stable to a point, but ultimately unpredictable, constantly changing, evolving. The basic patterns of our truth will remain the same, but they will constantly evolve and be refined. The deeper we dig, the more complex and beautiful it will be. The dry sameness of a Knowledge-based world will be replaced by an ever-changing flow, by more and more diversity and individuality. Our social cohesion will arise from recursivity and similarity, not sameness and conformity. A Wisdom-based society will be filled with fractal beauty. It will live ever zigzagging between the edge of the known and unknown. It will also necessarily have to be a time when people learn to get along together and share in prosperity and health, both physical and mental. It will be a time when people are accustomed to ambiguities and comfortable with them.

In Wisdom World knowledge itself will be plentiful, but will be held very lightly. It will be subject to constant reevaluation. Living in Wisdom will be like living on the rough edge of the Mandelbrot. It will be a culture that knows infinity firsthand. An open, peaceful, ecumenical culture that knows everything and nothing at the same time. A culture where most of the people, or at least a strong minority, have attained a certain level of personal Wisdom.

Conclusion

Back to our times, where we are just now discovering what machine learning can do, we are just beginning to pattern our investigations, our search for truth, in the Law and elsewhere, on new information gleaned from the Chaos theories. Active machine learning, Predictive Coding, is a natural outgrowth of Chaos Theory and the Mandelbrot Set. The insights of hidden fractal order that can only be seen by repetitive computer processes are prevalent in computer based culture. These iterative, computer assisted processes have been the driving force behind thousands of fact investigations that I have conducted since 1980.

I have been using computers to help me in legal investigations since 1980. The reliance on computers at first increased slowly, but steadily. Then from about 2006 to 2013 the increase accelerated, peaking in late 2013. The shift is beginning to level off. We are still heavily dependent on computers, but now we understand that human methods are just as important as software. Software is limited in its capacities without the human additive, especially in legal search. Hybrid, Man and Machine: that is the solution. But remember that the focus should be on us, human lawyers and search experts. The AIs we are creating and training should be used to Augment and Enhance our abilities, not replace them. They should complement and complete us.

The converse realization of Chaos Theory, that disorder underlies all apparent order, that if you look closely enough you will find it, also informs our truth-seeking investigatory work. There are no smooth edges. It is all rough. If you look closely enough, the border of any coastline is infinite.

The same is true of the complexity of any investigation. As every experienced lawyer knows, there is no black and white, no straight line. It always depends on so many things. Complexity and ambiguity are everywhere. There is always a mess, always rough edges. That is what makes the pursuit of truth so interesting. Just when you think you have it, the turbulent echo of another butterfly’s wings knock you about.

The various zigs and zags of e-discovery, and other investigative, truth-seeking activities, are what make them fascinating. Each case is different, unique, yet the same patterns are seen again and again with recursive similarity. Often you begin a search only to have it quickly burn out. No problem, try again. Go back to square one, back to zero, and try another complex number, another clue. Pursue a new idea, a new connection. You chase down all reasonable leads, understanding that many of them will lead nowhere. Even failed searches rule out negatives and so help in the investigation. Lawyers often try to prove a negative.

The fractal story that emerges from Hybrid Multimodal search is often unexpected. As the search matures you see a bigger story, a previously hidden truth. A continuity emerges that connects previously unrelated facts. You literally connect the dots. The unknown complex numbers – (a + bi) – the ones that do not spiral off into the infinite large or small, do in fact touch each other when you look closely enough at the spaces.

z ⇔ z² + (a + bi)

I am no Sherlock, but I know how to find ESI using computer processes. It requires an iterative sorting process, a hybrid multimodal process, using the latest computers and software. This process allows you to harness the infinite patience, analytics and speed of a machine to enhance your own intelligence … to augment your own abilities. You let the computer do the boring bits, the drudgery, while you do the creative parts.

The strength comes from the hybrid synergy. It comes from exploring the rough edges of what you think you know about the evidence. It does not come from linear review, nor simple keyword cause-effect. Evidence is always complex, always derived from chaotic systems. A full multimodal selection of search tools is needed to find this kind of dark data.

The truth is out there, but sometimes you have to look very carefully to find it. You have to dig deep and keep on looking to find the missing pieces, to move from Information → Knowledge → Wisdom.
