Lawyers’ Job Security in a Near Future World of AI, the Law’s “Reasonable Man Myth” and “Bagley Two” – Part Two

January 22, 2017

This is the second and concluding part of the two-part blog, Lawyers’ Job Security in a Near Future World of AI, the Law’s “Reasonable Man Myth” and “Bagley Two.” Click here to read Part One.


___________

Next consider Judge Haight’s closing words to the opinion dated December 22, 2016, Ruling On Plaintiff’s Motion To Compel; Bagley v. Yale, Civil Action No. 3:13-CV-1890 (CSH):

However, requiring this additional production, or a further deposition in case of need, is in keeping with a governing objective of the Federal Rules of Civil Procedure: “By requiring disclosure of all relevant information, the discovery rules allow ultimate resolution of disputed issues to be based on full and accurate understanding of true facts.” 6 Moore’s Federal Practice § 26.02 (Matthew Bender 3d ed.). 6

__________

6 While Yale may not welcome the measurement of its obligations in the case at bar by these principles, it is worth recalling that the treatise’s principal initial author, James Wm. Moore, was a towering figure on the faculty of Yale Law School. In his preface to the first edition (1938), Professor Moore referred to his effort “at all times to accord to the Rules the interpretation which is most likely to attain the general objective of the new practice: the settlement of litigation on the merits.” That is the interpretation this Ruling attempts to adopt.


Prof. Moore (1905-1994)

Poor Yale. Moore’s Federal Practice is one of the most cited treatises in the law. James W. Moore was the author of the 34-volume Moore’s Federal Practice (2d ed., 1948) and the three-volume Moore’s Manual: Federal Practice & Procedure (1962). He was also the Sterling Professor Emeritus of Law at Yale University, where he taught for 37 years. Who else but Yale can have anything in Moore’s thirty-four volume treatise held against them personally? Seems kind of funny, but I am sure Yale’s attorneys were not laughing.

Getting back to the case and Judge Haight’s decision: aside from showing the malleability and limits of reason, Bagley Two provides some important new precedent for e-discovery, namely its rulings on privilege and the discoverability of a party’s preservation efforts. Judge Haight starts by repeating what is now established law, that a party’s preservation obligations are not satisfied by the mere issuance of a litigation hold notice; a whole process is involved, and that process must be reasonable. He then goes on to provide a pretty good list of the facts and circumstances that should be considered to determine reasonability.

[A] party’s issuance of a litigation hold notice does not put an end to the party’s obligation to preserve evidence; it is, rather, the first in a series of related steps necessary to ensure that preservation. As Magistrate Judge Francis aptly observed in Mastr Adjustable Rate Mortgages Trust 2006 v. UBS Real Estate Securities Inc., 295 F.R.D. 77, 85 (S.D.N.Y. 2013): “A litigation hold is not, alone, sufficient; instead compliance must be monitored.”

In spoliation cases involving litigation hold notices, one can discern from Second Circuit and district court opinions a number of decisive questions:

1. When did a party’s duty to preserve evidence arise?
2. Did the party issue a litigation hold notice in order to preserve evidence?
3. When did the party issue a litigation hold notice, in relation to the date its duty to preserve the evidence arose?
4. What did the litigation hold notice say?
5. What did recipients of the litigation hold notice do or say, in response to or as result of, the notice?
6. After receiving recipients’ responses to the litigation hold notice, what further action, if any, did the party giving the notice take to preserve the evidence?

Questions 2 through 6 are entirely fact-specific to a given case. Question 1 is a mixed question of law and fact, whose legal element the Second Circuit defined in Fujitsu Ltd. v. Federal Express Corp., 247 F.3d 423, 436 (2d Cir. 2001): “The obligation to preserve evidence arises when the party has notice that the evidence is relevant to litigation or when a party should have known that the evidence may be relevant to future litigation.”

In the case at bar, I am unable to accept Yale’s argument that the litigation hold notices it issued about Bagley and the recipients’ responses to the notices are immune from discovery because (in the absence of proof that spoliation had in fact occurred) such documents “are subject to the attorney-client and to work product privileges,” Defendants’ Brief [Doc. 192], at 3. That contention is something of a stretch. … . Assuming that all of Clune’s litigation hold notices were sent to employees of Yale, Clune was in effect communicating with his client. However, the predominant purpose of that communication was to give recipients forceful instructions about what they must do, rather than advice about what they might do. 3

I like the list of six key facts to consider when weighing the reasonability of preservation efforts, especially the last one. But my primary point here is the malleability of reason shown in classifying the notice as unprotected. A letter from in-house counsel telling employees that the law requires them to preserve is not advice entitled to privilege protection? Its predominant purpose was instead unprotected instructions? The litigation hold notices were quoted earlier in the opinion. Their language included the following:

[A]ll members of the Yale faculty and staff who have information in their possession or control relating or referring in any way to Professor Bagley, her employment and teaching at SOM, or the circumstances relating to the non-renewal of her faculty appointment (collectively “this Matter”) have a legal obligation to preserve that information. The law imposes this obligation to prevent the loss of potential evidence during litigation. You must preserve and retain, and not alter, delete, remove, discard or destroy, directly or indirectly, any information concerning this Matter. Failure to preserve information could seriously undermine Yale’s legal position and lead to legal sanctions.

The lawyer’s letter tells employees that they “have a legal obligation to preserve,” and spells out the legal consequences if they do not. Yet this letter is not advice, because its predominant purpose is just an unprotected instruction? That is the holding.

Judge Haight gets rid of work product protection too.

As for the work product doctrine, it “is not actually a privilege, but rather a qualified immunity from discovery,” codified in Fed. R. Civ. P. Rule 26(b)(3), whose purpose “is to protect an attorney’s mental processes so that the attorney can analyze and prepare for the client’s case without interference from an opponent.” 6 Moore’s Federal Practice, § 26.70[1] (Matthew Bender 3d ed.). 4 That purpose is not implicated by the present exercise.

__________

4 Fed. R. Civ. P. 26(b)(3) protects from disclosure those materials which reveal “the mental impressions, conclusions, opinions, or legal theories of a party’s attorney.” See also In re Steinhardt Partners, L.P., 9 F.3d 230, 234 (2d Cir. 1993) (“At its core, the work product doctrine shelters the mental processes of the attorney, providing a privileged area within which he can analyze and prepare his client’s case.”) (quoting United States v. Nobles, 422 U.S. 225, 238 (1975)) (emphasis added).

I do not agree with Judge Haight on this aspect of his ruling. I think both work product protection and the attorney-client privilege apply to these particular notices, and his “reasoning” on this issue is wrong. I do, however, agree with his final ruling requiring production. I think the protections had been waived by the circumstances and by the actions of defense counsel, actions which, by the way, were correct; the waiver on their part was necessary. Judge Haight also mentioned waiver, but only as alternative grounds, in dicta, in footnote three:

3 The Court also notes that to the extent that Yale’s litigation hold notices included the text of the exemplar provided to Plaintiff as “document preservation notices,” that text has already been revealed publicly in this case, so that secrecy or privilege relating to that language was destroyed or waived. See Doc. 191-1, Ex. F.

Judge Haight then looks at the question of when Yale’s duty to preserve commenced. Recall that Yale kept adding custodians in eight stages. The first notices were sent pre-litigation. They were made, I note, after the mental processes of Yale’s lawyer told him that litigation was reasonably likely. The last were made after suit was filed, again based on the lawyer’s mental processes leading him to believe that these additional witnesses might have relevant evidence. The mental processes of Plaintiff’s attorneys led them to believe that all of the notices, including the pre-litigation notices, were sent too late, and thus that spoliation was likely. Here is Judge Haight’s analysis of the trigger issue:

When, during the course of this melancholy chain of events, should Yale have known that evidence pertinent to Bagley’s reappointment might be relevant to future litigation? That is a crucial question in spoliation analysis. A state of reasonable anticipation clearly antedates the actual filing of a complaint; in Fujitsu, 247 F.3d at 436, the Second Circuit was careful to couple actual present and possible future litigation as catalysts of equal strength for the preservation of evidence.

Bagley has not yet formally moved for spoliation sanctions, and so the question is not yet before me for decision, but some preliminary, non-binding observations may be made. The record previously made in the case shows that Bagley’s personal distress and institutional disapproval and distrust grew throughout the winter and spring of 2012 (the last year of her five-year appointment), so that when on May 24, 2012, Dean Snyder told Bagley that she would not be reappointed, it would not be irrational to suppose that Bagley might soon transform herself from disheartened academic to vengeful litigant. In fact, Bagley filed an internal discrimination complaint against Yale during the following month of June 2012 (which had the effect of bringing Provost Salovey out of the wings and onto the stage).

Note the Judge’s use of the phrase not be irrational to suppose. What is the impact of hindsight bias on this supposedly objective, rational analysis? Bagley’s later actions made it obvious that she would sue. She did sue. The lawsuit has been very contentious. But was it really all that obvious back in 2012 that Yale would end up in the federal courthouse? I personally doubt it, but admit it is a close judgment call. We lawyers say that a lot. All that phrase really means is that reason is not objective. It is in the eye of the beholder.

Judge Haight then wraps up his analysis in Bagley Two.

What happened in this case is that Yale identified 65 individuals who might have evidence relevant to Bagley’s denial of reappointment, and issued them litigation hold notices in eight separate batches, a process that took a considerable amount of time. The first nine notices were sent nine months after Snyder told Bagley she would not be reappointed. The last was sent eight months after Bagley filed this action. To characterize the pace of this notification process as culpable or even negligent would be premature on the present record, but it is fair to say that it was leisurely, to an extent making it impossible to dismiss as frivolous Bagley’s suggestion that she might move for a spoliation sanction. The six questions outlined supra arise in this case, and the factors pertinent to resolving them include an unreasonable delay in issuing the notices and a subsequent failure to implement and monitor the recipients’ responses. Judge Sweet said in Stimson that the Second Circuit has left open “the question of whether a sufficiently indefensible failure to issue a litigation hold could justify an adverse inference on its own,” and an additional factor would be “the failure to properly implement the litigation hold even after it was issued.” 2016 WL 54684, at *6. These are legitimate questions in the case at bar. Bagley is entitled to discovery with respect to them. 5 (footnote citations omitted)

I certainly agree with Judge Haight on all of those points and on the law. Those factual circumstances do justify the modest amount of discovery requested by the plaintiff in this motion.

Now we get to the actual Order on the pending motion to compel:

Therefore I conclude that in the circumstances of this case, Bagley’s “Motion to Compel” [Doc. 190] is GRANTED. Bagley is entitled to examine the litigation hold notices issued by Yale, and the responsive survey forms that notice recipients returned to Yale. These documents bear directly upon the questions courts identify as dispositive in spoliation cases. Bagley is entitled to discovery in these areas, in order to discern the merit or lack of merit of a formal claim for spoliation. To the extent that Yale objects to production of these documents on the grounds of privilege or the work product doctrine, the objections are OVERRULED.

For the same reasons, Bagley is also entitled to an affidavit from a Yale officer or employee (not a notice recipient or recipients) which describes what non-ESI documents Yale received from notice recipients and what was done with them. On a spoliation claim, Bagley will ultimately bear the burden of showing that pertinent evidence was destroyed or rendered unavailable. This discovery may cast light on that disputed issue. Yale may prefer not to have to produce that information; Yale’s counsel miss no opportunity to remind the Court how much discovery effort the case has previously required.

Judge Haight then ended his opinion with the previously quoted zinger regarding Yale’s famous law professor, Moore. This zinger, along with the comments about Yale’s leisurely efforts and Yale counsel’s missing no opportunity to remind the court, tells a story of its own. It shows the emotional undertone. So too does his earlier noted comment about “spoliation” being a cardinal litigation vice, well known to practicing attorneys and judges, but “perhaps unfamiliar” to academics. I suspect this goes beyond humor.

Artificial Intelligence and the Future of Employment

I am sure legal reason will improve in the future and become less subjective, less subject to hidden irrationalities and prejudices. By using artificial intelligence our legal doctrines and decision-making can be improved, but only if human judges remain in charge. The same comment goes for all attorneys. In fact, it applies to all current employment.

The doom and gloom futurists disagree. They think AI will replace humans at their jobs, not empower them. They envision a future of cold automation, not man-machine augmentation. They predict widespread unemployment, with a loss of half of our current jobs. A University of Oxford study predicted that almost half of all U.S. jobs could be lost to automation in the next twenty years. Even the influential World Economic Forum predicts that five million jobs could be lost by 2020. Five Million Jobs by 2020: the Real Challenge of the Fourth Industrial Revolution. Also see The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution (World Economic Forum, Jan. 2016).

A contrary, “augmentation”-oriented group predicts the opposite: that at least as many new jobs will be created as lost. This is a subject of hot debate. See, e.g., Artificial intelligence will save jobs, not destroy them (World Economic Forum, 1/19/17). Readers know I am in the half-full camp.

James Bessen: Law Prophet of the Future of Employment

Many are like me and have an overall positive outlook, including James Bessen, an economist and Lecturer in Law at the Boston University School of Law. Jim Bessen, who was a good hacker with an entrepreneurial background (he created the first WYSIWYG desktop publishing software), has researched the history of computer use and employment since 1980. Jim’s research has shown that for those who can keep up with technology, there will be new jobs to replace the ones lost. Bessen, How Computer Automation Affects Occupations: Technology, Jobs & Economics, Boston University School of Law, Law & Economics Working Paper No. 15-49 (1/16/16). He also found that wages in occupations that use computers grow faster, not slower:

[B]ecause higher wage occupations use computers more, computer use tends to increase well-paid jobs and to decrease low-paid jobs. Generally, computer use is associated with a substantial reallocation of jobs, requiring workers to learn new skills to shift occupations.

Also see Bessen’s article in The Atlantic, The Automation Paradox: When computers start doing the work of people, the need for people often increases (The Atlantic, 1/19/16), where he said:

…workers will have greater employment opportunities if their occupation undergoes some degree of computer automation. As long as they can learn to use the new tools, automation will be their friend.

This is certainly consistent with what I have seen in the legal profession since I started practice in 1980.

James Bessen has also written a book on this subject, Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth (Yale U. Press 2015). In this book, Bessen, in his own words:

… looks at both economic history and the current economy to understand how new technology affects ordinary workers and how society can best meet the challenges it poses.

He notes that major new technologies always require new human work skills and knowledge, and that today, as before, they are slow and difficult to develop. He also makes the observation, which is again consistent with my own experience as a tech-lawyer, that relevant technical knowledge “develops slowly because it is learned through experience, not in the classroom.” In his analysis that is because the new knowledge is not yet standardized. I agree. This is one reason my work has been focused on the standardization of the use of active machine learning in the search for electronic evidence; see for example Predictive Coding 4.0 and my experiments at the TREC conference on predictive coding methods sponsored by the National Institute of Standards and Technology. Also see: Electronic Discovery Best Practices. In spite of my efforts on standards and best practices for e-discovery, we are still in the early, rapidly changing, non-standardized stage of new technology. Bessen argues that employer policies and government policies should encourage such on-the-job learning and perfection of new methods.

Jim Bessen’s findings are starting to be discussed by many who are now concerned with the impact of AI on employment. See, for instance, Andrea Willige’s article for the World Economic Forum’s 2017 Davos meeting, Two reasons computers won’t destroy all the jobs (“jobs don’t disappear, they simply move up the skills and wage ladder. For workers to move up the ranks, they must acquire the necessary skillset.”).

Standardization v. On-the-Job Training

Moving on up requires new employment skills. It requires workers who can step-in, step-up, step-aside, step-narrowly, or step-forward. See Only Humans Need Apply; Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry; and Gonsowski, A Changing World: Ralph Losey on “Stepping In” for e-Discovery (Relativity Blog) (interview with references to the 5-steps described in Only Humans Need Apply). Unless and until standardization emerges, and is taught in classrooms, the new skills will be acquired by on-the-job learning only, sometimes with experienced trainers, but more often self-taught by trial and error.

I have been working on creating the perfect, standard method for electronic document review using predictive coding since Da Silva Moore. I have used trial and error and on-the-job learning, buttressed by spending a month a year over the last five years on scientific research and experiments with my own team (remember my Borg experiments and videos?) and with TREC, EDI and Kroll Ontrack. Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology (a five-part written and video series comparing two different kinds of predictive coding search methods); Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron; EDI-Oracle Study: Humans Are Still Essential in E-Discovery (LTN Nov. 2013); e-Discovery Team at TREC 2015 Total Recall Track, Final Report; TREC 2016 Total Recall Track NOTEBOOK.

After years of this work we have finally perfected and standardized a highly effective method for document review using predictive coding. We call it Predictive Coding 4.0. This method is complete, well-tested, proven and standardized for my team, but not yet accepted by the industry. Unfortunately, industry acceptance of one lawyer’s method is very difficult (impossible?) in the highly competitive, still young and emerging field of electronic document review. I created a standard because I had to for my own work, not because I unrealistically expect the industry to adopt it. The industry is still too young for that. I will continue with my on-the-job training, content with that, just as Bessen, Davenport and Kirby observe is the norm for all new technologies. Someday a standard will be generally accepted and taught in classrooms, but we are far from that.

Conclusion

There is more going on in Bagley Two than objective reason, even assuming such a thing exists. Experienced attorneys can easily read between the lines. Reasoned analysis is just the tip of the iceberg, or top of the pyramid, as I envisioned in the new model for Holistic Law outlined in my prior article, Scientific Proof.

There is far more to Senior District Judge Charles S. Haight, Jr., than his ability to be logical and apply reason to the facts. He is not just a “thinking machine.” He has wisdom from decades on the bench. He is perceptive, has feelings and emotions, good intuitions and, we can see, a sense of humor. The same holds true for most judges and lawyers, perhaps even law professors. We are all human and have many other capacities beyond what robots can be trained to do.

Reason is just one of the things that we humans do, and, as the work of Professor Ariely has shown, it is typically full of holes and clouded by hidden bias. We need the help of computers to get reason done right, to augment our logic and reasoning skills. Do not try to compete with robots on tasks involving reason, and do not try to exclude them from such tasks; you will ultimately lose that battle. Instead, work with the robots. Invite them in, but remain in control of the processes; use the AI’s abilities to enhance and enlarge your own.

I am sure legal reason will improve in the future and become less subjective. This will happen when more lawyers Step-In, as discussed in Davenport and Kirby, Only Humans Need Apply; Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry; and A Changing World: Ralph Losey on “Stepping In” for e-Discovery.

Many of us have stepped-in, to use Davenport and Kirby’s language, to manage the use of TAR and AI in document review, not just me. Consider, for instance, attorney Alexander Hafez, currently a “Solutions Engineer” for FTI. He was the only other attorney featured in Only Humans Need Apply. Alex bootstrapped his way from minimum-wage contract document reviewer to his current large-vendor consultant “step-in” job by, in the book’s words, “educational bricolage” composed of on-the-job learning and “a specialized course or two and some autodidactic reading.” Id. pg. 144. There are thousands of lawyers in e-Discovery doing quite well in today’s economy. The use of AI and other advanced technologies is now starting to appear in other areas of the law too, including contract review, analysis and construction. See, e.g., Kira Systems, Inc.

As other areas of the Law become as enhanced and augmented as e-discovery, we will see new jobs open up for the steppers. Old mechanistic law jobs will be replaced. That is for sure. There will be jobs lost in the legal economy. But if Davenport, Kirby and Bessen are correct, and I for one think they are, new, better-paying jobs will be created to replace them. Still, for most luddite lawyers, young and old, who are unable to adapt and learn new technologies, the impact of AI on the Law could be devastating.

Only the tech-savvy will be able to move up the skill and wage ladder by stepping-in to make the technology work right. I attained the necessary skill set to do this with legal technology by teaching myself, by “hacking around” with computers. Yes, it was difficult, but I enjoyed this kind of learning. My story of on-the-job self-learning is very common. Thus the name of Bessen’s book, Learning by Doing. Others might do better in a more structured learning environment, such as a school, except that none currently exists for this sort of thing, at least in the Law. It falls between the cracks of law school and computer science. For now the self-motivated self-learners will continue to lead the way.

Not only do we need to improve our thinking with machines, we need to contribute our other talents and efforts. We need to engage and expand upon the qualities of our jobs that are most satisfying to us, that meet our human nature. This uniquely human work requires what are sometimes called “soft skills.” These primarily include the ability for good interpersonal communication, but also such things as the ability to work collaboratively, to adapt to a new set of demands, and to solve problems on the fly. Legal counseling is a prime example, according to the general counsel of Microsoft, Brad Smith. Microsoft’s Top Lawyer Toasts Legal Secretaries (Bloomberg Law, 1/18/17). Smith, who is now also Microsoft’s President, opined:

Individuals need to learn new skills to keep pace, and this isn’t always easy.  Over the next decade this could become more daunting still, as technology continues to change rapidly.  There is a broadening need for new technical skills and stronger soft skills.  The ability – and opportunity – to continue learning has itself become more important.

Brad Smith, Constructing a Future that Enables all Americans to Succeed, (Dept. of Commerce guest blog, 11/30/16).

The Wikipedia article on “soft skills” lists ten basic skills as compiled by Heckman and Kautz, Hard Evidence on Soft Skills, Labour Econ. 2012 Aug 1; 19(4): 451–464.

  • Communication – oral, speaking capability, written, presenting, listening.
  • Courtesy – manners, etiquette, business etiquette, gracious, says please and thank you, respectful.
  • Flexibility – adaptability, willing to change, lifelong learner, accepts new things, adjusts, teachable.
  • Integrity – honest, ethical, high morals, has personal values, does what’s right.
  • Interpersonal skills – nice, personable, sense of humor, friendly, nurturing, empathetic, has self-control, patient, sociability, warmth, social skills.
  • Positive attitude – optimistic, enthusiastic, encouraging, happy, confident.
  • Professionalism – businesslike, well-dressed, appearance, poised.
  • Responsibility – accountable, reliable, gets the job done, resourceful, self-disciplined, wants to do well, conscientious, common sense.
  • Teamwork – cooperative, gets along with others, agreeable, supportive, helpful, collaborative.
  • Work ethic – hard working, willing to work, loyal, initiative, self-motivated, on time, good attendance.


As Brad Smith correctly observed, the skills and tasks needed to keep pace with technology include these kinds of soft skills as well as new technological know-how, things like the best methods to implement new predictive coding software. The tasks, both soft and technical, are generally not overly repetitive and typically require some creativity, imagination, flexibility and inventiveness and, in my view, the initiative to exceed original parameters.

A concerned lawyer with real empathy who counsels fellow humans is not likely to be replaced anytime soon by a robot, no matter how cute. There is no substitute for caring, human relationships, for comforting warmth, wit and wisdom. The calm, knowledgeable, confident presence of a lawyer who has been through a problem many times before, and assures you that they can help, is priceless. It brings peace of mind, relaxation and trust far beyond the abilities of any machine.

Stepping-in is one solution for those of us who like working with new technology, but for the rest of humanity, soft skills are now even more important. Even we tech-types need to learn and improve upon our soft skills. The team approach to e-discovery, which is the basic premise of this e-Discovery Team blog, does not work well without them.

Brad Smith’s comment on the need for continued learning is key for everyone who wants to keep working in the future. It is the same thing that Bessen, Davenport and Kirby say. Continued learning is one reason I keep writing. It helps me to learn, and may help others to learn too, as part of their “autodidactic reading” and “educational bricolage.” (How else would I learn those words?) According to the research of Bessen, Davenport and Kirby, most of the key skills needed to keep pace can only be learned on the job and are usually self-taught. That is one reason online education is so important. It makes it easier than ever for otherwise isolated people to have access to specialized knowledge and trainers.


Predictive Coding 4.0 – Nine Key Points of Legal Document Review and an Updated Statement of Our Workflow – Part Four

October 2, 2016

This is the fourth installment of the article explaining the e-Discovery Team’s latest enhancements to electronic document review using Predictive Coding. Here are Parts One, Two and Three. This series explains the nine insights behind the latest upgrade to version 4.0 and the slight revisions these insights triggered to the eight-step workflow. In Part Two we covered the premier search method, Active Machine Learning (aka Predictive Coding). In this installment we will cover our insights into the remaining four basic search methods: Concept Searches (passive, unsupervised learning); Similarity Searches (families and near duplication); Keyword Searches (tested, Boolean, parametric); and Focused Linear Search (key dates & people). The five search types are all shown in our newly revised Search Pyramid below (last revised in 2012).

[Diagram: the revised Search Pyramid]

Concept Searches – aka Passive Learning

As discussed in Part Two of this series, the e-discovery search software company Engenium was one of the first to use Passive Machine Learning techniques. Shortly after the turn of the century, in the early 2000s, Engenium began to market what later became known as Concept Searches. They were supposed to be a major improvement over the then-dominant Keyword Search. Kroll Ontrack bought Engenium in 2006 and acquired its patent rights to concept search. These software enhancements were then taken off the open e-discovery market and removed from all competitor software, except Kroll Ontrack’s. The same thing happened in 2014 when Microsoft bought Equivio. See e-Discovery Industry Reaction to Microsoft’s Offer to Purchase Equivio for $200 Million – Part One and Part Two. We have yet to see what Microsoft will do with it. All we know for sure is that its predictive coding add-on for Relativity is no longer available.

David Chaplin, who founded Engenium in 1998 and sold it in 2006, became Kroll Ontrack’s VP of Advanced Search Technologies from 2006-2009. He is now the CEO of two Digital Marketing Service and Technology (SEO) companies, Atruik and SearchDex. Other vendors emerged at the time to try to stay competitive with the search capabilities of Kroll Ontrack’s document review platform. They included Clearwell, Cataphora, Autonomy, Equivio, Recommind, Ringtail, Catalyst, and Content Analyst. Most of these companies went the way of Equivio and are now ghosts, gone from the e-discovery market. There are a few notable exceptions, including Catalyst, who participated in TREC with us in 2015 and 2016.

As described in Part Two of this series, the so-called Concept Searches all relied on passive machine learning that did not depend on training or active instruction by any human (aka supervised learning). It was all done automatically by computer study and analysis of the data alone, including semantic analysis of the language contained in documents. That meant you did not have to rely on keywords alone, but could state your searches in conceptual terms. Below is a screen-shot of one example of a concept search interface, using Kroll Ontrack’s EDR software.

[Screenshot: concept search interface in Kroll Ontrack’s EDR]

For a good description of these admittedly powerful, albeit now somewhat dated search tools (at least compared to active machine learning), see the afore-cited article by D4’s Tom Groom, The Three Groups of Discovery Analytics and When to Apply Them. The article refers to Concept Search as Conceptual Analytics, which it describes as follows:

Conceptual analytics takes a semantic approach to explore the conceptual content, or meaning of the content within the data. Approaches such as Clustering, Categorization, Conceptual Search, Keyword Expansion, Themes & Ideas, Intelligent Folders, etc. are dependent on technology that builds and then applies a conceptual index of the data for analysis.

Search experts and information scientists know that active machine learning, also called supervised machine learning, was the next big step in search after concept searches, including clustering, which are, in programming terms, known as passive or unsupervised machine learning. The below instructional chart by Hackbright Academy sets forth the key differences between supervised learning (predictive coding) and unsupervised or passive learning (analytics, aka concept search).

[Chart: Hackbright Academy, supervised vs. unsupervised machine learning]
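To make that distinction concrete, here is a minimal Python sketch, using scikit-learn, that runs both approaches on the same vectorized documents. The four toy documents and the relevance labels are hypothetical, for illustration only; this is not any vendor’s implementation.

```python
# Minimal sketch: unsupervised "concept" clustering vs. supervised learning
# on the same TF-IDF features. Toy data; illustration only.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [
    "committee memo regarding the reappointment decision",
    "holiday party invitation for all staff",
    "email thread about the tenure and reappointment review",
    "cafeteria menu for next week",
]
X = TfidfVectorizer().fit_transform(docs)

# Unsupervised (passive) learning: the algorithm groups similar documents
# on its own, with no human relevance input at all.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised (active) learning: the algorithm learns from an attorney's
# relevance coding and ranks every document by probability of relevance.
labels = [1, 0, 1, 0]  # hypothetical attorney coding: 1 = relevant
model = LogisticRegression().fit(X, labels)
relevance_scores = model.predict_proba(X)[:, 1]
print(clusters, relevance_scores)
```

The key difference is visible in the inputs: the clustering call never sees the labels variable, while the classifier is built from it.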

It is usually worthwhile to spend some time using concept search to speed up the search and review of electronic documents. We have found it to be of only modest value in simple search projects, with greater value added in more complex projects, especially those with very complex data. Still, in all projects, simple or complex, the use of Concept Search features such as document Clustering, Categorization, Keyword Expansion, and Themes & Ideas is at least somewhat helpful. These features are especially helpful in finding new keywords to try out, including wild-card stemming searches with instant results and data groupings.

In simple projects you may not need to spend much time with these kinds of searches, but we find that an expenditure of at least thirty minutes at the beginning of a search is cost-effective in all projects, even simple ones. In more complex projects it may be necessary to spend much more time on these kinds of features.

Passive, unsupervised machine learning is a good way to be introduced to the type of data you are dealing with, especially if you have not worked with the client’s data before. In TREC Total Recall 2015 and 2016, where we were working with the same datasets, our use of these searches diminished as our familiarity with the datasets grew. They can also help in projects where the search target is not well-defined; there, the data itself helps focus the target. It is helpful in that kind of sloppy, I’ll-know-it-when-I-see-it type of approach, which usually indicates a failure of both target identification and SME guidance. Even with simple data you will want to use passive machine learning in those circumstances.

Similarity Searches – Families and Near Duplication

In Tom Groom’s article, The Three Groups of Discovery Analytics and When to Apply Them, he refers to Similarity Searches as Structured Analytics, which he explains as follows:

Structured analytics deals with textual similarity and is based on syntactic approaches that utilize character organization in the data as the foundation for the analysis. The goal is to provide better group identification and sorting. One primary example of structured analytics for eDiscovery is Email Thread detection where analytics organizes the various email messages between multiple people into one conversation. Another primary example is Near Duplicate detection where analytics identifies documents with like text that can be then used for various useful workflows.

These methods can always improve the efficiency of a human reviewer’s efforts. They make it easier and faster for human reviewers to put documents in context. They also help a reviewer minimize repeat readings of the same language or the same document. The near-duplicate clustering of documents can significantly speed up review. In some corporate email collections the use of Email Thread detection can also be very useful. The idea is to read the last email first, or to read in chronological order from the bottom of the email chain to the top. The ability to instantly see on demand the parents and children of email collections can also speed up review and improve context comprehension.

[Screenshot: email thread detection in Kroll Ontrack’s EDR]
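As a rough illustration of how near-duplicate detection can work under the hood, here is a minimal Python sketch using character-shingle similarity. It is a sketch under stated assumptions, not Kroll Ontrack’s or any other vendor’s algorithm; the documents and the 0.8 cutoff are hypothetical.

```python
# Minimal near-duplicate sketch: compare documents by character 5-gram
# TF-IDF vectors and flag pairs above a similarity threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {  # hypothetical collection: doc_id -> extracted text
    "DOC-001": "Please preserve all records relating to this matter.",
    "DOC-002": "Please preserve all records relating to this matter. Thanks, J.",
    "DOC-003": "Lunch on Friday?",
}
ids = list(docs)
# Character shingles are robust to small edits like signatures and typos.
X = TfidfVectorizer(analyzer="char", ngram_range=(5, 5)).fit_transform(docs.values())
sim = cosine_similarity(X)

THRESHOLD = 0.8  # hypothetical cutoff; tune per collection
for i in range(len(ids)):
    for j in range(i + 1, len(ids)):
        if sim[i, j] >= THRESHOLD:
            print(f"{ids[i]} ~ {ids[j]} (similarity {sim[i, j]:.2f})")
```

In this toy run the first two documents, which differ only by a signature line, pair up, and a grouped pair like that can then be routed to a single reviewer together.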

All of these Similarity Searches are less powerful than Concept Search, but tend to be of even more value in simple to intermediate-complexity cases. In most simple or medium-complexity projects, one to three hours are typically spent with these kinds of software features. Also, for this type of search the volume of documents is important: the larger the dataset, and especially the larger the number of relevant documents located, the greater the value of these searches.

Keyword Searches – Tested, Boolean, Parametric

In my perspective as an attorney in private practice specializing in e-discovery and supervising the e-discovery work in a firm with 800 attorneys, almost all of whom do employment litigation, I have a good view of what is happening in the U.S. We have over fifty offices, and all of them at one point or another have some kind of e-discovery issue. All of them deal with opposing counsel who are sometimes mired in keywords, thinking keyword search is the end-all and be-all of legal search. Moreover, they usually want to go about it without any testing. Instead, they think they are geniuses who can just dream up good searches out of thin air. They think that because they know what their legal complaint is about, they know what keywords the witnesses will have used in all relevant documents. I cannot tell you how many times I see the word “complaint” in their keyword list. The guessing involved reminds me of the child’s game of Go Fish.

I wrote about this in 2009, and the phrase caught on after Judge Peck and others started citing to the article, which later became a chapter in my book, Adventures in Electronic Discovery, 209-211 (West 2011). The Go Fish analogy appears to be the third most popular reference in predictive coding case-law, after the huge Da Silva Moore case in 2012 that Judge Peck and I are best known for.


E-Discovery Team members employed by Kroll Ontrack also see hundreds of document reviews for other law firms and corporate clients. They see them from all around the world. There is no doubt in our minds that keyword search is still the dominant search method used by most attorneys. This is especially true in small to medium-sized firms, but also in larger firms that have no e-discovery search expertise. Many attorneys and paralegals who use a sophisticated, full-featured document review platform such as Kroll Ontrack’s EDR still only use keyword search. They do not use the many other powerful search techniques of EDR, even though those are readily available to them. The Search Pyramid to them looks more like the figure below, which I call a Dunce Hat.

[Diagram: the distorted Search Pyramid, a “Dunce Hat”]

The AI at the top, standing for Predictive Coding, is, for average lawyers today, still just a far-off, remote mountain top; something they have heard about, but never tried. Even though this is my specialty, I am not worried about it. I am confident that this will all change soon. Our new, easier-to-use methods will help with that; so too will the ever-improving software of the few vendors left standing. God knows the judges are already doing their part. Plus, high-tech propagation is an inevitable result of the next generation of lawyers assuming leadership positions in law firms and legal departments.

The old-timey paper lawyers around the world are finally retiring in droves. The aging out of the current leadership is a good thing. Their over-reliance on untested keyword search to find evidence is holding back our whole justice system. The law must keep up with technology, and lawyers must not fear math, science and AI. They must learn to keep up with technology. This is what will allow the legal profession to remain a bedrock of contemporary culture. It will happen. Positive disruptive change is just over the horizon and will soon rise.


In the meantime we encounter opposing counsel every day who think e-discovery means dreaming up keywords and demanding that every document containing their keywords be produced. The more sophisticated of this confederacy of dunces understand that we do not have to produce them all, that they might not all be per se relevant, but they demand that we review them all and produce the relevant ones. Fortunately, we have the revised rules to protect our clients from these kinds of disproportionate, unskilled demands. All too often this is nothing more than discovery as abuse.

This still-dominant approach to litigation is really nothing more than an artifact of the old-timey paper lawyers’ use of discovery as a weapon. Let me speak plainly. This is nothing more than adversarial bullshit discovery, with no real intent by the requesting party to find out what really happened. They just want to make the process as expensive and difficult as possible for the responding party because, well, that’s what they were trained to do. That is what they think smart, adversarial discovery is all about. It is just another tool in their negotiate-and-settle, extortion approach to litigation. It is the opposite of the modern cooperative approach.

I cannot wait until these dinosaurs retire so we can get back to the original intent of discovery: a cooperative pursuit of the facts. Fortunately, a growing number of our opposing counsel do get it. We are able to work very well with them to get things done quickly and effectively. That is what discovery is all about. Both sides save their powder for when it really matters: for the battles over the meaning of the facts, the governing law, and how the facts apply to that law for the result desired.

Tested, Parametric Boolean Keyword Search

In some projects tested Keyword Search works great.

The biggest surprise for me in our latest research is just how amazingly well keyword search can perform under the right circumstances. I’m talking about hands-on, tested keyword search based on human document review, file scanning and sampling, and backed by strong keyword search software. When keyword search is done with skill and is based on the evidence seen, typically in a refined series of keyword searches, very high levels of Precision, Recall and F1 are attainable. Again, the dataset and other conditions must be just right for it to be that effective, as explained in the diagram: simple data, clear target and good SME. Sometimes keywords are the best way to find clear targets like names and dates.

In those circumstances the other search forms may not be needed to find the relevant documents, or at least to find almost all of them. These are cases where the hybrid balance is tipped heavily towards the 400-pound man hacking away at the computer. All the AI does in these circumstances, when the human using keyword search is on a roll, is double-check and verify that it agrees that all relevant documents have been located. It is always nice to get a free second opinion from Mr. EDR. This is an excellent quality control and quality assurance application for our legal robot friends.


We are not going to try to go through all of the ins and outs of keyword search. There are many variables and features available in most document review platforms today that make it easy to construct effective keyword searches and otherwise find similar documents. This is the kind of thing that KO and I teach to the e-discovery liaisons in my firm and to other attorneys and paralegals handling electronic document reviews. The passive-learning software features can be especially helpful; so too can simple indexing and clustering. Most software programs have important features to improve keyword search and make it more effective. All lawyers should learn basic tested keyword search skills.

There is far more to effective keyword search than a simple Google approach. (Google is concerned with finding websites, not recall of relevant evidence.) Still, in the right case, with the right data and easy targets, keywords can open the door to both high recall and precision. Keyword search, even tested and sophisticated, does not work well in complex cases or with dirty data. It certainly has its limits, and there is a significant danger in over-reliance on keyword search. It is typically very imprecise and can all too easily miss unexpected word usage and misspellings. That is one reason the e-Discovery Team always supplements keyword search with a variety of other search methods, including predictive coding.
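To show what “tested” keyword search means in practice, here is a minimal Python sketch of the test-and-refine loop: run a Boolean query, draw a random sample of the hits for human grading, and let the measured precision drive the next revision of the terms. The toy corpus and terms are hypothetical, and real review platforms run this against indexed, parametric searches rather than raw string matching.

```python
# Minimal sketch of tested keyword search: query, sample, grade, refine.
import random

corpus = {  # hypothetical toy collection: doc_id -> extracted text
    "DOC-001": "Memo re Professor Bagley's reappointment review",
    "DOC-002": "Faculty newsletter: reappointment announcements",
    "DOC-003": "Budget spreadsheet FY2012",
}

def boolean_hits(docs, must_terms, not_terms=()):
    """Ids of docs containing every must_term and none of the not_terms."""
    out = []
    for doc_id, text in docs.items():
        low = text.lower()
        if all(t in low for t in must_terms) and not any(t in low for t in not_terms):
            out.append(doc_id)
    return out

hits = boolean_hits(corpus, must_terms=["reappointment"], not_terms=["newsletter"])
sample = random.sample(hits, min(25, len(hits)))
print(sample)
# An attorney now reviews the sample and counts the true hits:
#   precision = true_hits / len(sample)
# Low precision -> tighten terms or add NOT terms; signs of missed documents
# (few hits, absent custodians) -> broaden with stems and synonyms.
```

The point of the loop is that every revision of the terms is justified by evidence actually seen in the data, not by guesswork.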

Focused Linear Search – Key Dates & People

In Abraham Lincoln’s day all a lawyer had to do to prepare for a trial was talk to some witnesses, talk to his client, and review all of the documents the client had that could possibly be relevant. All of them, one right after the other. In a big case that might take an hour. Flash forward one hundred years to the post-photocopier era of the 1960s, and document review, linear-style, reviewing them all, might take a day. By the 1990s it might take weeks. With the data volumes of today such a review would take years.

All document review was linear up until the 1990s. Until that time almost all documents and evidence were paper, not electronic. The records were filed in accordance with an organization-wide filing system, combinations of chronological files and alphabetical ordering. If the filing was by subject, then the linear review conducted by the attorney would be by subject, usually in alphabetical order. Otherwise, without subject files, you would probably take the data and read it in chronological order. You would certainly do this with the correspondence file. This was done by lawyers for centuries to look for a coherent story for the case. If you found no evidence of value in the papers, then you would smile, knowing that your client’s testimony could not be contradicted by letters, contracts and other paperwork.

Clarence Darrow and William Jennings Bryan

This kind of investigative, linear review still goes on today. But with today’s electronic document volumes the task is carried out in warehouses by relatively low-paid document review contract lawyers. By itself it is a fool’s errand, but it is still an important part of a multimodal approach.


There is nothing wrong with Focused Linear Search when used in moderation. And there is nothing wrong with document review contract lawyers, except that they are underpaid for their services, especially the really good ones. I am a big fan of document review specialists.

Large linear review projects can be expensive and difficult to manage. Moreover, linear review typically has only limited use. It breaks down entirely when large teams are used, because human review is so inconsistent in document analysis. Losey, R., Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” (Parts One, Two and Three) (December 8, 2013, e-Discovery Team). When review of large numbers of documents is involved, the consistency rate among multiple human reviewers is dismal. Also see: Roitblat, Predictive Coding with Multiple Reviewers Can Be Problematic: And What You Can Do About It (4/12/16).

Still, linear review can be very helpful over limited time spans and in reconstructing a quick series of events, especially communications; a small sketch of such a focused pull appears below the diagram. Knowing what happened in one day in the life of a key custodian can sometimes give you a great defense, or reveal a great problem. Either is rare. Most of the time Expert Manual Review is helpful, but not critical. That is why Expert Manual Review is at the base of the Search Pyramid that illustrates our multimodal approach.

[Diagram: the Search Pyramid]
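For a concrete, if toy-scale, picture of the “key dates & people” idea, the following minimal Python sketch pulls one custodian’s documents for a narrow date window and orders them chronologically for linear reading. The metadata, names and dates are all hypothetical.

```python
# Minimal focused linear search sketch: one custodian, one date window,
# documents ordered for chronological reading.
from datetime import date

docs = [  # hypothetical metadata: (custodian, sent_date, doc_id)
    ("snyder", date(2012, 5, 24), "DOC-101"),
    ("snyder", date(2012, 5, 23), "DOC-099"),
    ("bagley", date(2012, 5, 24), "DOC-102"),
    ("snyder", date(2012, 6, 15), "DOC-140"),
]
custodian, start, end = "snyder", date(2012, 5, 23), date(2012, 5, 25)

focused = sorted(
    (d for d in docs if d[0] == custodian and start <= d[1] <= end),
    key=lambda d: d[1],
)
for who, sent, doc_id in focused:
    print(sent, doc_id)  # read in date order, from the bottom of the story up
```

The narrow filter is what keeps this kind of review affordable; the same read-everything approach applied to a whole collection is the fool’s errand described above.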

An attorney’s knowledge, wisdom and skill are the foundation of all that we do, with or without AI. The information that an attorney holds is also of value, especially information about the latest technology, but the human information roles are diminishing. Instead, the trend is to delegate mere information-level services to automated systems. The legal robots would not be permitted to go beyond information-fulfillment roles and provide legal advice based on human knowledge and wisdom. Their function would be constrained to information processing and reports. The metrics and technology tools they provide can make it easier for the human attorneys to build a solid evidentiary foundation for trial.

 To be continued …

 


Predictive Coding 4.0 – Nine Key Points of Legal Document Review and an Updated Statement of Our Workflow – Part Two

September 18, 2016

In Part One we announced the latest enhancements to our document review method, the upgrade to Predictive Coding 4.0. We explained the background that led to this upgrade: the TREC research and hundreds of projects we have done since our last upgrade a year ago. Millions have been spent to develop the software and methods we now use for Technology Assisted Review (TAR). As a result, our TAR methods are more effective and simpler than ever.

The nine insights we will share are based on our experience and research. Some of our insights may be complicated, especially our lead insight on Active Machine Learning, covered in this Part Two with our new description of IST, Intelligently Spaced Training. We consider IST the smart, human-empowering alternative to CAL. If I am able to write these insights up correctly, their obviousness should come through. They are all simple in essence. The insights and methods of Predictive Coding 4.0 document review are partially summarized in the chart below (which you are free to reproduce without edit).

[Chart: summary of the Predictive Coding 4.0 insights and methods]

1st of the Nine Insights: Active Machine Learning

Our method is Multimodal in that it uses all kinds of document search tools. Although we emphasize active machine learning, we do not rely on that method alone. Our method is also Hybrid in that we use both machine judgments and human (lawyer) judgments. Moreover, in our method the lawyer is always in charge. We may take our hand off the wheel and let the machine drive for a while, but under our versions of Predictive Coding, we watch carefully. We remain ready to take over at a moment’s notice. We do not rely on one brain to the exclusion of another. See, e.g., Why the ‘Google Car’ Has No Place in Legal Search (cautioning against over-reliance on fully automated methods of active machine learning). Of course the converse is also true; we never rely on our human brain alone. It has too many limitations. We enhance our brain with predictive coding algorithms. We add to our own natural intelligence with artificial intelligence. The perfect balance between the two, the Balanced Hybrid, is another of the insights that we will discuss later.

Active Machine Learning is Predictive Coding – Passive Analytic Methods Are Not

Even though our methods are multimodal and hybrid, the primary search method we rely on is Active Machine Learning. The overall name of our method is, after all, Predictive Coding. And, as any information retrieval expert will tell you, predictive coding means active machine learning. That is the only true AI method. The passive type of machine learning that some vendors use under the name Analytics is NOT the same thing as Predictive Coding. These passive Analytics have been around for years and are far less powerful than active machine learning.

These search methods, which used to be called Concept Search, were a big improvement upon relying on keyword search alone. I remember talking about concept search techniques in reverent terms when I did my first Legal Search webinar in 2006 with Jason Baron and Professor Doug Oard. That same year, Kroll Ontrack bought one of the original developers and patent holders of concept search, Engenium. For a short time, in 2006 and 2007, Kroll Ontrack was the only vendor to have these concept search tools. The founder of Engenium, David Chaplin, came with the purchase and became Kroll Ontrack’s VP of Advanced Search Technologies for three years. (Here is an interesting interview of Chaplin that discusses what he and Kroll Ontrack were doing with advanced search, analytic-type tools when he left in 2009.)

But search was hot, and soon boutique search firms like Clearwell, Cataphora, Content Analyst (the company recently purchased by popular newcomer kCura), and other e-discovery vendors developed their own concept search tools. Again, they were all using passive machine learning. It was a big deal ten years ago. For a good description of these admittedly powerful, albeit now dated search tools, see the concise, well-written article by D4’s Tom Groom, The Three Groups of Discovery Analytics and When to Apply Them.

Search experts and information scientists know that active machine learning, also called supervised machine learning, was the next big step in search after concept searches, which are, in programming terms, known as passive or unsupervised machine learning. I am getting out of my area of expertise here, and so am unable to go into any details, other than to present the below instructional chart by Hackbright Academy that sets forth the key differences between supervised learning (predictive coding) and unsupervised learning (analytics, aka concept search).

[Chart: Hackbright Academy, supervised vs. unsupervised machine learning]

What I do know is that the bona fide active machine learning software in the market today all uses either a form of Logistic Regression, including Kroll Ontrack’s, or an SVM, a Support Vector Machine.
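For readers who want to see the shape of that supervised training cycle, here is a minimal Python sketch of an active learning loop built on scikit-learn’s logistic regression. It is an illustration under stated assumptions, not Kroll Ontrack’s or any other vendor’s implementation: the six documents, the seed coding, and the attorney_codes stand-in for human review are all hypothetical.

```python
# Minimal active learning sketch: train on human coding, rank the unreviewed
# documents, send the top-ranked one back to the human, retrain.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = [  # hypothetical collection; in practice, extracted document text
    "tenure committee vote on reappointment",
    "parking garage closure notice",
    "provost email regarding the faculty appointment",
    "IT maintenance window announcement",
    "draft complaint alleging discrimination in reappointment",
    "holiday schedule for staff",
]
X = TfidfVectorizer().fit_transform(docs)

def attorney_codes(i):  # stand-in for the human reviewer's relevance call
    return 1 if "appointment" in docs[i] else 0

labeled = {0: 1, 1: 0}  # hypothetical seed coding: doc index -> relevant?
while len(labeled) < len(docs):
    idx = sorted(labeled)
    model = LogisticRegression().fit(X[idx], [labeled[i] for i in idx])
    unlabeled = [i for i in range(len(docs)) if i not in labeled]
    probs = model.predict_proba(X[unlabeled])[:, 1]  # machine ranks the rest
    pick = unlabeled[int(np.argmax(probs))]          # top-ranked unreviewed doc
    labeled[pick] = attorney_codes(pick)             # human supervises; loop retrains

ranking = model.predict_proba(X)[:, 1]  # final relevance ranking of all docs
```

The defining feature is the feedback loop: the ranking changes every time the human codes another document. Where methods differ is mainly in when and how often that retraining happens, which is the concern of the IST insight discussed above.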

e-Discovery Vendors Have Been Market Leaders in Active Machine Learning Software

After Kroll Ontrack absorbed the Engenium purchase, and its founder Chaplin completed his contract with Kroll Ontrack and moved on, Kroll Ontrack focused its efforts on the next big step, active machine learning, aka predictive coding. They have always been that kind of cutting-edge company, especially when it comes to search, which is one reason they are one of my personal favorites. A few of the other then-leading e-discovery vendors did too, including especially Recommind and the Israeli-based search company Equivio. Do not get me wrong, the concept search methods, now being sold under the name of TAR Analytics, are powerful search tools. They are a part of our multimodal tool-kit and should be part of yours. But they are not predictive coding. They do not rank documents according to your external input, your supervision. They do not rely on human feedback. They group documents according to passive analytics of the data. It is automatic, unsupervised. These passive analytic algorithms can be good tools for efficient document review, but they are not active machine learning, and they are nowhere near as powerful.


Search Software Ghosts

Many of the software companies that made the multi-million-dollar investments necessary to go to the next step and build document review platforms with active machine learning algorithms have since been bought out by big tech and repurposed out of the e-discovery market. They are the ghosts of legal search past. Clearwell was purchased by Symantec and has since disappeared. Autonomy was purchased by Hewlett Packard and has since disappeared. Equivio was purchased by Microsoft and has since disappeared. See e-Discovery Industry Reaction to Microsoft’s Offer to Purchase Equivio for $200 Million – Part One and Part Two. Recommind was recently purchased by OpenText and, although it is too early to tell for sure, may also soon disappear from e-Discovery.

Slightly outside of this pattern, but with the same ghosting result, the e-discovery search company Cataphora was bought by Ernst & Young, and has since disappeared. The year after the acquisition, Ernst & Young added predictive coding features from Cataphora to its internal discovery services. At this point, all of the Big Four accounting firms claim to have their own proprietary software with predictive coding. Along the same lines, at about the time of the Cataphora buy-out, consulting giant FTI purchased another e-discovery document review company, Ringtail Solutions (known for its petri-dish-like visualizations). Although not exactly ghosted from the e-discovery world after the purchase, Ringtail has been absorbed into the giant FTI.

microsoft_acquires

Outside of consulting/accountancy, in the general-service e-discovery industry for lawyers, there are at this point (late 2016) just a few document review platforms left that have real active machine learning. Some of the most popular ones left behind certainly do not. They only have passive-learning analytics. Again, those are good features, but they are not active machine learning, one of the nine basic insights of Predictive Coding 4.0 and a key component of the e-Discovery Team's document review capabilities.

predictive_coding_9_2

The power of the advanced, active learning technologies that have been developed for e-discovery is the reason for all of these acquisitions by big-tech and the Big Four or Five. It is not just about wild overspending, although that may well have been the case for Hewlett Packard's payment of $10.3 billion to buy Autonomy. The ability to do AI-enhanced document search and review is a very valuable skill, one that will only increase in value as our data volumes continue to explode. The tools used for such document review are also quite valuable, both inside the legal profession and, as the ghostings prove, well beyond into big business. See e-Discovery Industry Reaction to Microsoft's Offer to Purchase Equivio for $200 Million – Part Two.

The indisputable fact that so many big-tech companies have bought up the e-discovery companies with active machine learning software should tell you a lot. It is a testimony to the advanced technologies that the e-discovery industry has spawned. When it comes to advanced search and document retrieval, we in the e-discovery world are the best in the world, my friends, primarily because we have (or can easily get) the best tools. Smile.

usain-bolt-smiling

Search is king of our modern Information Age culture. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers. The search for evidence to peacefully resolve disputes is, in my most biased opinion, the most important search of all. It sure beats selling sugar water. Without truth and justice, all of the petty business quests for fame and fortune would crumble into anarchy, or worse, dictatorship.

With this background it is easy to understand why some of the e-discovery vendors left standing are not being completely candid about the capabilities of their document review software. (It is called puffing and is not illegal.) The industry is unregulated and, alas, most of our expert commentators are paid by vendors. They are not independent. As a result, many of the lawyers who have tried what they thought was predictive coding, and had disappointing results, have never really tried predictive coding at all. They have just used slightly updated concept search.

Ralph Losey with his "nobody read my blog" sad shirt

Alternatively, some of the disappointed lawyers may have used one of the many now-ghosted vendor tools. They were all early, version 1.0-type tools. For example, Clearwell's active machine learning feature was only on the market for a few months before Symantec bought and ghosted the company. (I think Jason Baron and I were the first people to see an almost completed demo of their product at a breakfast meeting a few months before it was released.) Recommind's predictive coding software was well developed at the time of its sell-out, but its methods of use were not. Most of its customers can testify to how difficult it is to operate. That is one reason OpenText was able to buy Recommind so cheaply, which, we now see, was part of a larger acquisition plan culminating in the purchase of Dell's EMC document management software.

All software still using the early methods, what we call version 1.0 and 2.0 methods based on control sets, is cumbersome and hard to operate, not just Recommind's system. I explained this in my article last year, Predictive Coding 3.0. I also mentioned in that article that some vendors would only let you use predictive coding alone for search. It was, in effect, mono-modal. That is also a mistake. All types of search must be used – multimodal – for the predictive coding type of search to work efficiently and effectively. More on that point later.

Maura Grossman Also Blows the Whistle on Ineffective “TAR tools”

Maura Grossman aka "Mr. Grossman" to her email friends

Maura Grossman, who is now an independent expert in this field, made many of these same points in a recent interview with Artificial Lawyer, a periodical dedicated to AI and the Law. AI and the Future of E-Discovery: AL Interview with Maura Grossman (Sept. 16, 2016). When asked about the viability of the “over 200 businesses offering e-discovery services” Maura said, among other things:

In the long run, I am not sure that the market can support so many e-discovery providers …

… many vendors and service providers were quick to label their existing software solutions as “TAR,” without providing any evidence that they were effective or efficient. Many overpromised, overcharged, and underdelivered. Sadly, the net result was a hype cycle with its peak of inflated expectations and its trough of disillusionment. E-discovery is still far too inefficient and costly, either because ineffective so-called “TAR tools” are being used, or because, having observed the ineffectiveness of these tools, consumers have reverted back to the stone-age methods of keyword culling and manual review.

caveman lawyer

Now that Maura is no longer with the conservative law firm of Wachtell Lipton, she has more freedom to speak her mind about caveman lawyers. It is refreshing and, as you can see, echoes much of what I have been saying. But wait, there is still more that you need to hear from the interview of the new Professor Grossman:

It is difficult to know how often TAR is used given confusion over what “TAR” is (and is not), and inconsistencies in the results of published surveys. As I noted earlier, “Predictive Coding”—a term which actually pre-dates TAR—and TAR itself have been oversold. Many of the commercial offerings are nowhere near state of the art; with the unfortunate consequence that consumers have generalised their poor experiences (e.g., excessive complexity, poor effectiveness and efficiency, high cost) to all forms of TAR. In my opinion, these disappointing experiences, among other things, have impeded the adoption of this technology for e-discovery. …

ul

Not all products with a "TAR" label are equally effective or efficient. There is no Consumer Reports or Underwriters Laboratories ("UL") that evaluates TAR systems. Users should not assume that a so-called "market leading" vendor's tool will necessarily be satisfactory, and if they try one TAR tool and find it to be unsatisfactory, they should keep evaluating tools until they find one that works well. To evaluate a tool, users can try it on a dataset that they have previously reviewed, or on a public dataset that has previously been labelled; for example, one of the datasets prepared for the TREC 2015 or 2016 Total Recall tracks. …

She was then asked another popular question by the Artificial Lawyer interviewer (never named), whose publication is apparently based in the UK:

As is often the case, many lawyers are fearful about any new technology that they don’t understand. There has already been some debate in the UK about the ‘black box’ effect, i.e., barristers not knowing how their predictive coding process actually worked. But does it really matter if a lawyer can’t understand how algorithms work?

Maura_Goog_Glasses

The following is an excerpt of Maura's answer. I suggest you consult the full article for a complete picture. AI and the Future of E-Discovery: AL Interview with Maura Grossman (Sept. 16, 2016). I am not sure whether she put on her Google Glasses to answer (probably not), but anyway, I rather like it.

Many TAR offerings have a long way to go in achieving predictability, reliability, and comprehensibility. But, the truth that many attorneys fail to acknowledge is that so do most non-TAR offerings, including the brains of the little black boxes we call contract attorneys or junior associates. It is really hard to predict how any reviewer will code a document, or whether a keyword search will do an effective job of finding substantially all relevant documents. But we are familiar with these older approaches (and we think we understand their mechanisms), so we tend to be lulled into overlooking their limitations.

The brains of the little black boxes we call contract attorneys or junior associates. So true. We will go into that more thoroughly in our discussion of the GIGO & QC insight.

Recent Team Insights Into Active Machine Learning

To summarize what I have said so far, in the field of legal search, only active machine learning:

  • effectively enhances human intelligence with artificial intelligence;
  • qualifies for the term Predictive Coding.

I want to close this discussion of active machine learning with one more insight. This one is slightly technical, and again, if I explain it correctly, it should seem perfectly obvious. It is certainly not new, and most search experts will already know this to some degree. Still, even for them, there may be some nuances to this insight that they have not thought of. It can be summarized as follows: active machine learning should have a double feedback loop with active monitoring by the attorney trainers.

robot-friend

feedback_loops

Active machine learning should create feedback for both the algorithm (the data classified) AND the human managing the training. Both should learn, not just the robot. They should, so to speak, be friends. They should get to know each other.

Many predictive coding methods that I have read about, or heard described, including how I first used active machine learning, did not sufficiently include the human trainer in the feedback loop. They were static types of training using a single feedback loop. These methods are, so to speak, very stand-offish, aloof. Under these methods the attorney trainer does not even try to understand what is going on with the robot. The information flow is one-way, from attorney to machine.

Mr_EDR

As I grew more experienced with the EDR software I started to realize that it is possible to understand, at least a little, what the black box is doing. Logistic-based AI is a foreign intelligence, but it is intelligence. After a while you start to understand it. So although I started by using one-sided machine training, I slowly gained the ability to read how EDR was learning. I then added another dimension, another feedback loop, and a very interesting one indeed. Now I not only trained and provided feedback to the AI as to whether its predictions of relevance were correct, or not, but I also received training from the AI as to how well, or not, it was learning. That in turn led to the humorous personification of the Kroll Ontrack software that we now call Mr. EDR. See MrEDR.com. When we reached this level, machine training became a fully active, two-way process.

We now understand that to fully supervise a predictive coding process you have to have a good understanding of what is happening. How else can you supervise it? You do not have to know exactly how the engine works, but you at least need to know how fast it is going. You need a speedometer. You also need to pay attention to how the engine is operating, whether it is over-heating, needs oil or gas, etc. The same holds true for teaching humans. Their brains are indeed mysterious black boxes. You do not need to know exactly how each student's brain works in order to teach them. You find out whether your teaching is getting through by asking questions.

For us supervised learning means that the human attorney has an active role in the process. A role where the attorney trainer learns by observing the trainee, the AI in creation. I want to know as much as possible, so long as it does not slow me down significantly.

ralph_bored

In other methods of using predictive coding that we have used or seen described, the only role of the human trainer is to say yes or no as to the relevance of a document. The decision as to what documents to select for training has already been predetermined. Typically it is the highest-ranked documents, but sometimes some mid-ranked "uncertain documents" or some "random documents" are added to the mix. The attorney has no say in what documents to look at. They are all fed to him or her according to predetermined rules. These decision-making rules are set in advance and do not change, as the sketch below illustrates. These active machine learning methods work, but they are slow and less precise, not to mention boring as hell.
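To make the contrast concrete, here is a minimal sketch, in generic Python, of the kind of fixed selection rules just described. Nothing in it comes from any actual vendor's code; the batch sizes and the 40-60% "uncertainty" band are assumptions of my own, made up for illustration:

```python
import random

def select_next_batch(ranked_docs, batch_size=10):
    """Single-loop selection: the rules are fixed in advance and never change.

    ranked_docs: list of (doc_id, probability) pairs, sorted highest first.
    The attorney's only role afterward is to code each document yes or no.
    """
    top = ranked_docs[:batch_size - 4]                              # highest ranked
    uncertain = [d for d in ranked_docs if 0.4 <= d[1] <= 0.6][:2]  # mid-ranked
    rand = random.sample(ranked_docs, 2)                            # random picks
    return top + uncertain + rand
```

In a double-loop method the attorney is free to ignore this list entirely and instead feed the machine documents found by keyword search, concept search, or any other means.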

The recall of these single-loop passive supervision methods may also not be as good. The jury is still out on that question. We are trying to run experiments on that now, although it can be hard to stop yawning. See an earlier experiment testing the single-loop teaching method of random selection: Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology.

These mere yes-or-no, limited participation methods are hybrid Man-Machine methods, but, in our opinion, they are imbalanced towards the Machine. (Again, more on the question of Hybrid Balance will be covered in the next installment of this article.) This single- versus dual-feedback distinction seems to be the basic idea behind the Double Loop Learning approach to human education depicted in the diagram below. Also see Graham Attwell, Double Loop Learning and Learning Analytics (Pontydysgu, May 4, 2016).

double-loop-learning

To quote Wikipedia:

The double loop learning system entails the modification of goals or decision-making rules in the light of experience. The first loop uses the goals or decision-making rules, the second loop enables their modification, hence “double-loop.” …

Double-loop learning is contrasted with “single-loop learning”: the repeated attempt at the same problem, with no variation of method and without ever questioning the goal. …

Double-loop learning is used when it is necessary to change the mental model on which a decision depends. Unlike single loops, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account the changes in the surroundings and the need for expression changes in mental models.

double-loop-learning2

The method of active machine learning that we use in Predictive Coding 4.0 is a type of double loop learning system. As such it is ideal for legal search, which is inherently ad hoc, where even the understanding of relevance evolves as the project develops. As Maura noted near the end of the Artificial Lawyer interview:

… e-discovery tends to be more ad hoc, in that the criteria applied are typically very different for every review effort, so each review generally begins from a nearly zero knowledge base.

The driving impetus behind our double feedback loop system is to allow the selection of training documents to vary according to the circumstances encountered. Attorneys select documents for training and then observe how these documents impact the AI's overall ranking of the documents. Based on this information the attorney then decides which documents to submit for training next. A single fixed mental model is not used, such as only submitting the ten highest-ranked documents for training.

The human stays involved and engaged and selects the next documents to add to the training based on what she sees. This makes the whole process much more interesting. For example, if I find a group of relevant spreadsheets by some other means, such as a keyword search, then, when I add these documents to the training, I observe how they impact the overall ranking of the dataset. For instance, did this training result in an increase in the relevance ranking of other spreadsheets? Was the increase nominal or major? How did it impact the ranking of other documents? For instance, were emails with a lot of numbers in them suddenly ranked much higher? Overall, was this training effective? Were the documents that moved up to, or near, the top of probable relevance in fact relevant as predicted? What was the precision rate like for these documents? Does the AI now have a good understanding of the relevance of spreadsheets, or does it need more training on that type of document? Should we focus our search on other kinds of documents?

You see all kinds of variations on that. If the spreadsheet understanding (ranking) is good, how does it compare to the AI's understanding (correct ranking) of Word docs or emails? Where should I next focus my multimodal searches? What documents should I next assign to my reviewers to read and make a relevancy determination? These kinds of considerations keep the search interesting, fun even. Work as play is the best kind. Typically we simply assign for attorney review the documents that have the highest ranking (which is the essence of what Grossman and Cormack call CAL), but not always. We are flexible. We, the human attorneys, are the second positive feedback loop.
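Reduced to code, that second feedback loop might look something like the sketch below. The function name and the 90% threshold are hypothetical, my own choices for illustration; the point is only that the attorney compares the ranking before and after a training session to see what the machine just learned:

```python
def training_impact(ranks_before, ranks_after, threshold=0.90):
    """Compare document rankings before and after one training session.

    ranks_before / ranks_after: dicts of {doc_id: probability of relevance}.
    Returns the documents newly promoted above the threshold, i.e. what the
    last batch of training documents taught the classifier.
    """
    hot_before = {d for d, p in ranks_before.items() if p >= threshold}
    hot_after = {d for d, p in ranks_after.items() if p >= threshold}
    newly_hot = hot_after - hot_before
    print(f"{len(newly_hot)} documents newly ranked above {threshold:.0%}")
    return newly_hot
```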

EDR_look

We like to remain in charge of teaching the classifier, the AI. We do not just turn it over to the classifier to teach itself. Although sometimes, when we are out of ideas and are not sure what to do next, we will do exactly that. We will turn over to the computer the decision of what documents to review next. We just go with his top predictions and use those documents to train. Mr. EDR has come through for us many times when we have done that. But this is more the exception than the rule. After all, the classifier is a tabula rasa. As Maura put it: each review generally begins from a nearly zero knowledge base. Before the training starts, it knows nothing about document relevance. The computer does not come with built-in knowledge of the law or relevance. You know what you are looking for. You know what is relevant, even if you do not know how to find it, or even whether it exists at all. The computer does not know what you are looking for, aside from what you have told it by your yes-no judgments on particular documents. But, after you teach it, it knows how to find more documents that probably have the same meaning.

raised_hands

By observation you can see for yourself, first hand, how your training is working, or not working. It is like a teacher talking to her students to find out what they learned from the last assigned reading materials. You may be surprised by how much, or how little, they learned. If the last approach did not work, you change the approach. That is double-loop learning. In that sense our active monitoring approach is like a continuous dialogue. You learn how, and if, the AI is learning. This in turn helps you to plan your next lessons. What has the student learned? Where does the AI need more help to understand the conception of relevance that you are trying to teach it?

Only_Humans_Need_Apply

This monitoring of the AI's learning is one of the most interesting aspects of active machine learning. It is also a great opportunity for human creativity and value. The inevitable advance of AI in the law can mean more jobs for lawyers overall, but only for those able to step up and change their methods. The lawyers able to play the second-loop game of active machine learning will have plenty of employment opportunities. See, e.g., Thomas H. Davenport, Julia Kirby, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines (Harper 2016).

Going down into the weeds a little bit more, our active monitoring, dual feedback approach means that when we use Kroll Ontrack's EDR software, we adjust the settings so that new learning sessions are not created automatically. They only run when and if we click on the Initiate Session button shown in the EDR screenshot below (arrow and words were added). We do not want the training to go on continuously in the background (typically meaning at periodic intervals of every thirty minutes or so). We only want the learning sessions to occur when we say so. In that way we know exactly what documents EDR is training on during a session. Then, when that training session is complete, we can see how the input of those documents has impacted the overall data ranking. For instance, are there now more documents in the 90% or higher probable relevance category, and if so, how many? The picture below is of a completed TREC project. The probability rankings are on the far left, with the number of documents shown in the adjacent column. Most of the documents in the 290,099-document collection of Bush email were in the 0-5% probable relevance ranking, not included in the screenshot.

edr_initiate_session

This means that the e-Discovery Team's active learning is not continuous, in the sense of always training. It is instead intelligently spaced. That is an essential aspect of our Balanced Hybrid approach to electronic document review. The machine training only begins when we click on the "Initiate Session" button in EDR that the arrow points to. It is only continuous in the sense that the training continues until all human review is completed. The spaced training, staggered in time, is itself an ongoing process until the production is completed. We call this Intelligently Spaced Training, or IST. Such ongoing training improves efficiency and precision, and also improves Hybrid human-machine communications. Thus, in our team's opinion, IST is a better process of electronic document review than training automatically without human participation, the so-called CAL approach promoted (and recently trademarked) by search experts and professors Maura Grossman and Gordon Cormack.
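In outline form, and stripped of any particular vendor's interface, the IST control flow looks something like the sketch below. Every function parameter is a hypothetical stand-in, since each review platform exposes these steps differently; the one essential feature is that training runs only when the human initiates it, never on a background timer:

```python
def ist_review(select_batch, code_documents, initiate_session, rank_all, done):
    """Intelligently Spaced Training: human-initiated, not continuous.

    All five parameters are caller-supplied functions, hypothetical stand-ins
    for whatever your own review platform provides.
    """
    while not done():
        batch = select_batch()          # attorney picks documents, multimodally
        coded = code_documents(batch)   # attorney codes relevance, yes or no
        initiate_session(coded)         # training runs now, and only now
        rank_all()                      # observe the new ranking before deciding
                                        # what to teach the classifier next
```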

ist-sm

Exactly how we space out the timing of training in IST is a little more difficult to describe without going into the particulars of a case. A full, detailed description would require the reader to have intimate knowledge of the EDR software. Our IST process is, however, software neutral. You can follow the IST dual feedback method of active machine learning with any document review software that has active machine learning capabilities and also allows you to decide when to initiate a training session. (By the way, a training session is the same thing as a learning session, but we like to say training, not learning, as that takes the human perspective, and we are pro-human!) You cannot do that if the training is literally continuous and cannot be halted while you input a new batch of relevance-determined documents for training.

The details of IST, such as when to initiate a training session, and which human-coded documents to select next for training, are an ad hoc process. It depends on the data itself, the issues involved in the case, the progress made, the stage of the review project and time factors. This is the kind of thing you learn by doing. It is not rocket science, but it does help keep the project interesting. Hire one of our team members to guide your next review project and you will see it in action. It is easier than it sounds. With experience, Hybrid Multimodal IST becomes an intuitive process, much like riding a bicycle.

ralph_trec

To summarize, active machine learning should be a dual feedback process with double-loop learning. The training should continue throughout a project, but it should be spaced in time so that you can actively monitor the progress, what we call IST. The software should learn from the trainer, of course, but the trainer should also learn from the software. This requires active monitoring by the teacher, who reacts to what he or she sees and adjusts the training accordingly so as to maximize recall and precision.

This is really nothing more than a common sense approach to teaching. No teacher who just mails in their lessons, and does not pay attention to the students, is ever going to be effective. The same is true for active machine learning. That’s the essence of the insight. Simple really.

Next, in Part Three, I will address the related insights of Balanced Hybrid.

To be Continued …


What Chaos Theory Tells Us About e-Discovery and the Projected ‘Information → Knowledge → Wisdom’ Transition

May 20, 2016
Ralph and Gleick

Gleick & Losey meeting sometime in the future

This article assumes a general, non-technical familiarity with the scientific theory of Chaos. See James Gleick's book, Chaos: Making a New Science (1987). This field of study is not usually discussed in the context of "The Law," although there is a small body of literature outside of e-discovery. See: Chen, Jim, Complexity Theory in Legal Scholarship (Jurisdynamics, 2006).

The article begins with a brief, personal recapitulation of the basic scientific theories of Chaos. I buttress my own synopsis with several good instructional videos. My explanation of the Mandelbrot Set and Complex numbers is a little long, I know, but you can skip over that and still understand all of the legal aspects. In this article I also explore the application of the Chaos theories to two areas of my current work:

  1. The search for needles of relevant evidence in large, chaotic, electronic storage systems, such as email servers and email archives, in order to find the truth, the whole truth, and nothing but the truth needed to resolve competing claims of what happened – the facts – in the context of civil and criminal law suits and investigations.
  2. The articulation of a coherent social theory that makes sense of modern technological life, a theory that I summarize with the words/symbols: Information → Knowledge → Wisdom. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers and the more recent, How The 12 Predictions Are Doing That We Made In “Information → Knowledge → Wisdom.”

Introduction to the Science of Chaos

Gleick's book on Chaos provides a good introduction to the science of chaos and, even though written in 1987, is still a must read. For those who, like me, read it long ago, here is a good, short (3:53) refresher video, James Gleick on Chaos: Making a New Science (Open Road Media, 2011), below:

mandelbrot_young

A key leader in the Chaos Theory field is the late, great French mathematician Benoit Mandelbrot (1924-2010) (shown right). Benoit, a math genius who never learned the alphabet, spent most of his adult life employed by IBM. He discovered and named the natural phenomena of fractals. He discovered that there is a hidden order to any complex, seemingly chaotic system, including economics and the price of cotton. He also learned that this order was not causal and could not be predicted. He arrived at these insights by the study of geometry, specifically the rough geometric shapes found everywhere in nature and mathematics, which he called fractals. The quintessential fractal he discovered now bears his name, the Mandelbrot Fractal, shown in the computer image below and explained further in the video that follows.

Mandelbrot set

Look here for thousands of additional videos of fractals with zoom magnifications. You will see the recursive nature of self-similarity over varying scales of magnitude. The patterns repeat with slight variations. The complex patterns at the rough edges continue infinitely without repetition, much like Pi. They show the unpredictable element and the importance of initial conditions played out over time. The scale of the in-between dimensions can be measured. Metadata remains important in all investigations, legal or otherwise.

mandelbrot_equation

The Mandelbrot is based on a simple mathematical formula involving feedback and Complex Numbers: z ⇔ z² + c. The 'c' in the formula stands for any Complex Number. Unlike other numbers, such as the natural numbers one through nine (1, 2, 3, 4, 5, 6, 7, 8, 9), the Complex Numbers do not exist on a horizontal number line. They exist only on an x-y coordinate plane, where regular numbers on the horizontal grid combine with so-called Imaginary Numbers on the vertical grid. A complex number is written as c = a + bi, where a and b are real numbers and i is the imaginary unit.

Complex_number_illustration

A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary unit. And that is all there is to it. Mandelbrot calls the formula embarrassingly simple. That is the Occam's razor beauty of it.

To understand the full dynamics of all of this, remember what Imaginary Numbers are. They are a special class of numbers where a negative times a negative creates a negative, not a positive, as is the rule with all other numbers. In other words, with imaginary numbers, −2i times −2i equals −4, not +4. Imaginary numbers are formally defined by i² = −1.

Thus, the formula z ⇔ z² + c can be restated as z ⇔ z² + (a + bi).
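In standard mathematical notation, with the iteration written out as a sequence rather than with the ⇔ symbol, the same formula reads:

```latex
z_{n+1} = z_n^2 + c, \qquad z_0 = 0, \qquad c = a + bi
```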

The Complex Numbers when iterated according to this simple formula – subject to constant feedback – produce the Mandelbrot set.

mandelbrot

Mandelbrot_formula

The value for z in the iteration always starts with zero. The ⇔ symbol stands for iteration, meaning the formula is repeated in a feedback loop. The end result of the last calculation becomes the beginning constant of the next: z² + c becomes the z in the next repetition. z begins at zero, and each trial uses a different value for c. When you repeat this simple multiplication-and-addition formula millions of times, and plot the results on a Cartesian grid, the Mandelbrot shape is revealed.

When the iteration of a squaring process is applied to non-complex numbers, the results are always known and predictable. For instance, when any non-complex number greater than one is repeatedly squared, it quickly approaches infinity: 1.1² = 1.21; 1.21² = 1.4641; 1.4641² ≈ 2.14358; and after ten iterations the number created is approximately 2.43 × 10⁴², which written out is 2,430,000,000,000,000,000,000,000,000,000,000,000,000,000. A number so large as to dwarf even the national debt. Mathematicians say of a number this size that it is approaching infinity.

The same is true for any non-complex number less than one, but in reverse; it quickly goes to the infinitely small, toward zero. For example, with .9: .9 × .9 = .81; .81 × .81 = .6561; .6561 × .6561 ≈ .43046; and after only ten iterations it becomes approximately 1.39 × 10⁻⁴⁷, which written out is .0000000000000000000000000000000000000000000000139…, a very small number indeed.
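Those two runaway sequences are easy to verify for yourself. A few lines of Python reproduce both numbers (ten squarings each of 1.1 and .9):

```python
# Square a starting value ten times over and watch it race off toward the
# infinitely large (1.1) or the infinitely small (0.9).
for start in (1.1, 0.9):
    z = start
    for _ in range(10):
        z = z * z
    print(start, "->", z)   # prints roughly 2.43e+42 and 1.39e-47
```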

With non-complex numbers, such as real, rational or natural numbers, the squaring iteration must always run off to the infinitely large or the infinitely small unless the starting number is exactly one. No matter how many times you square one, it will still equal one. But just the slightest bit more or less than one, and the iteration of squaring will attract it to the infinitely large or small. The same behavior holds true for complex numbers: numbers just outside of the circle |z| = 1 on the complex plane will jump off into the infinitely large, and complex numbers just inside |z| = 1 will quickly square toward zero.

The magic comes by adding the constant c (a complex number) to the squaring process and starting z from zero: z ⇔ z² + c. Then stable iterations – a set attracted to neither the infinitely small nor the infinitely large – become possible. The potentially stable Complex Numbers lie both outside and inside of the circle |z| = 1; specifically, on the complex plane they lie between −2.4 and .8 on the real number line, the horizontal x grid, and between −1.2 and +1.2 on the imaginary line, the vertical y grid. These numbers are contained within the black of the Mandelbrot fractal.

Mandelbrot_grid

In the Mandelbrot formula z ⇔ z² + c, where you always start the iterative process with z equal to zero and c equal to any complex number, an endless series of seemingly random or chaotic numbers is produced. As with the weather, the stock market and other chaotic systems, negligible changes in quantities, coupled with feedback, can produce unexpected chaotic effects. The behavior of the complex numbers thus mirrors the behavior of the real world, where Chaos is obvious or lurks behind the most ordered of systems.

With some values of 'c' the iterative process immediately begins to increase exponentially, racing off toward infinity. These numbers are completely outside of the Mandelbrot set. With other values of 'c' the iterative process is stable for a number of repetitions, and only later in the dynamic process are they attracted to infinity. These are the unstable strange-attractor numbers just on the outside edge of the Mandelbrot set. They are shown in computer graphics with colors or shades of grey according to the number of stable iterations. The values of 'c' which remain stable, repeating as finite numbers forever, never attracted to infinity, and thus within the Mandelbrot set, are plotted as black.

Mandel_Diagram

Some iterations of complex numbers, like 1 − 1i, run off into infinity from the start, just like all of the real numbers. Other complex numbers are always stable, like −1 + 0i. Still other complex numbers stay stable for many iterations, and then, only further into the process, unpredictably begin to increase or decrease exponentially (for example, .37 + .4i stays stable for 12 iterations). These are the numbers on the edge of inclusion of the stable numbers shown in black.

Chaos enters into the iteration because, out of the potentially infinite number of complex numbers in the window of −2.4 to .8 along the horizontal real number axis and −1.2 to 1.2 along the vertical imaginary number axis, there is an infinite subset of such numbers on the edge, and they cannot be predicted in advance. All that we know about these edge numbers is that if the z produced by any iteration lies outside of a circle with a radius of 2 on the complex plane, then the subsequent z values will go to infinity, and there is no need to continue the iteration process.

By using a computer you can escape the normal limitations of human time. You can try a very large number of different complex numbers and iterate them to see which kind they may be, finite or infinite. Under the Mandelbrot formula you start with z equal to zero and then try different values for c. When a particular value of c is attracted to infinity – produces a value of z with magnitude greater than 2 – you stop that iteration, go back to z equals zero, and try another c, and so on, over and over again, millions and millions of times, as only a computer can do.
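That trial-and-error process is only a few lines of code. Here is a minimal sketch in Python, which has complex numbers built in; the 100-iteration cutoff is an arbitrary assumption of mine (true boundary cases can take far longer to resolve):

```python
def mandelbrot_iterations(c: complex, max_iter: int = 100) -> int:
    """Iterate z = z**2 + c from z = 0; return how many steps stay bounded.

    If |z| ever exceeds 2, the orbit is guaranteed to escape to infinity,
    so we stop early. Reaching max_iter means c is probably in the set.
    """
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n              # escaped: c lies outside the set
    return max_iter               # never escaped: plot this c as black

print(mandelbrot_iterations(complex(-1, 0)))   # stable forever: -1 + 0i is in the set
print(mandelbrot_iterations(complex(1, -1)))   # escapes almost at once
```

Plotting the result of this test for every pixel's worth of c in the window described above is exactly how the Mandelbrot images in this article are generated.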

Mandel_zoom_08_satellite_antenna

Mandelbrot was the first to discover that by using zero as the base z for each iteration, and trying a large number of the possible complex numbers with a computer on a trial-and-error basis, he could define the set of stable complex numbers graphically by plotting their locations on the complex plane. This is exactly what the Mandelbrot figure is. Along with this discovery came the surprise realization of the beauty and fractal, recursive nature of these numbers when displayed graphically.

The following Numberphile video by Holly Krieger, an NSF postdoctoral fellow and instructor at MIT, gives a fairly accessible, almost cutesy, yet still technically correct explanation of the Mandelbrot set.

Fractals and the Mandelbrot set are key parts of the Chaos theories, but there is much more to it than that. Chaos Theory impacts our basic Newtonian, cause-effect, linear world view of reality as a machine. For a refresher on the big picture of the Chaos insights and how the old linear, Newtonian, machine view of reality is wrong, look at this short summary: Chaos Theory (4:48)

Another Chaos Theory instructional video, this one applying the insights to psychology, is also worth viewing: The Science and Psychology of the Chaos Theory (8:59, 2008). It suggests the importance of spontaneous actions in the moment, the so-called flow state.

Also see High Anxieties – The Mathematics of Chaos (59:00, BBC 2008) concerning Chaos Theories, Economics and the Environment, and Order and Chaos (50:36, New Atlantis, 2015).

Application of Chaos Theories to e-Discovery

The use of feedback, iteration and algorithmic processes is central to work in electronic discovery. For instance, my search methods to find relevant evidence in chaotic systems follow iterative processes, including continuous, interactive, machine learning methods. I use these methods to find hidden patterns in the otherwise chaotic data. An overview of the methods I use in legal search is summarized in the following chart. As you can see, steps four, five and six iterate. These are the steps where human-computer interactions take place.
predictive_coding_3.0

My methods place heavy reliance on these steps and on human-computer interaction, which I call a Hybrid process. Like Maura Grossman and Gordon Cormack, I rely heavily on high-ranking documents in this Hybrid process. The primary difference in our methods is that I do not begin to place a heavy reliance on high-ranking documents until after completing several rounds of other training methods. I call this four-cylinder multimodal training. This is all part of the sixth step in the 8-step workflow chart above. The four cylinders, or search engines, are: (1) high-ranking, (2) mid-level ranking or uncertain, (3) random, and (4) multimodal (including all types of search, such as keyword) directed by humans.

Analogous Application of Similar Mandelbrot Formula For Purposes of Expressing the Importance of the Creative Human Component in Hybrid 

4-5-6-only_predictive_coding_3.0

Recall Mandelbrot's formula: z ⇔ z² + c, which is the same as z ⇔ z² + (a + bi). I have something like that going on in my steps four, five and six. If you plugged the numbers of the steps into the Mandelbrot formula it would read something like this: 4 ⇔ 4² + (5 + 6i). The fourth step is the key AI Predictive Ranking step, where the algorithm ranks the probable relevance of all documents. The fourth step of computer ranking is the whole point of the formula, so I will call AI Ranking 'z'; it represents the left side of the formula. The fifth step is where humans read documents to determine relevance; let's call that 'r'. The sixth step is where humans train the computer, 't'. This is the Hybrid Active Training step, where the four-cylinder multimodal training methods are used to select documents to train the whole set. The documents in steps five and six, r and t, are added together for relevance feedback: (r + ti).

Thus, z ⇔ z² + c, which is the same as z ⇔ z² + (a + bi), becomes under my system z ⇔ z + (r + ti). (Note: I took out the squaring, z², because there is no such exponential function in legal search; it is all addition.) What, you might ask, is the i in my version of the formula? This is the critical part of my formula, just as it is in Mandelbrot's. The imaginary number, i, in my version represents the creativity of the human conducting the training.
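Written in the same sequence notation used earlier (my restatement of the analogy, not Mandelbrot's mathematics), each round n of review adds the attorneys' relevance calls and their creative training choices to the machine's prior ranking:

```latex
z_{n+1} = z_n + (r_n + t_n i), \qquad z_0 = 0
```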

The Hybrid Active Training step is not fully automated in my system. I do not simply use the highest-ranking documents to train, especially in the early rounds of training, as do some others. I use a variety of methods in my discretion, especially multimodal search methods such as keywords, concept search, and the like. In text retrieval science this use of human discretion, human creativity and judgment, is called an ad hoc search. It contrasts with fully automated search, where the text retrieval experts try to eliminate the human element. See Mr EDR for more detail on the 2016 TREC Total Recall Track, which had both ad hoc and fully automated sections.

My work with legal search engines, especially predictive coding, has shown that new technologies do not work with the old methods and processes, such as linear review or keywords alone. New processes are required that employ new ways of thinking: new methods that link creative human judgments (i) with the computer's amazing abilities at text reading speed, consistency, analysis, learning and ranking (z).

A rather Fat Cat.

My latest processes, Predictive Coding 3.0, are variations of Continuous Active Training (CAT), where steps four, five and six iterate until the project is concluded. Grossman & Cormack call this Continuous Active Learning, or CAL, and they claim trademark rights to CAL. I respect their right to do so (no doubt they grow weary of vendor rip-offs) and will try to avoid the acronym henceforth. My use of the acronym CAT essentially takes the view of the other side, the human side that trains, not the machine side that learns. In both Continuous Active Learning and CAT the machine keeps learning with every document that a human codes. Continuous Active Learning or Training makes the linear seed-set method obsolete, along with the control set and random training documents. See Losey, Predictive Coding 3.0.

In my typical implementation of Continuous Active Training I do not automatically include every coded document as a training document. This is the sixth step, training ('t' in the prior formula). Instead of automatically training on every document that has been coded relevant or irrelevant, I select the particular documents I decide to use for training. This, in addition to the multimodal search in step six, Hybrid Active, is another way in which the equivalent of Imaginary Numbers comes into my formula, the uniquely human element (ti). I typically use almost every relevant document coded in step five, the 'r' in the formula, as a training document, but not all. z ⇔ z + (r + ti)

I exercise my human judgment and experience to withhold certain training documents. (Note: I never withhold hot trainers, meaning highly relevant documents.) I do this if my experience (I am tempted to say my imagination) suggests that including them as training documents will likely slow down or confuse the algorithm, even if only temporarily. I have found that this improves efficiency and effectiveness. It is one of the techniques I have used to win document review contests.

robot-friend

This kind of intimate machine communication is possible because I carefully observe the impact of each set of training documents on the classifying algorithm, and carry over lessons – iterate – from one project to the next. I call this keeping a human in the loop and the attorney in charge of relevance scope adjudications. See Losey, Why the 'Google Car' Has No Place in Legal Search. We humans provide experienced observation, new feedback, different approaches, empathy, play and emotion. We also add a whole lot of other things too. The AI-Robot is the Knowledge fountain. We are the Wisdom fountain. That is why we should strive to progress into and through the Knowledge stage as soon as possible. We will thrive in the end-goal Wisdom state.

Application of Chaos Theory to Information→Knowledge→Wisdom

mininformation_arrows

The first, Information stage of the post-computer society in which we live is obviously chaotic. It is like the disconnected numbers that lie completely outside of the Mandelbrot set. It is pure information with only haphazard meaning. It is often just misinformation. Just exponential. There is an overwhelming deluge of such raw information, raw data, that spirals off into an infinity of dead ends. It leads nowhere and is disconnected. The information is useless. You may be informed, but to no end. That is modern life in the post-PC era.

The next stage of society we seek, a Knowledge-based culture, is geometrically similar to the large black blobs that unite most of the figure. This is the set of numbers that provides all the connectivity in the Mandelbrot set. Analogously, this will be a time when many loose ends will be discarded, false theories abandoned, and consensus will arise.

In the next stage we will not only be informed, we will be knowledgeable. The information will all be processed. The future Knowledge Society will be static, responsible, serious and well fed. People will be brought together by common knowledge. There will be large-scale agreements on most subjects. A tremendous amount of diversity will likely be lost.

After a while a knowledgeable world will become boring. Ask any professor or academic. The danger of the next stage will be stagnation, complacency, self-satisfaction. The smug complacency of a know-it-all world. This may be just as dangerous as the pure-chaos Information world in which we now live.

If society is to continue to evolve after that, we will need to move beyond mere Knowledge. We will need to challenge ourselves to attain new, creative applications of Knowledge. We will need to move beyond Knowledge into Wisdom.

benoit-mandelbrot-seahorse-valley

I am inclined to think that if we ever do progress to a Wisdom-based society, it will be a place and time much like the unpredictable fractal edges of the Mandelbrot: stable to a point, but ultimately unpredictable, constantly changing, evolving. The basic patterns of our truth will remain the same, but they will constantly evolve and be refined. The deeper we dig, the more complex and beautiful it will be. The dry sameness of a Knowledge-based world will be replaced by an ever-changing flow, by more and more diversity and individuality. Our social cohesion will arise from recursivity and similarity, not sameness and conformity. A Wisdom-based society will be filled with fractal beauty. It will live ever zigzagging between the edge of the known and the unknown. It will also necessarily have to be a time when people learn to get along together and share in prosperity and health, both physical and mental. It will be a time when people are accustomed to ambiguities and comfortable with them.

In Wisdom World knowledge itself will be plentiful, but will be held very lightly. It will be subject to constant reevaluation. Living in Wisdom will be like living on the rough edge of the Mandelbrot. It will be a culture that knows infinity firsthand. An open, peaceful, ecumenical culture that knows everything and nothing at the same time. A culture where most of the people, or at least a strong minority, have attained a certain level of personal Wisdom.

Conclusion

Back to our own times, in which we are just now discovering what machine learning can do, we are just beginning to pattern our investigations, our search for truth, in the Law and elsewhere, on new information gleaned from the Chaos theories. Active machine learning, Predictive Coding, is a natural outgrowth of Chaos Theory and the Mandelbrot Set. The insights of hidden fractal order that can only be seen by repetitive computer processes are prevalent in computer-based culture. These iterative, computer-assisted processes have been the driving force behind thousands of fact investigations that I have conducted since 1980.

My reliance on computers in legal investigations at first increased slowly, but steadily. Then from about 2006 to 2013 the increase accelerated, peaking in late 2013. The shift is beginning to level off. We are still heavily dependent on computers, but now we understand that human methods are just as important as the software. Software is limited in its capacities without the human additive, especially in legal search. Hybrid, Man and Machine, that is the solution. But remember that the focus should be on us, human lawyers and search experts. The AIs we are creating and training should be used to Augment and Enhance our abilities, not replace them. They should complement and complete us.

butterfly_effect

The converse realization of Chaos Theory, that disorder underlies all apparent order, that if you look closely enough you will find it, also informs our truth-seeking investigatory work. There are no smooth edges. It is all rough. If you look closely enough, the border of any coastline is infinite.

The same is true of the complexity of any investigation. As every experienced lawyer knows, there is no black and white, no straight line. It always depends on so many things. Complexity and ambiguity are everywhere. There is always a mess, always rough edges. That is what makes the pursuit of truth so interesting. Just when you think you have it, the turbulent echo of another butterfly’s wings knock you about.

The various zigs and zags of e-discovery, and other investigative, truth-seeking activities, are what make them fascinating. Each case is different, unique, yet the same patterns are seen again and again with recursive similarity. Often you begin a search only to have it quickly burn out. No problem, try again. Go back to square one, back to zero, and try another complex number, another clue. Pursue a new idea, a new connection. You chase down all reasonable leads, understanding that many of them will lead nowhere. Even failed searches rule out negatives and so help in the investigation. Lawyers often try to prove a negative.

The fractal story that emerges from Hybrid Multimodal search is often unexpected. As the search matures you see a bigger story, a previously hidden truth. A continuity emerges that connects previously unrelated facts. You literally connect the dots. The unknown complex numbers – (a + bi) – the ones that do not spiral off into the infinitely large or small, do in fact touch each other when you look closely enough at the spaces.

z ⇔ z² + (a + bi)

Sherlock

I am no Sherlock, but I know how to find ESI using computer processes. It requires an iterative sorting process, a hybrid multimodal process, using the latest computers and software. This process allows you to harness the infinite patience, analytics and speed of a machine to enhance your own intelligence, to augment your own abilities. You let the computer do the boring bits, the drudgery, while you do the creative parts.

The strength comes from the hybrid synergy. It comes from exploring the rough edges of what you think you know about the evidence. It does not come from linear review, nor simple keyword cause-effect. Evidence is always complex, always derived from chaotic systems. A full multimodal selection of search tools is needed to find this kind of dark data.

The truth is out there, but sometimes you have to look very carefully to find it. You have to dig deep and keep on looking to find the missing pieces, to move from Information → Knowledge → Wisdom.


Mandelbrot_zoom


blue zoom Mandelbrot fractal animation of looking deeper into the details
