More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new blogs to the AI-Ethics website. The first is a report on the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal for doing that: hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of base consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens to try to understand the other side. If they hear at all, it is just to analyze and respond, to strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side's position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google's head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google's AI Boss Blasts Musk's Scare Tactics on Machine Takeover (Bloomberg 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They will, and already have, just exacerbated the problem. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turn-out we could easily have dozens of breakout sessions and professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some levels of beginning consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move on to agreement and action.


Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute is from none other than Elon Musk. The 2017 conference was held in Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months as they are the most detailed list of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to press as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma alright and are ready to go, with or without Musk's millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page in AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December, 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __, 189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering Professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield's work here in the coming months. He is the rare engineer who also understands the law.


Another TAR Course Update and a Mea Culpa for the Negative Consequences of 'Da Silva Moore'

June 4, 2017

We lengthened the TAR Course again by adding a video focusing on the three iterated steps in the eight-step workflow of predictive coding. Those are steps four, five and six: Training Select, AI Document Ranking, and Multimodal Review. Here is the new video introducing these steps. It is divided into two parts.

This video was added to the thirteenth class of the TAR Course. It has sixteen classes altogether, which we continue to update and announce on this blog. There were also multiple revisions to the text in this class.

Unintended Negative Consequences of Da Silva Moore

Predictive coding methods have come a long way since Judge Peck first approved predictive coding in our Da Silva Moore case. The method Brett Anders and I used back then, including disclosure of irrelevant documents in the seed set, was primarily derived from the vendor whose software we used, Recommind, and from Judge Peck himself. We had a good intellectual understanding, but it was the first use for all of us, except the vendor. I had never done a predictive coding review before, nor, for that matter, had Judge Peck. As far as I know Judge Peck still has not ever actually used predictive coding software to do document review, although you would be hard pressed to find anyone else in the world with a better intellectual grasp of the issues.

I call the methods we used in Da Silva Moore Predictive Coding 1.0. See: Predictive Coding 3.0 (October 2015) (explaining the history of predictive coding methods). Now, more than five years later, my team is on version 4.0. That is what we teach in the TAR Course. What surprises me is that the rest of the profession is still stuck in our first method, our first ideas of how to best use the awesome power of active machine learning.

This failure to move on past the Predictive Coding 1.0 methods of Da Silva Moore is, I suspect, one of the major reasons that predictive coding has never really caught on. In fact, the most successful document review software developers since 2012 have ignored predictive coding altogether.

Mea Culpa

Looking back now at the 1.0 methods we used in Da Silva I cannot help but cringe. It is truly unfortunate that the rest of the legal profession still uses these methods. The free TAR Course is my attempt to make amends, to help the profession move on from the old methods. Mea Culpa.

In my presentation in Manhattan last month I humorously quipped that my claim to fame, Da Silva Moore, was also my claim to shame. We never intended for the methods in Da Silva Moore to be the last word. It was the first word, writ large, to be sure, but in pencil, not stone. It was like a billboard that was supposed to change, but never did. Who knew what we did back in 2012 would have such unintended negative consequences?

In Da Silva Moore we all considered the method of usage of machine learning that we came up with as something of an experiment. That is what happens when you are the first at anything. We assumed that the methods we came up with would quickly mature and evolve in other cases. They certainly did for us. Yet, the profession has mostly been silent about methods since the first version 1.0 was explained. (I could not take part in these early explanations by other “experts” as the case was ongoing and I was necessarily silenced from all public comment about it.) From what I have been told by a variety of sources many, perhaps even most attorneys and vendors are using the same methods that we used back in 2012. No wonder predictive coding has not caught on like it should. Again, sorry about that.

Why the Silence?

Still, it is hardly all my fault. I have been shouting about methods ever since 2012, even if I was muzzled from talking about Da Silva Moore. Why is no one else talking about the evolution of predictive coding methods? Why is mine the only TAR Course?

There is some discussion of methods going on, to be sure, but most of it is rehashed, or so high-level and intellectual as to be superficial and worthless. The discussions and analysis do not really go into the nitty-gritty of what to do. Why are we not talking about the subtleties of the "Stop decision"? About the ins and outs of document training selection? About the respective merits of CAL versus IST? I would welcome dialogue on this with other practicing attorneys or vendor consultants. Instead, all I hear is silence and old issues.

The biggest topic still seems to be the old one of whether to filter documents with keywords before beginning machine training. That is a big, no duh, don't do it, unless lack of money or some other circumstance forces you to, or unless the filtering is incidental and minor, to cull out obviously irrelevant documents. See, e.g.: Stephanie Serhan, Calling an End to Culling: Predictive Coding and the New Federal Rules of Civil Procedure, 23 Rich. J.L. & Tech. 5 (2016). Referring to the 2015 Rule Amendments, Serhan, a law student, concludes:

Considering these amendments, predictive coding should be applied at the outset on the entire universe of documents in a case. The reason is that it is far more accurate, and is not more costly or time-consuming, especially when the parties collaborate at the outset.

Also see, e.g., William Webber's analysis of the Biomet case, where this kind of keyword filtering was used before predictive coding began: What is the maximum recall in re Biomet?, Evaluating e-Discovery (4/24/13). Webber, an information scientist, showed back in 2013 that when keyword filtering was used in the Biomet case, it filtered out over 40% of the relevant documents. This doomed the second-filter predictive coding review to a maximum possible recall of 60%, even if it was perfect, meaning it would otherwise have attained 100% recall, which (almost) never happens. I have never seen a cogent rebuttal of this analysis; again, aside from proportionality and cost arguments.
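The arithmetic behind that recall ceiling is worth spelling out. Here is a minimal worked sketch; the 40% figure is Webber's, while the collection size is a hypothetical round number:

```python
# Illustrative check of Webber's point: if keyword culling discards 40% of the
# relevant documents before predictive coding begins, then even a perfect
# second-stage review can never exceed 60% recall.
relevant_total = 10_000            # hypothetical relevant docs in the collection
lost_to_culling = 0.40             # share of relevant docs the keyword filter missed
reachable = relevant_total * (1 - lost_to_culling)
max_recall = reachable / relevant_total
print(f"maximum achievable recall: {max_recall:.0%}")  # 60%
```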

There was discussion for a while on another important, yet sort of no-brainer issue, whether to keep on machine training or not, which Grossman and Cormack called Continuous Active Learning (CAL). We did not do that in Da Silva Moore, but we were using Predictive Coding 1.0 as explained by our vendor. We have known better than that now for years. In fact, later in 2012, during my two public ENRON document review experiments with predictive coding, I did not follow the two-step procedure of version 1.0. Instead, I just kept on training until I could not find any more relevant documents. A Modest Contribution to the Science of Search: Report and Analysis of Inconsistent Classifications in Two Predictive Coding Reviews of 699,082 Enron Documents (Part One); Comparative Efficacy of Two Predictive Coding Reviews of 699,082 Enron Documents (Part Two); Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron (in PDF form and the blog introducing this 82-page narrative, with a second blog regarding an update); Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology (a five-part written and video series comparing two different kinds of predictive coding search methods).

Of course you keep training. I have never heard any viable argument to the contrary. Train then review, which was the protocol in Da Silva Moore, was the wrong way to do it. Clear and simple. The right way to do machine training is to keep training until you are done with the review. This is the main thing that separates Predictive Coding 1.0 from 2.0. See: Predictive Coding 3.0 (October 2015). I switched to version 2.0 right after Da Silva Moore in late 2012 and started using continuous training on my own initiative. It seemed obvious once I had some experience under my belt. Still, I do credit Maura Grossman and Gordon Cormack with the terminology and scientific proof of the effectiveness of CAL, a term which they have now trademarked for some reason. They have made important contributions to methods and are tireless educators of the profession. But where are the other voices? Where are the lawyers?
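For readers who prefer to see the difference as code, here is a minimal sketch of the two protocols. It is illustrative only, not any vendor's implementation; the `fit` callback, which stands in for training a classifier and returning a scoring function, is a hypothetical placeholder:

```python
# Version 1.0: train once on a seed set, then review in ranked order with a
# frozen model. Continuous training (2.0+/CAL-style): every batch of human
# judgments feeds back into the model until the review itself is done.

def review_v1(seed, collection, fit):
    model = fit(seed)                              # single training phase...
    return sorted(collection, key=model, reverse=True)  # ...model never improves

def review_continuous(seed, collection, fit, batch_size=100):
    labeled, remaining = list(seed), list(collection)
    while remaining:
        model = fit(labeled)                       # retrain on all judgments so far
        remaining.sort(key=model, reverse=True)
        batch = remaining[:batch_size]             # review current top-ranked docs
        remaining = remaining[batch_size:]
        labeled += batch                           # judgments improve the next round
    return labeled
```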

The Grossman and Cormack efforts are scientific and professorial. To me this is just work. This is what I do as a lawyer to make a living. This is what I do to help other lawyers find the key documents they need in a case. So I necessarily focus on the details of how to actually do active machine learning. I focus on the methods, the work-flow. Aside from Professors Cormack and Grossman, and myself, almost no one else is talking about predictive coding methods. Lawyers mostly just do what the vendors recommend, like I did back in Da Silva Moore days. Yet almost all of the vendors are stagnant. (The new KrolLDiscovery and Catalyst are two exceptions, and even the former still has some promised software revisions to make.)

From what I have seen of the secret sauce that leaks out in predictive coding software demos of most vendors, they are stuck in the old version 1.0 methods. They know nothing, for instance, of the nuances of double-loop learning taught in the TAR Course. The vendors are instead still using the archaic methods that I thought were good back in 2012. I call these methods Predictive Coding 1.0 and 2.0. See: Predictive Coding 3.0 (October 2015).

In addition to continuous training, or not, most of those methods still use nonsensical random control sets that ignore concept drift, a fact of life in every large review project. Id. Moreover, the statistical analysis in 1.0 and 2.0 that they use for recall does not survive close scrutiny. Most vendors routinely ignore the impact of confidence intervals on recall ranges, and the problems of low-prevalence datasets. They do not even mention the binomial calculations designed to deal with low prevalence. Id. Also see: ZeroErrorNumerics.com.
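To make the low-prevalence point concrete, here is a minimal sketch of the kind of binomial (Clopper-Pearson) interval calculation we have in mind, assuming scipy is available; the sample numbers are hypothetical:

```python
# Exact (Clopper-Pearson) binomial confidence interval for a sampled proportion.
from scipy.stats import beta

def binomial_ci(successes, trials, confidence=0.95):
    alpha = 1 - confidence
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

# Hypothetical low-prevalence sample: 15 relevant documents found in a random
# sample of 1,500 from a large collection (a 1% point estimate of prevalence).
lo, hi = binomial_ci(15, 1500)
print(f"point estimate: {15/1500:.1%}, 95% CI: {lo:.2%} to {hi:.2%}")
# The interval is wide relative to the estimate, and any recall calculation
# built on top of it inherits that uncertainty.
```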

Conclusion

The e-Discovery Team will keep on writing and teaching, satisfied that at least some of the other leaders in the field are doing essentially the same thing. You know who you are. We hope that someday others will experiment with the newer methods. The purpose of the TAR Course is to provide the information and knowledge needed to try these methods. If you have tried predictive coding before, and did not like it, we hear you. We agree. I would not like it either if I still had to use the antiquated methods of Da Silva Moore.

We try to make amends for the unintended consequences of Da Silva Moore by offering this TAR Course. Predictive coding really is breakthrough technology, but only if used correctly. Come back and give it another try, but this time use the latest methods of Predictive Coding 4.0.

Machine learning is based on science, but the actual operation is an art and craft. So few writers in the industry seem to understand that. Perhaps that is because they are not hands-on. They do not step in. (Stepping-In is discussed in Davenport and Kirby, Only Humans Need Apply, and by Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and A Changing World: Ralph Losey on "Stepping In" for e-Discovery. Also see: Losey, Lawyers' Job Security in a Near Future World of AI, Part Two.) Even most vendor experts have never actually done a document review project of their own. And the software engineers, well, forget about it. They know very little about the law (and what they think they know is often wrong) and very little about what really goes on in a document review project.

Knowledge of the best methods for machine learning, for AI, does not come from thinking and analysis. It comes from doing, from practice, from trial and error. This is something all lawyers understand because most difficult tasks in the profession are like that.

The legal profession needs to stop taking legal advice from vendors on how to do AI-enhanced document review. Vendors are not supposed to be giving legal advice anyway. They should stick to what they do best, creating software, and leave it to lawyers to determine how to best use the tools they make.

My message to lawyers is to get on board the TAR train. Even though Da Silva Moore blew the train whistle long ago, the train is still in the station. The tracks ahead are clear of all legal obstacles. The hype and easy money phase has passed. The AI review train is about to get moving in earnest. Try out predictive coding, but by all means use the latest methods. Take the TAR Course on Predictive Coding 4.0 and insist that your vendor adjust their software so you can do it that way.


Predictive Coding 4.0 – Nine Key Points of Legal Document Review and an Updated Statement of Our Workflow – Part Two

September 18, 2016

In Part One we announced the latest enhancements to our document review method, the upgrade to Predictive Coding 4.0. We explained the background that led to this upgrade – the TREC research and hundreds of projects we have done since our last upgrade a year ago. Millions have been spent to develop the software and methods we now use for Technology Assisted Review (TAR). As a result our TAR methods are more effective and simpler than ever.

The nine insights we will share are based on our experience and research. Some of our insights may be complicated, especially our lead insight on Active Machine Learning covered in this Part Two with our new description of IST, Intelligently Spaced Training. We consider IST the smart, human-empowering alternative to CAL. If I am able to write these insights up here correctly, the obviousness of them should come through. They are all simple in essence. The insights and methods of Predictive Coding 4.0 document review are partially summarized in the chart below (which you are free to reproduce without edit).

[Chart: summary of the nine insights and methods of Predictive Coding 4.0]

1st of the Nine Insights: Active Machine Learning

Our method is Multimodal in that it uses all kinds of document search tools. Although we emphasize active machine learning, we do not rely on that method alone. Our method is also Hybrid in that we use both machine judgments and human (lawyer) judgments. Moreover, in our method the lawyer is always in charge. We may take our hand off the wheel and let the machine drive for a while, but under our versions of Predictive Coding, we watch carefully. We remain ready to take over at a moment's notice. We do not rely on one brain to the exclusion of another. See, e.g., Why the 'Google Car' Has No Place in Legal Search (cautioning against over-reliance on fully automated methods of active machine learning). Of course the converse is also true; we never rely on our human brain alone. It has too many limitations. We enhance our brain with predictive coding algorithms. We add to our own natural intelligence with artificial intelligence. The perfect balance between the two, the Balanced Hybrid, is another of the insights that we will discuss later.

Active Machine Learning is Predictive Coding – Passive Analytic Methods Are Not

Even though our methods are multimodal and hybrid, the primary search method we rely on is Active Machine Learning. The overall name of our method is, after all, Predictive Coding. And, as any information retrieval expert will tell you, predictive coding means active machine learning. That is the only true AI method. The passive type of machine learning that some vendors use under the name Analytics is NOT the same thing as Predictive Coding. These passive Analytics have been around for years and are far less powerful than active machine learning.

These search methods, which used to be called Concept Search, were a big improvement on relying on keyword search alone. I remember talking about concept search techniques in reverent terms when I did my first Legal Search webinar in 2006 with Jason Baron and Professor Doug Oard. That same year, Kroll Ontrack bought one of the original developers and patent holders of concept search, Engenium. For a short time in 2006 and 2007 Kroll Ontrack was the only vendor to have these concept search tools. The founder of Engenium, David Chaplin, came with the purchase, and became Kroll Ontrack's VP of Advanced Search Technologies for three years. (Here is an interesting interview of Chaplin that discusses what he and Kroll Ontrack were doing with advanced search analytic-type tools when he left in 2009.)

But search was hot, and soon boutique search firms like Clearwell, Cataphora, Content Analyst (the company recently purchased by popular newcomer kCura), and other e-discovery vendors developed their own concept search tools. Again, they were all using passive machine learning. It was a big deal ten years ago. For a good description of these admittedly powerful, albeit now dated search tools, see the concise, well-written article by D4's Tom Groom, The Three Groups of Discovery Analytics and When to Apply Them.

Search experts and information scientists know that active machine learning, also called supervised machine learning, was the next big step in search after concept searches, which are, in programming language, also known as passive or unsupervised machine learning. I am getting out of my area of expertise here, and so am unable to go into any details, other than to present the below instructional chart by Hackbright Academy that sets forth key differences between supervised learning (predictive coding) and unsupervised learning (analytics, aka concept search).

[Chart by Hackbright Academy: supervised versus unsupervised machine learning algorithms]

What I do know is that the bonafide active machine learning software in the market today all use either a form of Logistic Regression, including Kroll Ontrack, or SVM, which means Support Vector Machine.
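To make that concrete, here is a minimal sketch of the core idea: ranking documents by predicted probability of relevance with a logistic regression classifier. This is illustrative only, not any vendor's implementation; the documents and labels are hypothetical, and it assumes scikit-learn is installed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini training set: 1 = coded relevant, 0 = coded irrelevant.
train_docs = ["draft merger agreement attached", "fantasy football standings"]
train_labels = [1, 0]
unreviewed = ["revised merger terms for review", "lunch on friday?"]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_docs), train_labels)

# Rank the unreviewed documents by probability of relevance, highest first.
probs = clf.predict_proba(vec.transform(unreviewed))[:, 1]
for p, doc in sorted(zip(probs, unreviewed), reverse=True):
    print(f"{p:.2f}  {doc}")
```

Swapping in an SVM is a small change in scikit-learn (for example, sklearn.svm.SVC with probability=True), which is part of why these two families dominate the bona fide offerings.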

e-Discovery Vendors Have Been Market Leaders in Active Machine Learning Software

After Kroll Ontrack absorbed the Engenium purchase, and its founder Chaplin completed his contract with Kroll Ontrack and moved on, Kroll Ontrack focused their efforts on the next big step, active machine learning, aka predictive coding. They have always been that kind of cutting-edge company, especially when it comes to search, which is one reason they are one of my personal favorites. A few of the other then-leading e-discovery vendors did too, including especially Recommind and the Israeli-based search company, Equivio. Do not get me wrong, the concept search methods, now being sold under the name of TAR Analytics, are powerful search tools. They are a part of our multimodal tool-kit and should be part of yours. But they are not predictive coding. They do not rank documents according to your external input, your supervision. They do not rely on human feedback. They group documents according to passive analytics of the data. It is automatic, unsupervised. These passive analytic algorithms can be good tools for efficient document review, but they are not active machine learning and are nowhere near as powerful.


Search Software Ghosts

Many of the software companies that made the multi-million dollar investments necessary to go to the next step and build document review platforms with active machine learning algorithms have since been bought out by big-tech and repurposed out of the e-discovery market. They are the ghosts of legal search past. Clearwell was purchased by Symantec and has since disappeared. Autonomy was purchased by Hewlett Packard and has since disappeared. Equivio was purchased by Microsoft and has since disappeared. See e-Discovery Industry Reaction to Microsoft’s Offer to Purchase Equivio for $200 Million – Part One and Part Two. Recommind was recently purchased by OpenText and, although it is too early to tell for sure, may also soon disappear from e-Discovery.

Slightly outside of this pattern, but with the same ghosting result, e-discovery search company Cataphora was bought by Ernst & Young, and has since disappeared. The year after the acquisition, Ernst & Young added predictive coding features from Cataphora to its internal discovery services. At this point, all of the Big Four accounting firms claim to have their own proprietary software with predictive coding. Along the same lines, at about the time of the Cataphora buy-out, consulting giant FTI purchased another e-discovery document review company, Ringtail Solutions (known for its petri-dish-like visualizations). Although not exactly ghosted from the e-discovery world after the purchase, Ringtail has been absorbed into the giant FTI.

Outside of consulting/accountancy, in the general service e-discovery industry for lawyers, there are, at this point (late 2016), just a few document review platforms left that have real active machine learning. Some of the most popular ones left behind certainly do not. They only have passive learning analytics. Again, those are good features, but they are not active machine learning, one of the nine basic insights of Predictive Coding 4.0 and a key component of the e-Discovery Team's document review capabilities.


The power of the advanced, active learning technologies that have been developed for e-discovery is the reason for all of these acquisitions by big-tech and the Big Four or Five. It is not just about wild overspending, although that may well have been the case for Hewlett Packard's payment of $10.3 billion to buy Autonomy. The ability to do AI-enhanced document search and review is a very valuable skill, one that will only increase in value as our data volumes continue to explode. The tools used for such document review are also quite valuable, both inside the legal profession and, as the ghostings prove, well beyond into big business. See e-Discovery Industry Reaction to Microsoft's Offer to Purchase Equivio for $200 Million – Part Two.

The indisputable fact that so many big-tech companies have bought up the e-discovery companies with active machine learning software should tell you a lot. It is a testimony to the advanced technologies that the e-discovery industry has spawned. When it comes to advanced search and document retrieval, we in the e-discovery world are the best in the world, my friends, primarily because we have (or can easily get) the best tools. Smile.


Search is king of our modern Information Age culture. See Information → Knowledge → Wisdom: Progression of Society in the Age of Computers. The search for evidence to peacefully resolve disputes is, in my most biased opinion, the most important search of all. It sure beats selling sugar water. Without truth and justice all of the petty business quests for fame and fortune would crumble into anarchy, or worse, dictatorship.

With this background it is easy to understand why some of the e-discovery vendors left standing are not being completely candid about the capabilities of their document review software. (It is called puffing and is not illegal.) The industry is unregulated and, alas, most of our expert commentators are paid by vendors. They are not independent. As a result, many of the lawyers who have tried what they thought was predictive coding, and had disappointing results, have never really tried predictive coding at all. They have just used slightly updated concept search.

Alternatively, some of the disappointed lawyers may have used one of the many now-ghosted vendor tools. They were all early version 1.0 type tools. For example, Clearwell's active machine learning was only on the market for a few months with this feature before they were bought and ghosted by Symantec. (I think Jason Baron and I were the first people to see an almost completed demo of their product at a breakfast meeting a few months before it was released.) Recommind's predictive coding software was well-developed at the time of their sell-out, but not its methods of use. Most of its customers can testify as to how difficult it is to operate. That is one reason that OpenText was able to buy them so cheaply, which, we now see, was part of their larger acquisition plan culminating in the purchase of Dell's EMC document management software.

All software still using the early methods, what we call version 1.0 and 2.0 methods based on control sets, is cumbersome and hard to operate, not just Recommind's system. I explained this in my article last year, Predictive Coding 3.0. I also mentioned in that article that some vendors with predictive coding would only let you use predictive coding for search. It was, in effect, mono-modal. That is also a mistake. All types of search must be used – multimodal – for the predictive coding type of search to work efficiently and effectively. More on that point later.

Maura Grossman Also Blows the Whistle on Ineffective “TAR tools”


Maura Grossman, who is now an independent expert in this field, made many of these same points in a recent interview with Artificial Lawyer, a periodical dedicated to AI and the Law. AI and the Future of E-Discovery: AL Interview with Maura Grossman (Sept. 16, 2016). When asked about the viability of the “over 200 businesses offering e-discovery services” Maura said, among other things:

In the long run, I am not sure that the market can support so many e-discovery providers …

… many vendors and service providers were quick to label their existing software solutions as “TAR,” without providing any evidence that they were effective or efficient. Many overpromised, overcharged, and underdelivered. Sadly, the net result was a hype cycle with its peak of inflated expectations and its trough of disillusionment. E-discovery is still far too inefficient and costly, either because ineffective so-called “TAR tools” are being used, or because, having observed the ineffectiveness of these tools, consumers have reverted back to the stone-age methods of keyword culling and manual review.

Now that Maura is no longer with the conservative law firm of Wachtell Lipton, she has more freedom to speak her mind about caveman lawyers. It is refreshing and, as you can see, echoes much of what I have been saying. But wait, there is still more that you need to hear from the interview of new Professor Grossman:

It is difficult to know how often TAR is used given confusion over what “TAR” is (and is not), and inconsistencies in the results of published surveys. As I noted earlier, “Predictive Coding”—a term which actually pre-dates TAR—and TAR itself have been oversold. Many of the commercial offerings are nowhere near state of the art; with the unfortunate consequence that consumers have generalised their poor experiences (e.g., excessive complexity, poor effectiveness and efficiency, high cost) to all forms of TAR. In my opinion, these disappointing experiences, among other things, have impeded the adoption of this technology for e-discovery. …

Not all products with a “TAR” label are equally effective or efficient. There is no Consumer Reports or Underwriters Laboratories (“UL”) that evaluates TAR systems. Users should not assume that a so-called “market leading” vendor’s tool will necessarily be satisfactory, and if they try one TAR tool and find it to be unsatisfactory, they should keep evaluating tools until they find one that works well. To evaluate a tool, users can try it on a dataset that they have previously reviewed, or on a public dataset that has previously been labelled; for example, one of the datasets prepared for the TREC 2015 or 2016 Total Recall tracks. …

She was then asked another popular question by the Artificial Lawyer interviewer (name never identified; the publication is apparently based in the UK):

As is often the case, many lawyers are fearful about any new technology that they don’t understand. There has already been some debate in the UK about the ‘black box’ effect, i.e., barristers not knowing how their predictive coding process actually worked. But does it really matter if a lawyer can’t understand how algorithms work?

The following is an excerpt of Maura's answer. I suggest you consult the full article for a complete picture. AI and the Future of E-Discovery: AL Interview with Maura Grossman (Sept. 16, 2016). I am not sure whether she put on her Google Glasses to answer (probably not), but anyway, I rather like it.

Many TAR offerings have a long way to go in achieving predictability, reliability, and comprehensibility. But, the truth that many attorneys fail to acknowledge is that so do most non-TAR offerings, including the brains of the little black boxes we call contract attorneys or junior associates. It is really hard to predict how any reviewer will code a document, or whether a keyword search will do an effective job of finding substantially all relevant documents. But we are familiar with these older approaches (and we think we understand their mechanisms), so we tend to be lulled into overlooking their limitations.

The brains of the little black boxes we call contract attorneys or junior associates. So true. We will go into that more thoroughly in our discussion of the GIGO & QC insight.

Recent Team Insights Into Active Machine Learning

To summarize what I have said so far, in the field of legal search, only active machine learning:

  • effectively enhances human intelligence with artificial intelligence;
  • qualifies for the term Predictive Coding.

I want to close this discussion of active machine learning with one more insight. This one is slightly technical, and again, if I explain it correctly, should seem perfectly obvious. It is certainly not new, and most search experts will already know this to some degree. Still, even for them, there may be some nuances to this insight that they have not thought of. It can be summarized as follows: active machine learning should have a double feedback loop with active monitoring by the attorney trainers.


Active machine learning should create feedback for both the algorithm (the data classified) AND the human managing the training. Both should learn, not just the robot. They should, so to speak, be friends. They should get to know each other.

Many predictive coding methods that I have read about, or heard described, including how I first used active machine learning, did not sufficiently include the human trainer in the feedback loop. They were static types of training using a single feedback loop. These methods are, so to speak, very stand-offish, aloof. Under these methods the attorney trainer does not even try to understand what is going on with the robot. The information flow was one-way, from attorney to machine.

As I grew more experienced with the EDR software I started to realize that it is possible to understand, at least a little, what the black box is doing. Logistic-based AI is a foreign intelligence, but it is intelligence. After a while you start to understand it. So although I started out just using one-sided machine training, I slowly gained the ability to read how EDR was learning. I then added another dimension, another feedback loop, and a very interesting one indeed. Now I not only trained and provided feedback to the AI as to whether its predictions of relevance were correct, or not, but I also received training from the AI as to how well, or not, it was learning. That in turn led to the humorous personification of the Kroll Ontrack software that we now call Mr. EDR. See MrEDR.com. When we reached this level, machine training became a fully active, two-way process.

We now understand that to fully supervise a predictive coding process you have to have a good understanding of what is happening. How else can you supervise it? You do not have to know exactly how the engine works, but you at least need to know how fast it is going. You need a speedometer. You also need to pay attention to how the engine is operating, whether it is over-heating, needs oil or gas, etc. The same holds true for teaching humans. Their brains are indeed mysterious black boxes. You do not need to know exactly how each student's brain works in order to teach them. You find out if your teaching is getting through by asking questions.

For us supervised learning means that the human attorney has an active role in the process. A role where the attorney trainer learns by observing the trainee, the AI in creation. I want to know as much as possible, so long as it does not slow me down significantly.

In other methods of using predictive coding that we have used or seen described, the only role of the human trainer is to say yes or no as to the relevance of a document. The decision as to what documents to select for training has already been predetermined. Typically it is the highest-ranked documents, but sometimes some mid-ranked "uncertain documents" or some "random documents" are added to the mix. The attorney has no say in what documents to look at. They are all fed to him or her according to predetermined rules. These decision-making rules are set in advance and do not change. These active machine learning methods work, but they are slow and less precise, not to mention boring as hell.

The recall of these single-loop passive supervision methods may also not be as good. The jury is still out on that question. We are trying to run experiments on that now, although it can be hard to stop yawning. See an earlier experiment on this topic testing the single loop teaching method of random selection: Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology.

These mere yes or no, limited participation methods are hybrid Man-Machine methods, but, in our opinion, they are imbalanced towards the Machine. (Again, more on the question of Hybrid Balance will be covered in the next installment of this article.) This single versus dual feedback approach seems to be the basic idea behind the Double Loop Learning approach to human education depicted in the diagram below. Also see Graham Attwell, Double Loop Learning and Learning Analytics (Pontydysgu, May 4, 2016).

[Diagram: double-loop learning]

To quote Wikipedia:

The double loop learning system entails the modification of goals or decision-making rules in the light of experience. The first loop uses the goals or decision-making rules, the second loop enables their modification, hence “double-loop.” …

Double-loop learning is contrasted with “single-loop learning”: the repeated attempt at the same problem, with no variation of method and without ever questioning the goal. …

Double-loop learning is used when it is necessary to change the mental model on which a decision depends. Unlike single loops, this model includes a shift in understanding, from simple and static to broader and more dynamic, such as taking into account the changes in the surroundings and the need for expression changes in mental models.


The method of active machine learning that we use in Predictive Coding 4.0 is a type of double loop learning system. As such it is ideal for legal search, which is inherently ad hoc, where even the understanding of relevance evolves as the project develops. As Maura noted near the end of the Artificial Lawyer interview:

… e-discovery tends to be more ad hoc, in that the criteria applied are typically very different for every review effort, so each review generally begins from a nearly zero knowledge base.

The driving impetus behind our double feedback loop system is to allow training document selection to vary according to the circumstances encountered. Attorneys select documents for training and then observe how these documents impact the AI's overall ranking of the documents. Based on this information, decisions are then made by the attorney as to which documents to next submit for training. A single fixed mental model is not used, such as only submitting the ten highest-ranked documents for training.

The human stays involved and engaged and selects the next documents to add to the training based on what she sees. This makes the whole process much more interesting. For example, if I find a group of relevant spreadsheets by some other means, such as a keyword search, then, when I add these documents to the training, I observe how they impact the overall ranking of the dataset. For instance, did this training result in an increase in the relevance ranking of other spreadsheets? Was the increase nominal or major? How did it impact the ranking of other documents? For instance, were emails with a lot of numbers in them suddenly ranked much higher? Overall, was this training effective? Were the documents that moved up in rank to the top, or near the top, of probable relevance in fact relevant as predicted? What was the precision rate like for these documents? Does the AI now have a good understanding of the relevance of spreadsheets, or does it need more training on that type of document? Should we focus our search on other kinds of documents?

You see all kinds of variations on that. If the spreadsheet understanding (ranking) is good, how does it compare to its understanding (correct ranking) of Word docs or emails? Where should I next focus my multimodal searches? What documents should I next assign to my reviewers to read and make a relevancy determination? These kinds of considerations keep the search interesting, fun even. Work as play is the best kind. Typically we simply assign for attorney review the documents that have the highest ranking (which is the essence of what Grossman and Cormack call CAL), but not always. We are flexible. We, the human attorneys, are the second positive feedback loop.
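For the technically inclined, here is a minimal sketch of this double-loop idea using scikit-learn. It is illustrative only, not the EDR implementation; the six-document collection, the labels, and the selection rule are all hypothetical:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-collection; "relevance" here concerns the merger finances.
collection = [
    "q3 revenue spreadsheet attached",        # doc 0
    "draft merger agreement for review",      # doc 1
    "team lunch friday",                      # doc 2
    "updated financial model spreadsheet",    # doc 3
    "holiday party photos",                   # doc 4
    "merger financing spreadsheet",           # doc 5
]
X = TfidfVectorizer().fit_transform(collection)

labeled, labels = [0, 2], [1, 0]              # first session: one relevant, one not
for session in range(2):                      # each pass = one spaced training session
    model = LogisticRegression().fit(X[labeled], labels)
    probs = model.predict_proba(X)[:, 1]
    ranking = np.argsort(-probs)
    # Loop 1: the machine learned from our judgments. Loop 2: we inspect the new
    # ranking (did the other spreadsheets move up?) before choosing what is next.
    print(f"session {session} ranking:", [collection[i] for i in ranking])
    nxt = next(i for i in ranking if i not in labeled)   # here: top unreviewed doc
    labeled.append(int(nxt))
    labels.append(1 if "spreadsheet" in collection[nxt] else 0)  # attorney judgment
```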

We like to remain in charge of teaching the classifier, the AI. We do not just turn it over to the classifier to teach itself. Although sometimes, when we are out of ideas and are not sure what to do next, we will do exactly that. We will turn over to the computer the decision of what documents to review next. We just go with his top predictions and use those documents to train. Mr. EDR has come through for us many times when we have done that. But this is more of an exception than the rule. After all, the classifier is a tabula rasa. As Maura put it: each review generally begins from a nearly zero knowledge base. Before the training starts, it knows nothing about document relevance. The computer does not come with built-in knowledge of the law or relevance. You know what you are looking for. You know what is relevant, even if you do not know how to find it, or even whether it exists at all. The computer does not know what you are looking for, aside from what you have told it by your yes-no judgments on particular documents. But, after you teach it, it knows how to find more documents that probably have the same meaning.

By observation you can see for yourself, first hand, how your training is working, or not working. It is like a teacher talking to their students to find out what they learned from the last assigned reading materials. You may be surprised by how much, or how little, they learned. If the last approach did not work, you change the approach. That is double-loop learning. In that sense our active monitoring approach is like a continuous dialogue. You learn how and if the AI is learning. This in turn helps you to plan your next lessons. What has the student learned? Where does the AI need more help to understand the conception of relevance that you are trying to teach it?

This monitoring of the AI's learning is one of the most interesting aspects of active machine learning. It is also a great opportunity for human creativity and value. The inevitable advance of AI in the law can mean more jobs for lawyers overall, but only for those able to step up and change their methods. The lawyers able to play the second-loop game of active machine learning will have plenty of employment opportunities. See, e.g., Thomas H. Davenport, Julia Kirby, Only Humans Need Apply: Winners and Losers in the Age of Smart Machines (Harper 2016).

Going down into the weeds a little bit more, our active monitoring, dual feedback approach means that when we use Kroll Ontrack's EDR software, we adjust the settings so that new learning sessions are not created automatically. They only run when and if we click on the Initiate Session button shown in the EDR screenshot below (arrow and words were added). We do not want the training to go on continuously in the background (typically meaning at periodic intervals of every thirty minutes or so). We only want the learning sessions to occur when we say so. In that way we can know exactly what documents EDR is training on during a session. Then, when that training session is complete, we can see how the input of those documents has impacted the overall data ranking. For instance, are there now more documents in the 90% or higher probable relevance category, and if so, how many? The picture below is of a completed TREC project. The probability rankings are on the far left, with the number of documents shown in the adjacent column. Most of the documents in the 290,099-document collection of Bush email were in the 0-5% probable relevance range, which is not included in the screen shot.

[Screenshot: EDR "Initiate Session" button, with probability rankings and document counts from a completed TREC project]
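As a toy illustration of the kind of check we run after each session, the snippet below buckets a simulated probability distribution into 5% bands and counts the documents at 90%+ probable relevance. The skewed random distribution is a stand-in for a real low-prevalence ranking, not actual EDR output:

```python
import numpy as np

# Simulated probabilities for a 290,099-document collection, skewed low to mimic
# a low-prevalence dataset (most documents sit in the 0-5% band).
probs = np.random.default_rng(0).beta(0.3, 3.0, size=290_099)

counts, edges = np.histogram(probs, bins=np.arange(0, 1.05, 0.05))
print(f"docs in 0-5% band: {counts[0]:,}")
print(f"docs at 90%+ probable relevance: {counts[-2:].sum():,}")
# Comparing these band counts before and after a training session shows
# whether, and how much, the new training documents moved the ranking.
```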

This means that the e-Discovery Team's active learning is not continuous, in the sense of always training. It is instead intelligently spaced. That is an essential aspect of our Balanced Hybrid approach to electronic document review. The machine training only begins when we click on the "Initiate Session" button in EDR that the arrow points to. It is only continuous in the sense that the training continues until all human review is completed. The spaced training, in the sense of staggered in time, is itself an ongoing process until the production is completed. We call this Intelligently Spaced Training, or IST. Such ongoing training improves efficiency and precision, and also improves Hybrid human-machine communications. Thus, in our team's opinion, IST is a better process of electronic document review than training automatically without human participation, the so-called CAL approach promoted (and recently trademarked) by search experts and professors, Maura Grossman and Gordon Cormack.


Exactly how we space out the timing of training in IST is a little more difficult to describe without going into the particulars of a case. A full, detailed description would require the reader to have intimate knowledge of the EDR software. Our IST process is, however, software neutral. You can follow the IST dual feedback method of active machine learning with any document review software that has active machine learning capacities and also allows you to decide when to initiate a training session. (By the way, a training session is the same thing as a learning session, but we like to say training, not learning, as that takes the human perspective and we are pro-human!) You cannot do that if the training is literally continuous and cannot be halted while you input a new batch of relevance determined documents for training.

The details of IST, such as when to initiate a training session, and what human coded documents to select next for training, is an ad hoc process. It depends on the data itself, the issues involved in the case, the progress made, the stage of the review project and time factors. This is the kind of thing you learn by doing. It is not rocket science, but it does help keep the project interesting. Hire one of our team members to guide your next review project and you will see it in action. It is easier than it sounds. With experience Hybrid Multimodal IST becomes an intuitive process, much like riding a bicycle.

To summarize, active machine learning should be a dual feedback process with double-loop learning. The training should continue throughout a project, but it should be spaced in time so that you can actively monitor the progress, what we call IST. The software should learn from the trainer, of course, but the trainer should also learn from the software. This requires active monitoring by the teacher, who reacts to what he or she sees and adjusts the training accordingly so as to maximize recall and precision.

This is really nothing more than a common sense approach to teaching. No teacher who just mails in their lessons, and does not pay attention to the students, is ever going to be effective. The same is true for active machine learning. That’s the essence of the insight. Simple really.

Next, in Part Three, I will address the related insights of Balanced Hybrid.

To be Continued …


Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Four

August 3, 2014

This is the conclusion of my four-part blog: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One, Part Two and Part Three.

Cormack and Grossman’s Conclusions

Gordon Cormack and Maura Grossman have obviously put a tremendous amount of time and effort into this study. In their well-written conclusion they explain why they did it, as well as provide a good summary of their findings:

Because SPL can be ineffective and inefficient, particularly with the low-prevalence collections that are common in ediscovery, disappointment with such tools may lead lawyers to be reluctant to embrace the use of all TAR. Moreover, a number of myths and misconceptions about TAR appear to be closely associated with SPL; notably, that seed and training sets must be randomly selected to avoid “biasing” the learning algorithm.

This study lends no support to the proposition that seed or training sets must be random; to the contrary, keyword seeding, uncertainty sampling, and, in particular, relevance feedback – all non-random methods – improve significantly (P < 0:01) upon random sampling.

While active-learning protocols employing uncertainty sampling are clearly more effective than passive-learning protocols, they tend to focus the reviewer’s attention on marginal rather than legally significant documents. In addition, uncertainty sampling shares a fundamental weakness with passive learning: the need to define and detect when stabilization has occurred, so as to know when to stop training. In the legal context, this decision is fraught with risk, as premature stabilization could result in insufficient recall and undermine an attorney’s certification of having conducted a reasonable search under (U.S.) Federal Rule of Civil Procedure 26(g)(1)(B).

This study highlights an alternative approach – continuous active learning with relevance feedback – that demonstrates superior performance, while avoiding certain problems associated with uncertainty sampling and passive learning. CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 9.

The insights and conclusions of Cormack and Grossman are perfectly in accord with my own experience and practice with predictive coding search efforts, both in messy real-world projects and in the four controlled scientific tests I have done over the last several years (only two of which have been reported to date; the fourth is still in progress). I agree that a relevancy approach that emphasizes high-ranked documents for training is one of the most powerful search tools we now have. So too is uncertainty training (mid-ranked documents) when used judiciously, as well as keywords and a number of other methods. All the many tools we have to find both relevant and irrelevant documents for training should be used, depending on the circumstances, including even some random searches.

In my view, we should never use just one method to select documents for machine training and ignore the rest, even when it is a good method, like Cormack and Grossman have shown CAL to be. When the one method selected is the worst of all possible methods, as random search has now been shown to be, then the monomodal approach is a recipe for ineffective, over-priced review.
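As a sketch of what such multimodal selection might look like in code, consider the toy batch builder below. The mix ratios and helper inputs are hypothetical, not a prescription; the point is simply that high-ranked, uncertain, keyword-seeded and random documents can all feed the same training batch:

```python
import random
import numpy as np

def multimodal_batch(probs, keyword_hits, k=20, seed=42):
    """Mix several selection methods into one training batch of up to k docs."""
    rng = random.Random(seed)
    probs = np.asarray(probs)
    top = list(np.argsort(-probs)[: k // 2])                       # relevance feedback
    uncertain = list(np.argsort(np.abs(probs - 0.5))[: k // 4])    # uncertainty sampling
    kw = rng.sample(keyword_hits, min(len(keyword_hits), k // 8))  # keyword seeding
    rand = rng.sample(range(len(probs)), k // 8)                   # a dash of random
    return list(dict.fromkeys(top + uncertain + kw + rand))[:k]    # dedupe, cap at k
```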

Why All the Foolishness with Random Search?

As shown in Part One of this article, it is only common sense to use what you know to find training documents, and not rely on the so-called easy way of rolling dice. A random chance approach is essentially a fool's method of search. The search for evidence to do justice is too important to leave to chance. Cormack and Grossman did the legal profession a favor by taking the time to prove the obvious in their study. They showed that even very simplistic multimodal search protocols, CAL and SAL, do better at machine training than monomodal random-only selection.

Information scientists already knew this rather obvious truism: that multimodal is better, that the roulette wheel is not an effective search tool, that random chance just slows things down and is ineffective as a machine training tool. Yet Cormack and Grossman took the time to prove the obvious because the legal profession is being led astray. Many are actually using chance as if it were a valid search method, although perhaps not in the way they describe. As Cormack and Grossman explained in their report:

While it is perhaps no surprise to the information retrieval community that active learning generally outperforms random training [22], this result has not previously been demonstrated for the TAR Problem, and is neither well known nor well accepted within the legal community.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014, at pg. 8.

As this quoted comment suggests, everyone in the information science search community already knew that the random-only approach to search is inartful. So do most lawyers, especially the ones with years of hands-on experience in searching for relevant ESI. So why in the world is random-only search still promoted by some software companies and their customers? Is it really to address the so-called problem of “not knowing what you don’t know”? That is the alleged inherent bias of using knowledge to program the AI. The total-random approach is also supposed to prevent overt, intentional bias, where lawyers might try to mis-train the AI search algorithm on purpose. These may be the reasons stated by vendors, but there must be other reasons, because these excuses do not hold water. This was addressed in Part One of this article.

This bias-avoidance claim must just be an excuse, because there are many better ways to counter the myopic effects of a search driven too narrowly. There are many methods and software enhancements that can avoid overlooking important, not-yet-discovered types of relevant documents. For instance, allow machine selection of uncertain documents, as was done here with the SAL protocol. You could also include some random document selection in the mix, rather than making the whole thing random. It is not all or nothing, not logically at least, although perhaps it is as a practical matter for some software.
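Uncertainty selection of the kind used in the SAL protocol is, for instance, only a few lines of logic. A minimal sketch, assuming scores maps each document index to the model’s predicted probability of relevance:

```python
def sal_batch(scores, unreviewed, n=100):
    """SAL-style uncertainty sampling: pick the documents whose
    predicted probability of relevance is closest to 0.5, where the
    model is least certain and training feedback helps it the most."""
    return sorted(unreviewed, key=lambda i: abs(scores[i] - 0.5))[:n]
```

Blending a handful of such uncertain picks into each batch, alongside high-ranked, keyword-driven, and a few random selections, addresses the blind-spot worry without surrendering the whole process to chance.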

My preferred solution to the problem of “not knowing what you don’t know” is to use a combination of all of those methods, buttressed by a human searcher who is aware of the limits of knowledge. I mean, really! The whole premise behind using random selection as the only way to avoid the self-looping trap of “not knowing what you don’t know” assumes that the lawyer-searcher is a naive boob or a dishonest scoundrel. It assumes lawyers are unaware that they don’t know what they don’t know. Please, we know that perfectly well. All experienced searchers do. This insight is not the exclusive knowledge of engineers and scientists. Very few attorneys are that arrogant and self-absorbed, or that naive and simplistic in their approach to search.

No, this whole “you must use random-only search to avoid prejudice” line is just a smoke screen to hide the real reason a vendor sells software that only works that way. The real reason is that poor software design decisions were made in a rush to get predictive coding software to market. Software was designed to use only random search because software like that was easy and quick to build. It allowed for quick implementation of machine training. Such simplistic types of AI software may work better than poorly designed keyword searches, but they are still far inferior to more complex machine-training systems, as Cormack and Grossman have now proven. They are inferior to a multimodal approach.

The software vendors with random-only training need to move on. They need to invest in their software and adopt a multimodal approach. In fact, it appears that many have already done so, or are in the process. Yes, such software enhancements take time and money to implement. But we need search tools for adults. Stop all of the talk about easy buttons. Lawyers are not simpletons. We embrace hard work. We are masters of complexity. Give us choices. Empower the software so that more than one method can be used. Do not force us to use only random selection.

We need software tools that respect the ability of attorneys to perform effective searches for evidence. This is our sandbox. That is what we attorneys do: we search for evidence. The software companies are here to give us tools, not to tell us how to search. Let us stop the arguments and move on to discuss more sophisticated search methods and tools that empower complex methods.

Attorneys want software with the capacity to integrate all search functions, including random, into a multimodal search process. We do not want software with only one type of machine-training ability, be it CAL, SAL, or SPL. We do not want software that can only do one thing, with a vendor who then builds a false ideology around that single capacity, claiming that its method is the best and only way. These are legal issues, not software issues.

Attorneys do not want just one search tool; we want a whole tool chest. The marketplace will sort out whose tools are best, and so will science. To remain competitive, vendors need to sell the biggest tool chest possible, and make sure the tools are well built and perform as advertised. Do not just sell us a screwdriver and tell us we do not need a hammer and pliers too.

Leave the legal arguments as to reasonability and rules to lawyers. Just give us the tools and we lawyers will find the evidence we need. We are experts at evidence detection. It is in our blood. It is part of our proud heritage, our tradition.

Finding evidence is what lawyers do. The law has been doing this for millennia. Think back to the story of the judicial decision of King Solomon. He awarded the child to the woman he saw cry in response to his sham decision to cut the baby in half. He based his decision on the facts, not ideology. He found the truth in clever ways built around facts, around evidence.

Lawyers always search for evidence so that justice can be done. The facts matter. It has always been an essential part of what we do. Lawyers always adapt with the times. We always demand and use the best tools available to do our job. Just think of Abraham Lincoln, who readily used the telegraph, the great new high-tech invention of his day. When you want to know the truth of what happened in a recent event, you hire a lawyer, not an engineer or a scientist. That is what we are trained to do. We separate the truth from the lies. With great tools we can and will do an even better job.

Many multimodal-based software vendors already understand all of this. They build software that empowers attorneys to leverage their knowledge and skills. That is why we use their tools. Empowering attorneys with the latest AI tools empowers our entire system of justice. That is why the latest Cormack and Grossman study is so important. That is why I am so passionate about this. Join us in this. Demand diversity and many capacities in your search software, not just one.

Vendor Wake-Up Call and Plea for Change

My basic message to all manufacturers of predictive coding software who use only one type of machine-training protocol is to change your ways. I mean no animosity at all. Many of you have great software already; it is just the monomodal method built into your predictive coding features that I challenge. This is a plea for change, for diversity. Sell us a whole tool chest, not just a single, super-simple tool.

Yes, upgrading software takes time and money. But all software companies need to do that anyway to continue supplying tools to lawyers in the Twenty-First Century. Take this message as both a wake-up call and a respectful plea for change.

Dear software designers: please stop trying to make the legal profession look only under the random lamp. Treat your attorney customers like mature professionals who are capable of complex analysis and skills. Do not just assume that we do not know how to perform sophisticated searches. I am not the only attorney with multimodal search skills; I am just the only one with a blog who is passionate about it. There are many out there with very sophisticated skills and knowledge. They may not be as old (I prefer to say experienced) and loud-mouthed (I prefer to say outspoken) as I am, but they are just as skilled. They are just as talented. More importantly, their numbers are growing rapidly. It is a generational thing too, you know. Your next generation of lawyer customers are just as comfortable with computers and big data as I am, maybe more so. Do you really doubt that Adam Losey and his generation will surpass our accomplishments with legal search? I don’t.

Dear software designers: please upgrade your software and get with the multi-feature program. Then you will have many new customers, and they will be empowered customers. Do not have the money to do that? Show your CEO this article. Lawyers are not stupid. They are catching on, and they are catching on fast. Moreover, these scientific experiments and reports will keep coming. The truth will come out. Do you want to survive the inevitable vendor closures and consolidations? Then you need to invest in more sophisticated, fully featured software. Your competitors are.

Dear software designers: please abandon the single-feature approach; then you will be welcome in the legal search sandbox. I know that the limited-functionality software some of you have created is really very good. It already has many other search capacities. It just needs to be better integrated with predictive coding. Apparently some single-feature software already produces decent results, even with the handicap of random-only training. Continue to enhance and build upon your software. Invest in the improvements needed to allow for full multimodal, active, judgmental search.

Conclusion

A random-only search method for selecting predictive coding training documents is ineffective. The same applies to any other training method applied to the exclusion of all others. Any experienced searcher knows this. Software that relies solely on a random-only method should be enhanced and modified to allow attorneys to search where they know. All types of training techniques should be built into AI-based software, not just random. Random may be easy, but it is foolish to search only under the lamp post. It is foolish to turn a blind eye to what you know. Attorneys, insist on having your own flashlight that empowers you to look wherever you want. Shine your light wherever you think appropriate. Use your knowledge. Equip yourself with a full tool chest that allows you to do that.