Hybrid Multimodal is the preferred legal search method of the e-Discovery Team. The Hybrid part of our method means that our Computer Assisted Review, our CAR, uses active machine learning (predictive coding), but still has a human driver. They work together. Our review method is thus like Tesla’s Model S with full Autopilot capabilities. It is designed to be driven by both Man and Machine. Our CAR is unlike the Google car, which can only be driven by a machine. When it comes to legal document review, we oppose fully autonomous driving. In our view there is no place for a Google car in legal search.
Google cars have no steering wheel, no brakes, no gas pedal, no way for a human to drive them at all. They are fully autonomous. A human driver cannot take over, even if they want to. In Google’s view, allowing humans to take over makes driverless cars less safe. Google thinks passengers could try to assert themselves in ways that could lead to a crash, so it is safer for the car to remain fully autonomous.
We have no opinion about the driverless automobile debate, and only like the analogy up to a point. Our opinion is limited to computer assisted review CARs that search for relevant evidence in lawsuits. For purposes of Law, we want our CARs to be like a Tesla. You can let the car drive and go hands free, if and when you want to. The Tesla AI will then drive the car for you. But you can still drive the car yourself. The second you grab the wheel, the Tesla senses that and turns the Autopilot off. Full control is instantly passed back to you. It is your car, and you are the driver, but you can ask your car to help you drive, when, in your judgment, that is appropriate. For instance, it has excellent fully autonomous parallel parking features, and you can even summon it to come pick you up from a nearby parking lot, a truly cool valet service. It is also good in slow commuter traffic and on highways, much like cruise control.
When it comes to law, and legal review, we want an attorney’s hands on, or at least near, the wheel at all times. Our Hybrid Multimodal approach includes an autopilot mode using active machine learning, but our attorneys are always responsible. They may allow the programmed AI to take over in some situations, and go hands free, much like autonomous parallel parking or highway driving, but they always control the journey.
Defining the Terms
The e-Discovery Team’s Hybrid Multimodal method of document review is based on a flexible blend of human and machine skills, where a lawyer may often delegate, but always retains control. Before we explore this further, a quick definition of terms is in order. Multimodal means that we use all kinds of search methods, not just one type. For example, we do not just use active machine learning, a/k/a Predictive Coding, to find relevant documents. We do not just use keyword search, or concept search. We use every kind of search we can. This is shown in the search pyramid below, which does not purport to be complete, but catches the main types of document search used today. Using our car analogy, this means that when a human drives, they have a stick shift and can run in many gears, that is, use many search engines. They can also let go of the wheel, when they want to, and use AI-enhanced search.
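The multimodal idea above can be sketched in a few lines of code: several independent search methods each score the same document collection, and their results are merged so that no single method decides alone. This is only an illustrative toy, assuming made-up documents and trivial scoring functions; it is not any real review platform’s API.

```python
# Toy sketch of multimodal search: independent methods score the same
# collection, and a merge step combines them. All functions here are
# hypothetical illustrations, not a real product interface.

def keyword_search(docs, terms):
    """Score 1.0 for any document containing at least one keyword."""
    return {i: 1.0 for i, d in enumerate(docs)
            if any(t in d.lower() for t in terms)}

def similarity_search(docs, seed):
    """Score documents by word overlap with a known relevant seed doc."""
    seed_words = set(seed.lower().split())
    scores = {}
    for i, d in enumerate(docs):
        words = set(d.lower().split())
        overlap = len(words & seed_words) / max(len(words | seed_words), 1)
        if overlap > 0:
            scores[i] = overlap
    return scores

def merge(*rankings):
    """Sum the scores; a document found by several methods rises."""
    combined = {}
    for ranking in rankings:
        for doc_id, score in ranking.items():
            combined[doc_id] = combined.get(doc_id, 0.0) + score
    return sorted(combined, key=combined.get, reverse=True)

docs = [
    "meeting notes on the merger agreement",
    "lunch menu for friday",
    "draft merger agreement with penalty clause",
]
hits = merge(keyword_search(docs, ["merger"]),
             similarity_search(docs, "merger agreement penalty"))
print(hits)  # -> [2, 0]; the irrelevant lunch menu never surfaces
```

The point of the merge step is the “many gears” of the pyramid: each method has blind spots, but a document flagged by several of them is a strong candidate for human review.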
We call this a Hybrid method because of the manner in which we use one particular kind of search, predictive coding. To us predictive coding means active machine learning. See, e.g., Legal Search Science. It is a Man-Machine process, a hybrid process, where we work together with our machine, our robot, whom we call Mr. EDR. In other words, we use the artificial intelligence generated by active machine learning, but we keep lawyers in the loop. We stay involved, hands on or near the wheel.
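The hybrid active-learning loop can also be sketched in code: the machine ranks the unreviewed documents, a human reviewer labels the top candidates, and the model retrains on the growing set of labels. Everything here is a hypothetical stand-in; the word-overlap “model” and the `lawyer_reviews` stub are toy substitutes for real learning software and a real attorney’s judgment.

```python
# Toy sketch of the hybrid active-learning loop: machine ranks,
# human labels, machine retrains. All names are illustrative.

def train(labeled, docs):
    """'Model' = the set of words seen in documents marked relevant."""
    model = set()
    for doc_id, relevant in labeled.items():
        if relevant:
            model |= set(docs[doc_id].lower().split())
    return model

def rank(model, docs, labeled):
    """Rank unreviewed documents by word overlap with the model."""
    unreviewed = [i for i in range(len(docs)) if i not in labeled]
    score = lambda i: len(set(docs[i].lower().split()) & model)
    return sorted(unreviewed, key=score, reverse=True)

def lawyer_reviews(doc):
    """Stub for the human in the loop; a real reviewer decides here."""
    return "merger" in doc.lower()

docs = [
    "merger agreement draft",           # seed, marked relevant
    "quarterly merger planning memo",
    "cafeteria menu",
    "agreement on merger terms",
]
labeled = {0: True}  # the lawyer seeds the process with one judgment
for _ in range(2):   # two rounds of machine ranking + human review
    model = train(labeled, docs)
    for doc_id in rank(model, docs, labeled)[:1]:  # review top candidate
        labeled[doc_id] = lawyer_reviews(docs[doc_id])
found = sorted(i for i, rel in labeled.items() if rel)
print(found)  # -> [0, 1, 3]; the cafeteria menu is never reviewed
```

The loop is the “hands near the wheel” idea in miniature: the machine proposes, but every label that feeds the next round of training comes from a human decision.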
Augmentation, Not Automation
The e-Discovery Team’s Hybrid approach enhances what lawyers do in document review. It improves our ability to make relevance assessments of complex legal issues. The hybrid approach thus leads to augmentation, where lawyers can do more, faster and better. It does not lead to automation, where lawyers are replaced by machines.
The Hybrid Multimodal approach is designed to improve a lawyer’s ability to find evidence. It is not designed to fully automate the tasks. It is not designed to replace lawyers with robots. Still, since one lawyer with our methods can now do the work of hundreds, some lawyers will inevitably be out of a job. They will be replaced by other, more tech-savvy lawyers who can work with the robots, who can control them and be empowered by them at the same time. This development in turn creates new jobs for the experts who design and care for the robots, and for lawyers who find new ways to use them.
We think that empowering lawyers, and keeping them in the loop, hands near the wheel, is a good thing. We believe that lawyers bring an instinct and a moral sense that is way beyond the grasp of all automation. Moreover, at least today, lawyers know the law, and robots do not. The active machine learning process – predictive coding – begins with a blank slate. Our robots only know what we teach them about relevance. This may change soon, but we are not there yet. See PreSuit.com. Another advantage that we currently have, again one that may someday be replaced, is legal analysis. Humans are capable of legal reasoning, at least after years of schooling and years of legal practice. Right now no machine in the world is even close. But again, we concede this may someday be automated, but we suspect this is at least ten years away.
The one thing we do not think can ever be automated is the human moral sense of right and wrong, our ethics, our empathy, our humor, our instinct for justice, and our capacity for creativity and imagination, for molding novel remedies to attain fair results in new fact scenarios. This means that, at the present time at least, only lawyers have an instinct for the probative value of documents and their ability to persuade. Even if legal knowledge and legal analysis are some day programmed into a machine, we contend that the unique human qualities of ethics, fairness, empathy, humor, imagination, creativity, flexibility, etc., will always keep trained lawyers in the loop. When it comes to questions of law and justice, humans will always be needed to train and supervise the machines. Not everyone agrees with us.
There is a struggle going on about this right now, one that is largely under the radar. The clash became apparent to the e-Discovery Team during our venture into the world of science and academia at TREC 2015. Some argue that lawyers should be replaced, not enhanced. They favor fully automated methods for a variety of reasons, including cost, a point with which we agree, but also including the alleged inherent unreliability and dishonesty of humans, especially lawyers, a point with which we strenuously disagree. Some scientists and technologists do not appreciate the unique capabilities that humans bring to legal search. More than that, some even think that lawyers should not be trusted to find evidence, especially documents that could hurt their client’s case. They doubt our ability to be honest in an adversarial system of justice. They see the cold hard logic of machines as the best answer to human subjectivity and deceitfulness. They see machines as the impartial counter-point to human fallibility. They would rather trust a machine than a lawyer. They see fully automated processes as a way to overcome the base elements of man. We do not. This is an important Roboethics issue that has ramifications far beyond legal search.
Although we have faced our fair share of dishonest lawyers, we still contend they are the rare exception, not the rule. Lawyers can be trusted to do the right thing. The few bad actors can be policed. The existence of a few unethical lawyers should not dictate the processes used for legal search. That is the tail wagging the dog. It makes no sense and, frankly, is insulting. Just because there are a few bad drivers on the road does not mean that everyone should be forced into a Google car. Plus, please remember the obvious: these same bad actors could also program their robots to do evil for them. Asimov’s laws are a fiction. Not only that, think of the hacking exposure. No. Turning it all over to supposedly infallible and honest machines is not the answer. A hybrid relationship with Man in control is the answer. Trust, but verify.
The e-Discovery Team members have been searching for evidence, both good and bad, all of our careers. We do not put our thumb on the scale of justice. Neither do the vast majority of attorneys. We do, however, routinely look for ways to show bad evidence in a good light; that is what lawyers are supposed to do. Making silk purses out of sow’s ears is Trial Law 101. But we never hide the ears. We argue the law, and application of the law to the facts. We also argue what the facts may be, what a document may mean for instance, but we do not hide facts that should be disclosed. We do not destroy or alter evidence. Explaining is fine, but hiding is not.
Many laypersons outside of the law do not understand the clear line. The same misunderstanding applies to some novice lawyers too, especially the ones that have only heard of trials. Hiding and destroying evidence are things that criminals do, not lawyers. If we catch opposing counsel hiding the ball, we respond accordingly. We do not give up and look for ways to turn our system of justice over to cold machines.
We should not take away everyone’s license just because a few cannot drive straight. A Computer Assisted Review guided by AI alone has no place in the law. AI guidance is fine, we encourage that, that is what Hybrid means, but the CARs should always have a steering wheel and brakes. Lawyers should always participate. It is total delegation to AI that we oppose, fully automated search. Legal robots can and should be our friends, but they should never be our masters.
Having said that, we do concede that the balance between Man and Machine is slowly shifting. The e-Discovery Team is gradually placing more and more reliance on the Machine. We learned many lessons on that in our participation in the TREC experiments in 2015. The fully automated methods that the academic teams used did surprisingly well, at least in relatively simple searches requiring limited legal analysis. We expect to put greater and greater reliance on AI in years to come as the software improves, but we will always keep our hands near the wheel.
We believe in a collaborative Man-Machine process, but insist that Man, here Lawyers, be the leaders. The buck must stop with the attorney of record, not a robot, even a superior AI like our Mr. EDR. Man must be responsible. Artificial intelligence can enhance our own intelligence, but should never replace it. Back to the AI car analogy: we can and should let the robot drive from time to time, it is, for instance, great at parallel parking, but we should never discard the steering wheel. Law is not a logic machine, nor should it be. It is an exercise in ethics, in fairness, justice and empathy. We should never forget the priority of the human spirit. We should never put too much faith in inhuman automation.
For more on these issues, the hybrid multimodal method, competition with fully automated methods, and much more, please see the e-Discovery Team’s final report of its participation in the 2015 TREC, Total Recall Track, found on NIST’s web at: http://trec.nist.gov/pubs/trec24/papers/eDiscoveryTeam-TR.pdf. It was just published last week. At 116 pages, it should help you to fall asleep for many nights, but hopefully, not while you are driving like the bozos in the hands-free driving video below.
Other participant papers (manual and automatic) are also available in the TREC proceedings: http://trec.nist.gov/pubs/trec24/trec2015.html
An overview paper will be added within the next few weeks; in the interim use your browser to search for “Total Recall” within the proceedings to find participant papers.
Google’s car AI team reports that they face a significant challenge: determining when humans have trained the car sufficiently for it to be competent, and returning control to the human when it isn’t.
We face the same challenge with our industry’s CAR (aka TAR) technology.
Not only are CAR/TAR’s capabilities vaguely defined but, as you focus on here, the process of our teaching the AI what it needs to “know” without confusing it or manipulating it is unclear, too.
My experience is that such training requires experienced awareness of (and sometimes wisdom about) the problem space: issues, terms of art, corpus, machine learning’s inferences, etc. Others, however, seem to view that coaching as meddlesome and a bit superstitious (e.g., “TAR Whispering”).
How can we hope to view CAR/TAR as reliable until we define its capabilities, its necessary operating requirements and its limitations and, as you point out here, determine how human intelligence can train Artificial Intelligence reliably?
Actually, we think we have figured out how, as you put it, “human intelligence can train Artificial Intelligence reliably.” That is our multimodal method, as further explained in Predictive Coding 3.0. See below.
This current blog, using the analogy of the Google car, is saying that skilled humans need to be involved in the training; that, in the Law at least, AI should not train itself without active human (lawyer) supervision and participation.
Sorry I didn’t respond to you earlier on that. Of course, I fully agree with your wisdom comments, etc. and, as always, appreciate your views and comments.
The Car and Its Operation
What must be done before starting the Car? Answer No. 1
Before starting the car, fill the radiator (by removing cap at top) with clean fresh water. If perfectly clean water cannot be obtained it is advisable to strain it through muslin or other similar material [ . . . ]
What about Gasoline? Answer No. 2
The ten-gallon gasoline tank should be filled—nearly full—and the supply should never be allowed to get low. When filling the tank be sure that there are no naked flames within several feet, as the vapor is extremely volatile and travels rapidly [ . . . ]
How about the Oiling System? Answer No. 3
Upon receipt of the car see that a supply of medium light, high-grade gas engine oil is poured into the crank case through the breather pipe at the front of the engine [ . . . ]
How are Spark and Throttle Levers used? Answer No. 4
Under the steering wheel are two small levers. The right-hand (throttle) lever controls the amount of mixture (gasoline and air) which goes into the engine. When the engine is in operation, the farther this lever is moved downward toward the driver (referred to as “opening the throttle”) the faster the engine runs and the greater the power furnished. The left-hand lever controls the spark, which explodes the gas in the cylinders of the engine. The advancing of this lever “advances the spark,” [ . . . ]
[ . . . ]
What care should be given the Filling Plugs and Connections? Answer No. 141
Keep the filling plugs and connections tight and the top of the battery clean. Wiping the battery with a rag moistened with ammonia will counteract the effect of any of the solution which may be on the outside of the battery. A coating of heavy oil or vaseline will protect the connectors from corrosion.
Very funny, Gordon. And if you did not properly prepare, you might very well have driven into a TAR pit.