Is the IRS’s Inability to Find Emails the Result of Unethical Behavior? New Opinion by U.S. Tax Court Provides Some Clues – Part 2

October 5, 2014

This is Part Two of the essay where I go into the specifics of the holding in Dynamo. Please read Part One first: Is the IRS’s Inability to Find Emails the Result of Unethical Behavior? New Opinion by U.S. Tax Court Provides Some Clues – Part One. There I pointed out that the IRS’s attitude towards email discovery, particularly predictive coding, shows that they belong to the unethical category I call The Clueless. Yes, the IRS is clueless, though not in an affable Pink Panther Inspector Clouseau way, but in the arrogant, super-know-it-all way of egomaniac types. It is wonderfully personified in Ms. Lerner’s face during her Congressional non-testimony. As Congress did to Lerner, the Tax Court in Dynamo properly cut down the IRS attorneys and rejected all of their anti-predictive-coding nonsense arguments.

Dynamo Holdings Opinion

Dynamo Holdings, Ltd. vs. Commissioner, 143 T.C. No. 9 (Sept. 17, 2014) is a very well written opinion of the United States Tax Court by Judge Ronald L. Buch. I highly recommend that you study and cite this opinion. It is so good that I have decided to devote the rest of this blog to quotation of the portions of it that pertain to predictive coding.

I cannot refrain from providing some comments too, of course; otherwise what would be the point of doing more than providing a link? But for the sake of clarity, and purity, although I will intermix my [side bar comments] along with the quotes, I will do so with blue font, and italics, so you will not mistake the court’s words for my own. Yes, I know, that is not how you do things in law review articles, that this is way too creative. So what? It will be a lot more interesting for you to read it that way, and quicker too. So damn the old rules of legal writing, here goes.

[P]etitioners request that the Court let them use predictive coding, a technique prevalent in the technological industry but not yet formally sanctioned by this Court, to efficiently and economically identify the nonprivileged information responsive to respondent’s discovery request. [The Petitioners are the defendants, and the Respondent is the plaintiff, the IRS. The IRS sued to collect tax on certain transfers between business entities alleging they were disguised gifts to the owners of Dynamo. Seems like a pretty clear cut issue to me, and I cannot see why it was necessary to look at millions of emails to find out what happened. The opinion does not explain that. The merits of the case are not addressed and a detailed proportionality analysis is not provided.]

Respondent [IRS] opposes petitioners’ request to use predictive coding because, he states, predictive coding is an “unproven technology”. Respondent adds that petitioners need not devote their claimed time or expense to this matter because they can simply give him access to all data on the two tapes and preserve the right (through a “clawback agreement”) to later claim that some or all of the data is privileged information not subject to discovery.2 [This is the disingenuous part I referred to previously.]

FN 2 – We understand respondent’s use of the term “clawback agreement” to mean that the disclosure of any privileged information on the tapes would not be a waiver of any privilege that would otherwise apply to that information.

The Court held an evidentiary hearing on respondent’s motion. [It looks like the Tax Court followed Judge David Waxse on this often debated issue as to whether an evidentiary hearing should be provided, though it only went part way. As you will see, a full scale Daubert type hearing was not provided. Instead, Judge Buch treated their testimony as informal input. Most judges agree that this is appropriate, even if they do not agree with Judge Waxse's position that Daubert type rulings are appropriate in a mere discovery dispute. Most judges I have talked to think that Evidence Rule 702 does not apply, since there is no evidence or trial, and no presentation to the jury to protect; there is just a dispute as to discovery search methods.]

[W]e hold that petitioners must respond to respondent’s discovery request but that they may use predictive coding in doing so. [The defendants had argued they should not have to search two backup tapes for email at all, and the use of predictive coding was a fall back argument. The decision did not provide any detailed explanation as to necessity, and I get the impression that it was not really pushed, that the main focus of the briefs was on predictive coding.]

Petitioners ask the Court to let them use predictive coding to efficiently and economically help identify the nonprivileged information that is responsive to respondent’s discovery request. More specifically, petitioners want to implement the following procedure to respond to the request: [I have omitted the first four reasons as not terribly interesting.] … 5. Through the implementation of predictive coding, review the remaining data using search criteria that the parties agree upon to ascertain, on the one hand, information that is relevant to the matter, and on the other hand, potentially relevant information that should be withheld as privileged or confidential information.

[T]he Court is not normally in the business of dictating to parties the process that they should use when responding to discovery. [This is a very important point. See Sedona Principle Six. The defendants did not really need the plaintiff's approval to use predictive coding. Judge Buch is suggesting that this whole permission motion is an unnecessary waste of time, but he will indulge them anyway and address it. I for one am glad that he did.] If our focus were on paper discovery, we would not (for example) be dictating to a party the manner in which it should review documents for responsiveness or privilege, such as whether that review should be done by a paralegal, a junior attorney, or a senior attorney. Yet that is, in essence, what the parties are asking the Court to consider–whether document review should be done by humans or with the assistance of computers. [These are all very good points.] Respondent fears an incomplete response to his discovery. [Parties in litigation always fear that. The U.S. employs a "trust based" system of discovery that relies on the honesty of the parties, and especially relies on the honesty and cooperativeness of the attorneys who conduct the discovery. There are alternatives, like having judges control discovery. Most of the world has such judge controlled discovery, but lawyers in the U.S. do not want that, and it is doubtful that taxpayers would want to fund an alternative court based approach.] If respondent believes that the ultimate discovery response is incomplete and can support that belief, he can file another motion to compel at that time. Nonetheless, because we have not previously addressed the issue of computer-assisted review tools, we will address it here.

Each party called a witness to testify at the evidentiary hearing as an expert. Petitioners’ witness was James R. Scarazzo. Respondent’s witness was Michael L. Wudke. [I added these links. Scarazzo is with the well known vendor, FTI, in Washington D.C., and Wudke is with another vendor in N.Y., Transperfect Legal Solutions. He used to be with Deloitte.] The Court recognized the witnesses as experts on the subject matter at hand. We may accept or reject the findings and conclusions of the experts, according to our own judgment.

Predictive coding is an expedited and efficient form of computer-assisted review that allows parties in litigation to avoid the time and costs associated with the traditional, manual review of large volumes of documents. Through the coding of a relatively small sample of documents, computers can predict the relevance of documents to a discovery request and then identify which documents are and are not responsive. The parties (typically through their counsel or experts) select a sample of documents from the universe of those documents to be searched by using search criteria that may, for example, consist of keywords, dates, custodians, and document types, and the selected documents become the primary data used to cause the predictive coding software to recognize patterns of relevance in the universe of documents under review. The software distinguishes what is relevant, and each iteration produces a smaller relevant subset and a larger set of irrelevant documents that can be used to verify the integrity of the results. [That is not technically correct, at least not in most cases. The relevant subset does not get smaller and smaller. The probability predictions do, however, get more accurate. True predictive coding as used by most vendors today is active machine learning. It ranks all documents by their probability of relevance. See Eg. AI-EnhancedReview.com] Through the use of predictive coding, a party responding to discovery is left with a smaller set of documents to review for privileged information, resulting in a savings both in time and in expense. [Now the judge is back on track and this is an essential truth.] The party responding to the discovery request also is able to give the other party a log detailing the records that were withheld and the reasons they were withheld. [Judge Buch is referring to the privilege log, or in some cases, also a confidentiality log.]
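[Side bar: the active machine learning process the court is describing can be sketched in a few lines of Python. This is my own toy illustration using scikit-learn, not the software of any vendor or party in the case; the documents, seed coding, and model choice are all invented for the example:]

```python
# Toy sketch of predictive coding as active machine learning:
# a reviewer codes a small seed set, the model is trained on it,
# and then EVERY document gets a probability-of-relevance score.
# It is the ranking that improves with each round of training,
# not a shrinking "relevant subset."
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "transfer of funds between the two entities recorded as a loan",
    "gift tax treatment of intercompany transfers to the owners",
    "office holiday party catering menu and RSVP list",
    "loan repayment schedule and promissory note terms",
    "fantasy football league standings for the quarter",
    "memo on disguised gifts versus bona fide loans",
]

# The reviewer's coding of a seed set (1 = relevant, 0 = not relevant).
seed_indices = [0, 2, 3, 4]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)  # bag-of-words features

model = LogisticRegression()
model.fit(X[seed_indices], seed_labels)  # train on the coded seed set

# Rank all documents, coded and uncoded, by probability of relevance.
scores = model.predict_proba(X)[:, 1]
ranking = sorted(range(len(documents)), key=lambda i: -scores[i])
for i in ranking:
    print(f"{scores[i]:.2f}  {documents[i]}")
```

[In a real project the loop repeats: the reviewer codes more documents drawn from the top of the ranking, or from the model's uncertain middle range, the model retrains, and the ranking sharpens. That iterative feedback is what distinguishes active machine learning from a one-shot keyword search.]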

Magistrate Judge Andrew Peck published a leading, oft-cited article on predictive coding which is helpful to our understanding of that method. [Of course Judge Peck’s photograph is not in the opinion.] See Andrew Peck, “Search, Forward: Will Manual Document Review and Keyword Searches be Replaced by Computer-Assisted Coding?”, L. Tech. News (Oct. 2011). The article generally discusses the mechanics of predictive coding and the shortcomings of manual review and of keyword searches. The article explains that predictive coding is a form of “computer-assisted coding”, which in turn means “tools * * * that use sophisticated algorithms to enable the computer to determine relevance, based on interaction with (i.e., training by) a human reviewer.” Id. at 29. The article explains that:

Unlike manual review, where the review is done by the most junior staff, computer-assisted coding involves a senior partner (or team) who review and code a “seed set” of documents. [Judge Peck wrote this back in 2011. I believe his understanding of the “senior partner” level skill needed for training has since evolved. I can elaborate, but it would take us too far astray. Let’s just say what is needed is a single, or at least, very small team of real experts on the relevance facts at issue in the case. See Eg. Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” – Part One, Part Two, Part Three.] The computer identifies properties of those documents that it uses to code other documents. As the senior reviewer continues to code more sample documents, the computer predicts the reviewer’s coding. (Or, the computer codes some documents and asks the senior reviewer for feedback.)

When the system’s predictions and the reviewer’s coding sufficiently coincide, the system has learned enough to make confident predictions for the remaining documents. Typically, the senior lawyer (or team) needs to review only a few thousand documents to train the computer. [The number depends, of course. For some projects, tens of thousands of documents may be needed over multiple iterations to adequately train the computer. Some projects are much harder than others, despite the skills of the search designers involved. Yes, it takes a great deal of skill and experience to properly design a large predictive coding search and review project. It also takes good predictive coding software that ranks all document probabilities.]

Some systems produce a simple yes/no as to relevance, while others give a relevance score (say, on a 0 to 100 basis) that counsel can use to prioritize review. For example, a score above 50 may produce 97% of the relevant documents, but constitutes only 20% of the entire document set. [All good software today ranks all documents, typically 0 to 100% probability, rather than give a simplistic yes/no ranking.]

Counsel may decide, after sampling and quality control tests, that documents with a score of below 15 are so highly likely to be irrelevant that no further human review is necessary. Counsel can also decide the cost-benefit of manual review of the documents with scores of 15-50. [Typically the cut off point is way above 15% probability. I have no idea where that number came from. A more logical and frequent number is below 50%, meaning they are probably not relevant.]

Id.
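[Side bar: Judge Peck's score-based triage is easy to illustrate with made-up numbers. In the sketch below the relevance scores are invented; only the 50 and 15 cutoffs come from his article:]

```python
# Hypothetical relevance scores (0-100) for a small collection,
# illustrating score-based triage: review everything above one
# cutoff, weigh cost against benefit in the middle band, and set
# aside the low band after quality-control sampling.
scores = [96, 91, 88, 77, 64, 58, 52, 48, 41, 35,
          31, 27, 22, 18, 12, 9, 7, 5, 3, 1]

review_now   = [s for s in scores if s > 50]        # near-certain review
cost_benefit = [s for s in scores if 15 <= s <= 50]  # judgment-call band
set_aside    = [s for s in scores if s < 15]         # likely irrelevant

print(len(review_now), len(cost_benefit), len(set_aside))  # 7 7 6
```

[The point of the ranking is that the cutoffs are a business decision counsel can make and defend with sampling, rather than a binary yes/no baked into the software.]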

The substance of the article was eventually adopted in an opinion that states: “This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012), adopted sub nom. Moore v. Publicis Groupe SA, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012).

Respondent asserts that predictive coding should not be used in these cases because it is an “unproven technology”. We disagree. [The alternative methods, keyword search and linear human review are the "unproven technologies," not predictive coding. Indeed, the science proves that keyword and linear review are unreliable. See Eg. LEGAL SEARCH SCIENCE.  The new gold standard is active machine learning, aka predictive coding, not hundreds of low paid contract lawyers sitting in cubicles all day.] Although predictive coding is a relatively new technique, and a technique that has yet to be sanctioned (let alone mentioned) by this Court in a published Opinion, the understanding of e-discovery and electronic media has advanced significantly in the last few years, thus making predictive coding more acceptable in the technology industry than it may have previously been. In fact, we understand that the technology industry now considers predictive coding to be widely accepted for limiting e-discovery to relevant documents and effecting discovery of ESI without an undue burden.10 [Excellent point. Plus it is not really all that "new" by today's standards. It has been around in academic circles since the 1990s.]

FN 10 – Predictive coding is so commonplace in the home and at work in that most (if not all) individuals with an email program use predictive coding to filter out spam email. See Moore v. Publicis Groupe, 287 F.R.D. 182, n.2 (S.D.N.Y. 2012), adopted sub nom. Moore v. Publicis Groupe SA, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012).

See Progressive Cas. Ins. Co. v. Delaney, No. 2:11-cv-00678-LRH-PAL, 2014 WL 3563467, at *8 (D. Nev. July 18, 2014) (stating with citations of articles that predictive coding has proved to be an accurate way to comply with a discovery request for ESI and that studies show it is more accurate than human review or keyword searches); F.D.I.C. v. Bowden, No. CV413-245, 2014 WL 2548137, at *13 (S.D. Ga. June 6, 2014) (directing that the parties consider the use of predictive coding). See generally Nicholas Barry, “Man Versus Machine Review:  The Showdown between Hordes of Discovery Lawyers and a Computer-Utilizing Predictive-Coding Technology”, 15 Vand. J. Ent. & Tech. L. 343 (2013); Lisa C. Wood, “Predictive Coding Has Arrived”, 28 ABA Antitrust J. 93 (2013). The use of predictive coding also is not unprecedented in Federal litigation. See, e.g., Hinterberger v. Catholic Health Sys., Inc., No. 08-CV-3805(F), 2013 WL 2250603 (W.D.N.Y. May 21, 2013); In Re Actos, No. 6:11-md-2299, 2012 WL 7861249 (W.D. La. July 27, 2012); Moore, 287 F.R.D. 182. Where, as here, petitioners reasonably request to use predictive coding to conserve time and expense, and represent to the Court that they will retain electronic discovery experts to meet with respondent’s counsel or his experts to conduct a search acceptable to respondent, we see no reason petitioners should not be allowed to use predictive coding to respond to respondent’s discovery request. Cf. Progressive Cas. Ins. Co., 2014 WL 3563467, at *10-*12 (declining to allow the use of predictive coding where the record lacked the necessary transparency and cooperation among counsel in the review and production of ESI responsive to the discovery request).

Mr. Scarazzo’s expert testimony supports our opinion. He testified11 that discovery of ESI essentially involves a two-step process.

FN 11 – Mr. Wudke did not persuasively say anything to erode or otherwise undercut Mr. Scarazzo’s testimony. [This is to the credit of Mr. Wudke, an honest expert.]

First, the universe of data is narrowed to data that is potentially responsive to a discovery request. Second, the potentially responsive data is narrowed down to what is in fact responsive. He also testified that he was familiar with both predictive coding and keyword searching, two of the techniques commonly employed in the first step of the two-step discovery process, and he compared those techniques by stating:

[K]ey word searching is, as the name implies, is a list of terms or terminologies that are used that are run against documents in a method of determining or identifying those documents to be reviewed. What predictive coding does is it takes the type of documents, the layout, maybe the whispets of the documents, the format of the documents, and it uses a computer model to predict which documents out of the whole set might contain relevant information to be reviewed.

So one of the things that it does is, by using technology, it eliminates or minimizes some of the human error that might be associated with it. [Note proper use of the word "some," it eliminates some of the human error. It cannot be eliminated entirely.] Sometimes there’s inefficiencies with key word searching in that it may include or exclude documents, whereas training the model to go back and predict this, we can look at it and use statistics and other sampling information to pull back the information and feel more confident that the information that’s being reviewed is the universe of potentially responsive data.

He concluded that the trend was in favor of predictive coding because it eliminates human error and expedites review. [The modifier "some" to "eliminates human error" is not used here, and thus is a slight overstatement.]

In addition, Mr. Scarazzo opined credibly and without contradiction that petitioners’ approach to responding to respondent’s discovery request is the most reasonable way for petitioners to comply with that request. Petitioners asked Mr. Scarazzo to analyze and to compare the parties’ dueling approaches in the setting of the data to be restored from Dynamo’s backup tapes and to opine on which of the approaches is the most reasonable way for petitioners to comply with respondent’s request. Mr. Scarazzo assumed as to petitioners’ approach that the restored data would be searched using specific criteria, that the resulting information would be reviewed for privilege, and that petitioners would produce the nonprivileged information to respondent. He assumed as to respondent’s approach that the restored data would be searched for privileged information without using specific search criteria, that the resulting privileged information would be removed, and that petitioners would then produce the remaining data to respondent. As to both approaches, he examined certain details of Dynamo’s backup tapes, interviewed the person most knowledgeable on Dynamo’s backup process and the contents of its backup tapes (Dynamo’s director of information technology), and performed certain cost calculations.

Mr. Scarazzo concluded that petitioners’ approach would reduce the universe of information on the tapes using criteria set by the parties to minimize review time and expense and ultimately result in a focused set of information germane to the matter. He estimated that 200,000 to 400,000 documents would be subject to review under petitioners’ approach at a cost of $80,000 to $85,000, while 3.5 million to 7 million documents would be subject to review under respondent’s approach at a cost of $500,000 to $550,000. [This is a huge reduction, and shows the importance of predictive coding. It is a reduction of between 2.2 million and 6.6 million documents. That seems credible to me, but the actual cost saving quoted here seems off, or at least, seems incomplete. For instance, if you assume 300,000 documents, the mid-point of the estimated document count using predictive coding, and a projected cost of $85,000, that is only $0.28 per document. That is a valid number for the predictive coding culling process, but not for the actual review of the documents for confidentiality and privilege, and to confirm the privilege predictions.]
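[Side bar, continued: the per-document arithmetic is easy to verify. A quick check, assuming the midpoints of the ranges in the testimony:]

```python
# Midpoints of Mr. Scarazzo's estimated ranges (my assumption):
# 200K-400K docs at $80K-$85K versus 3.5M-7M docs at $500K-$550K.
predictive_docs, predictive_cost = 300_000, 82_500
manual_docs, manual_cost = 5_250_000, 525_000

print(round(predictive_cost / predictive_docs, 2))  # 0.28 per document
print(round(manual_cost / manual_docs, 2))          # 0.10 per document
```

[Note that the respondent’s approach comes out even cheaper per document, which reinforces the point above: these figures must reflect processing and culling, not full eyes-on attorney review, which is commonly estimated at a dollar or more per document.]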

Our Rules, including our discovery Rules, are to “be construed to secure the just, speedy, and inexpensive determination of every case.” Rule 1(d). Petitioners may use predictive coding in responding to respondent’s discovery request. If, after reviewing the results, respondent believes that the response to the discovery request is incomplete, he may file a motion to compel at that time. See Rule 104(b), (d).


Should Lawyers Be Big Data Cops?

September 1, 2014

Many police departments are using big data analytics to predict where crime is likely to take place and prevent it. Should lawyers do the same to predict and stop illegal, non-criminal activities? This is not the job of police, but should it be the job of lawyers? We already have the technology to do this, but should we? Should lawyers be big data cops? Does anyone even want that?

Crime Prevention by Data Analytics is Already in Use by Many Police Departments

The NY Times reported on this back in 2011 when it was relatively new: Sending the Police Before There’s a Crime. The Times reported how the Santa Cruz California police were using data analysis to predict where burglaries and other crimes might take place and to deploy police officers accordingly:

The arrests were routine. Two women were taken into custody after they were discovered peering into cars in a downtown parking garage in Santa Cruz, Calif. One woman was found to have outstanding warrants; the other was carrying illegal drugs.

But the presence of the police officers in the garage that Friday afternoon in July was anything but ordinary: They were directed to the parking structure by a computer program that had predicted that car burglaries were especially likely there that day.

The Times reported that several cities were already using data analysis to try to systematically anticipate when and where crimes will occur, including the Chicago Police Department. Chicago created a predictive analytics unit back in 2010.

This trend is growing and precrime detection technologies are now used by many police departments around the world, including the Department of Homeland Security, not to mention the NSA analytics of metadata. See eg The Minority Report: Using Predictive Analytics to prevent the crime from happening in the first place! (IBM); In Hot Pursuit of Numbers to Ward Off Crime (NY Times); Police embracing tech that predicts crimes (CNN); U.S. Cities Relying on Precog Software to Predict Murder (Wired). The analytics are already pretty good at predicting places and times where cars will be stolen, houses robbed and people mugged.

Although these programs help improve efficient crime fighting, they are not without serious privacy and due process critics. Imagine the potential abuses if an evil Big Brother government was not only watching you, but could arrest you based on computer predictions of what you might do. Although no one is arresting people yet for what they might do as in the Minority Report, they are subjecting people to significantly increased scrutiny, even home visits. See eg. Professor Elizabeth Joh, Policing by Numbers: Big Data and the Fourth Amendment; Professor Brandon Garrett, Big Data and Due Process; The minority report: Chicago’s new police computer predicts crimes, but is it racist? (The Verge, 2014); Eric Holder Warns About America’s Disturbing Attempts at Precrime. Do we really want to give computers, and the people who operate them, that much power? Does the Constitution as now written even allow that?

Should Lawyers Detect and Stop Law Suits Before They Happen?

Should lawyers follow our police departments and use data analytics to predict and stop illegal, but non-criminal activities? The police will not do it. It is beyond their jurisdiction. Their job is to fight crime, not torts, not breach of contract, nor the tens of thousands of other civil wrongs that people and corporations sue each other about every day. Should lawyers do it? Is that the next step for the plaintiff’s bar? Is that the next step for corporate defense lawyers? For corporate compliance lawyers? For the Civil Division of the Department of Justice? How serious is the potential loss of privacy and other rights if we go that route? What other risks do we take in using our newfound predictive coding skills in this way?

There are millions of civil wrongs committed each year that are beyond the purview of the criminal justice system. Many of them cause disputes, and many of these disputes in turn lead to state and federal litigation. Evidence of these illegal activities is present in both public and private data. Should lawyers mine this data to look for civil wrongs? Should the civil justice system include prevention? Should lawyers not only bring and defend law suits, but also prevent them?

This is not the future we are talking about here. The necessary software and search skills already exist to do this. Lawyers with big data skills can already detect and prevent breach of contract, torts, and statutory violations, if they have access to the data. It is already possible for skilled lawyers to detect and stop these illegal activities before damages are caused, before disputes arise, before law suits are filed. Lawyers with artificial intelligence enhanced evidence search skills can already do this.

I have written about this several times before and even coined a word for this legal service. I call it “PreSuit.” It is a play off the term PreCrime from the Minority Report movie. I have built a website that provides an overview on how these services can be performed. Some lawyers have even begun rendering such services. But should they? Some lawyers, myself included, know how to use existing predictive coding software to mine data and make predictions as to where illegal activities are likely to take place. We know how to use this predictive technology to intervene to prevent such illegal activity. But should we?


Just because new technology empowers us to do new things, does not mean we should. Perhaps we should refrain from becoming big data cops? We do not need the extra work. No one is clamoring for this new service. Should we build a new bomb just because we can?

Do we really want to empower an elite group of technology enhanced lawyers in this way? After all, society has gotten along just fine for centuries using traditional civil dispute resolution procedures. We have gotten along just fine by using a court system that imposes after-the-fact damages and injunctions to provide redress for civil wrongs. Should we really turn the civil justice system on its head by detecting the wrongs in advance and avoiding them?

Is it really in the best interest of society for lawyers to be big data cops? Or anyone else for that matter? Is it in the best interests of corporate world to have this kind of private police action? Is it in the best interest of lawyers? The public? What are the privacy and due process ramifications?

Some Preliminary Thoughts

I do not have any answers on this yet. It is too early in my own analysis to say for sure. These kinds of complex constitutional issues require a lot of thought and discussion. All sides should be heard. I would like to hear what others have to say about this before I start reaching any conclusions. I look forward to hearing your public and private comments. I do, however, have a few preliminary thoughts and predictions to start the discussion. Some are serious, some are just designed to be thought-provoking. You figure out which are which. If you quote me, please remember to include this disclaimer. None of these thoughts are yet firm convictions, nor certain predictions. I may change my mind on all of this as my understanding improves. As a better Ralph than I once said: “A foolish consistency is the hobgoblin of little minds.”

First of all, there is no current demand for this service by the people who need it the most, large corporations. They may never want this, even though such opposition is irrational. It would, after all, reduce litigation costs and make their company more profitable. I am not sure why, and do not think it is as simple as some would say, that they just want to hide their illegal activities. Let me tell you an experience from my 34 years as a litigator that may shed some light on this. This is an experience that I know is common with many litigators. It has to do with the relationship between lawyers and management in most large companies.

Occasionally during a case I would become aware of a business practice in my client corporation that should obviously be changed. Typically it was a business practice that created or at least contributed to the law suit I just defended. The practice was not blatantly illegal, but was a grey-area. The case had shown that it was stupid and should be changed, if for no other reason than to prevent another case like that from happening. Since I had just seen the train wreck in slow motion, and knew full well how much it had cost the company, mostly in my fees, I thought I would help the company to prevent it from happening again. I would make a recommendation as to what should be changed and why. Sometimes I would explain in detail how the change would have prevented the litigation I just finished. I would explain how a change in the business practice would save the company money.

I have done this several times as a litigator at other firms before going to my current firm where I only do e-discovery. Do you know what kind of reaction I got? Nothing. No response at all, except perhaps a bored, polite thanks. I doubt my lessons learned memos were even read. I was, after all, just an unknown, young partner in a Floriduh law firm. I was not pointing out an illegal practice, nor one that had to be changed to avoid illegal activities. I was just pointing out a very ill-advised practice. I have had occasions to point out illegal activities too, in fact this is a more frequent occurrence, and there the response is much different. I was not ignored. I was told this would be changed. Sometimes I was asked to assist in that change. But when it came to recommendations to change something not outright illegal, suggestions to improve business practices, the response was totally different. Crickets. Just crickets. And big yawns. When will lawyers learn their place?

A couple of times I talked to in-house counsel about this, and tried to enlist their support to get the legal, but stupid, business practice changed. They would usually agree with me, wholeheartedly, on the stupid part; after all, they had seen the train wreck too. But they were cynical. They would explain that no one in upper management would listen to them. I am speaking about large corporations, ones with big bureaucracies. It may be better in small companies. In large companies in-house would express frustration. They knew the law department had far less juice than most others in the company. (Only the poor records department, or compliance department, if there is one, typically gets less respect than legal.) Many other parts of a company actually generate revenue, or at least provide cool toys that management wants, such as IT. All Legal does is spend money and aggravate everyone. The department that usually has the most juice in a company is sales, and they are the ones with most of the questionable practices. They are focused on money-making, not abstractions like legal compliance and dispute avoidance. Bottom line, in my experience upper management is not interested in hearing the opinions of lawyers, especially outside counsel, on what they should do differently.

Based on this experience I do not think the idea of lawyers as analytic cops to prevent illegal activities will get much traction with upper management. They do not want a lawyer in the room. It would stifle their creativity, their independent management acumen. They see all lawyers as naysayers, deal-breakers. Listen to lawyers and you'll get paralysis by analysis. No, I do not see any welcome sign appearing for lawyers as big data cops, even if you present chart after chart showing how much data, time, and frustration you will save the company in litigation avoidance. Of course, I never was much of a salesman. I'm just a lawyer who follows the hacker way of management (an iterative, pragmatic, action-based approach, which is the polar opposite of paralysis by analysis). So maybe some vendor salesmen out there will be able to sell the PreSuit concept, but not lawyers, at least not me.


I have tried all year. I have talked about this idea at several events. I have written about it, and created the PreSuit website with details. Do you know how many companies have responded? How many have expressed at least some interest in the possibility of reducing litigation costs by data analytics? Build it and they will come, they say. Not in my experience. I’ve built it and no one has come. There has been no response at all. Weeds are starting to grow on this field of dreams. Oh well. I’m a golfer. I’m used to disappointment.

This is probably just as well because reduction of litigation is not really in the best interests of the legal profession. After all, most law firms make most of their money in litigation. Lawyers should refuse to be big data cops and should let the CEOs carry on in ignorant bliss. Let them continue to function with eyes closed and spawn expensive litigation for corporate counsel to defend and for plaintiff's counsel to get rich on. The litigation system works fine for the lawyers, and for the courts and judges too. Why muck up a big money-generating machine by avoiding the disputes that keep the whole thing running? Especially when no one wants that.

All of the established powers want to leave things just the way they are. Can you imagine the devastating economic impact a fifty percent reduction in litigation would cause on the legal system? On lawyers everywhere? Both the plaintiffs' and defendants' bars? Hundreds of thousands of lawyers and support staff would be out of work. No. This will be ignored, and if not ignored, attacked as radical, new, unproven, and, perhaps most effective of all, as dangerous to privacy rights and due process. The privacy anti-big-brother groups will, for once, join forces with corporate America. Protect the workers, they will say. Unions everywhere will oppose PreSuit. Labor and management will finally have an issue they can agree upon. Only a few high-tech lawyers will oppose them, and they are way outnumbered, especially in the legal profession.

No, I predict this will never be adopted voluntarily, nor will it ever be required by legislation. The politicians of today do not lead, they follow. The only thing I see now that will cause people to want to avoid litigation, to use data analytics to detect and prevent disputes, is the collapse, or near-collapse, of our current system of civil litigation. Lawyers as big data cops will only come out of desperation. This might happen sooner than you think.

There is another way, of course. True leadership could come from the new ranks of corporate America. They could see the enlightened self-interest of PreSuit litigation avoidance. They could understand the value of data analytics and the value of compliance. This may not come from the current generation of old-school leaders; they barely know what data analytics is anyway. But maybe it will come from the next wave of leaders. There is always hope that the necessary changes will be made out of intelligence, not crisis. If history is any guide, this is unlikely, but not impossible.

On the other hand, maybe this is benevolent neglect. Maybe the refusal to adopt these new technologies is for the best. Maybe the power to predict civil wrongs would be abused by a small technical elite of e-discovery lawyer cops. Maybe it would go to their heads, and before you know it, their heavy hands would descend to rob all employees of their last fragments of privacy. Maybe innovation would be stifled by the fear that new creative actions might be seen as a precursor to illegal activities. This chilling effect could cause everyone to just play it safe.

The next generation of Steve Jobs would never arise in conditions such as this. They would instead come from the last remaining countries that still maintained a heavy litigation load. They would arise in cultures that still allow the workforce to do as it damn well pleases, and just let the courts sort it all out later. Legal schmegal, just get the job done. Maybe expensive chaos is the best incubator we have for creative genius? Maybe it is best to keep lawyers out of the boardroom, much less give them a badge and let them police anything. It is better to keep data analytics in Sales where it belongs. Let us know what our customers are doing and thinking, but turn a blind eye to ourselves. That way we can do what we want.

Conclusion

I always end my blogs with a conclusion. But not this time. I have no conclusions yet. This could go either way. The game is too close to call. We are still in the early innings. Who knows? A few star CEOs may come out of the cornfields yet. Then we could find out fast whether PreSuit is a good thing. A few test cases should flush out the facts, good and bad.


Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part Four

August 3, 2014

This is the conclusion of my four part blog: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One and Part Two and Part Three.

Cormack and Grossman’s Conclusions

Gordon Cormack and Maura Grossman have obviously put a tremendous amount of time and effort into this study. In their well-written conclusion they explain why they did it, as well as provide a good summary of their findings:

Because SPL can be ineffective and inefficient, particularly with the low-prevalence collections that are common in ediscovery, disappointment with such tools may lead lawyers to be reluctant to embrace the use of all TAR. Moreover, a number of myths and misconceptions about TAR appear to be closely associated with SPL; notably, that seed and training sets must be randomly selected to avoid “biasing” the learning algorithm.

This study lends no support to the proposition that seed or training sets must be random; to the contrary, keyword seeding, uncertainty sampling, and, in particular, relevance feedback – all non-random methods – improve significantly (P < 0.01) upon random sampling.

While active-learning protocols employing uncertainty sampling are clearly more effective than passive-learning protocols, they tend to focus the reviewer’s attention on marginal rather than legally significant documents. In addition, uncertainty sampling shares a fundamental weakness with passive learning: the need to define and detect when stabilization has occurred, so as to know when to stop training. In the legal context, this decision is fraught with risk, as premature stabilization could result in insufficient recall and undermine an attorney’s certification of having conducted a reasonable search under (U.S.) Federal Rule of Civil Procedure 26(g)(1)(B).

This study highlights an alternative approach – continuous active learning with relevance feedback – that demonstrates superior performance, while avoiding certain problems associated with uncertainty sampling and passive learning. CAL also offers the reviewer the opportunity to quickly identify legally significant documents that can guide litigation strategy, and can readily adapt when new documents are added to the collection, or new issues or interpretations of relevance arise.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR '14, July 6–11, 2014, at pg. 9.
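For readers who want a concrete picture of how the three protocols differ, here is a minimal, purely hypothetical sketch in Python of how SPL, SAL, and CAL each pick the next batch of documents for attorney review. The toy data, the simple logistic model, and every function name here are my own illustrative assumptions; no real TAR product works this simply.

```python
# Hypothetical sketch: how SPL (random), SAL (uncertainty sampling), and
# CAL (relevance feedback) would each choose the next review batch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# A toy "collection": 1,000 documents as 5-dimensional feature vectors,
# with low prevalence (roughly 5% relevant), as is common in e-discovery.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 1.6).astype(int)

def next_batch(model, X, unreviewed, protocol, k=10, rng=rng):
    """Pick the next k documents to review under a given protocol."""
    if protocol == "SPL":                      # passive: random selection
        return rng.choice(unreviewed, size=k, replace=False)
    scores = model.predict_proba(X[unreviewed])[:, 1]
    if protocol == "SAL":                      # active: most uncertain docs
        order = np.argsort(np.abs(scores - 0.5))
    elif protocol == "CAL":                    # active: highest-ranked docs
        order = np.argsort(-scores)
    else:
        raise ValueError(protocol)
    return unreviewed[order[:k]]

# Seed the learner with a few reviewed documents of each class, then ask
# each protocol which documents it would send to the reviewers next.
seed = np.concatenate([np.flatnonzero(y == 1)[:5], np.flatnonzero(y == 0)[:15]])
model = LogisticRegression().fit(X[seed], y[seed])
unreviewed = np.setdiff1d(np.arange(len(X)), seed)

for protocol in ("SPL", "SAL", "CAL"):
    batch = next_batch(model, X, unreviewed, protocol)
    print(protocol, "would review:", sorted(batch.tolist()))
```

In a real CAL workflow this selection step repeats continuously: the reviewers' judgments on each batch retrain the model before the next batch is ranked, which is what lets relevance feedback surface significant documents early.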

The insights and conclusions of Cormack and Grossman are perfectly in accord with my own experience and practice with predictive coding search efforts, both in messy real-world projects and in the four controlled scientific tests I have done over the last several years (only two of which have been reported to date; the fourth is still in progress). I agree that a relevancy approach that emphasizes high-ranked documents for training is one of the most powerful search tools we now have. So too is uncertainty training (mid-ranked documents) when used judiciously, as well as keywords and a number of other methods. All the many tools we have to find both relevant and irrelevant documents for training should be used, depending on the circumstances, including even some random searches.

In my view, we should never use just one method to select documents for machine training and ignore the rest, even when it is a good method, as Cormack and Grossman have shown CAL to be. When the one method selected is the worst of all possible methods, as random search has now been shown to be, then the monomodal approach is a recipe for ineffective, overpriced review.

Why All the Foolishness with Random Search?

As shown in Part One of this article, it is only common sense to use what you know to find training documents, and not rely on the so-called easy way of rolling dice. A random chance approach is essentially a fool's method of search. The search for evidence to do justice is too important to leave to chance. Cormack and Grossman did the legal profession a favor by taking the time to prove the obvious in their study. They showed that even very simplistic multimodal search protocols, CAL and SAL, do better at machine training than monomodal random-only selection.

Information scientists already knew this rather obvious truism: that multimodal is better, that the roulette wheel is not an effective search tool, that random chance just slows things down and is ineffective as a machine training tool. Yet Cormack and Grossman took the time to prove the obvious because the legal profession is being led astray. Many are actually using chance as if it were a valid search method, although perhaps not in the way they describe. As Cormack and Grossman explained in their report:

While it is perhaps no surprise to the information retrieval community that active learning generally outperforms random training [22], this result has not previously been demonstrated for the TAR Problem, and is neither well known nor well accepted within the legal community.

Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR '14, July 6–11, 2014, at pg. 8.

As this quoted comment suggests, everyone in the information science search community knew this already: the random-only approach to search is inartful. So do most lawyers, especially the ones with years of hands-on experience searching for relevant ESI. So why in the world is random-only search still promoted by some software companies and their customers? Is it really to address the so-called problem of “not knowing what you don’t know”? That is the alleged inherent bias of using knowledge to program the AI. The totally random approach is also supposed to prevent overt, intentional bias, where lawyers might try to mis-train the AI search algorithm on purpose. These may be the reasons vendors state, but there are other reasons. There must be, because these excuses do not hold water. This was addressed in Part One of this article.

This bias-avoidance claim must just be an excuse, because there are many better ways to counter the myopic effects of search driven too narrowly. There are many methods and software enhancements that can be used to avoid overlooking important, not-yet-discovered types of relevant documents. For instance, allow machine selection of uncertain documents, as was done here with the SAL protocol. You could also include some random document selection in the mix, and not just make the whole thing random. It is not all or nothing, not logically at least, although perhaps it is as a practical matter for some software.
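To make that mix concrete, here is a small hypothetical sketch of what a single multimodal selection step might look like: a batch blending high-ranked picks, uncertain picks, and a few random picks. The proportions and function names are invented for illustration only; no vendor's actual implementation is implied.

```python
# Hypothetical sketch of a "multimodal" review batch that blends
# relevance-feedback, uncertainty, and random selection, instead of
# relying on any single method. Proportions are arbitrary examples.
import numpy as np

rng = np.random.default_rng(7)

def multimodal_batch(scores, k=10, frac_top=0.6, frac_uncertain=0.3):
    """Blend selection methods: top-ranked, most-uncertain, and random."""
    n_top = int(k * frac_top)
    n_unc = int(k * frac_uncertain)
    n_rand = k - n_top - n_unc                    # remainder goes to random
    ranked = np.argsort(-scores)                  # relevance-feedback picks
    uncertain = np.argsort(np.abs(scores - 0.5))  # uncertainty picks
    chosen = list(ranked[:n_top])
    chosen += [i for i in uncertain if i not in chosen][:n_unc]
    remaining = [i for i in range(len(scores)) if i not in chosen]
    chosen += list(rng.choice(remaining, size=n_rand, replace=False))
    return chosen

scores = rng.random(200)  # pretend classifier scores for 200 documents
batch = multimodal_batch(scores)
print("review next:", [int(i) for i in batch])
```

The random slice here plays exactly the limited role described above: a hedge against “not knowing what you don’t know,” rather than the whole selection strategy.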

My preferred solution to the problem of “not knowing what you don’t know” is to use a combination of all those methods, buttressed by a human searcher who is aware of the limits of knowledge. I mean, really! The whole premise behind using random as the only way to avoid a self-looping trap of “not knowing what you don’t know” assumes that the lawyer searcher is a naive boob or dishonest scoundrel. It assumes lawyers are unaware that they don’t know what they don’t know. Please, we know that perfectly well. All experienced searchers know that. This insight is not the exclusive knowledge of engineers and scientists. Very few attorneys are that arrogant and self-absorbed, or that naive and simplistic in their approach to search.

No, this whole “you must use random-only search to avoid prejudice” line is just a smoke screen to hide the real reason a vendor sells software that only works that way. The real reason is that poor software design decisions were made in a rush to get predictive coding software to market. Software was designed to use only random search because it was easy and quick to build software like that. It allowed for quick implementation of machine training. Such simplistic types of AI software may work better than poorly designed keyword searches, but they are still far inferior to more complex machine training systems, as Cormack and Grossman have now proven. They are inferior to a multimodal approach.

The software vendors with random-only training need to move on. They need to invest in their software to adopt a multimodal approach. In fact, it appears that many have already done so, or are in the process. Yes, such software enhancements take time and money to implement. But we need software search tools for adults. Stop all of the talk about easy buttons. Lawyers are not simpletons. We embrace hard work. We are masters of complexity. Give us choices. Empower the software so that more than one method can be used. Do not force us to use only random selection.

We need software tools that respect the ability of attorneys to perform effective searches for evidence. This is our sandbox. That is what we attorneys do: we search for evidence. The software companies are just here to give us tools, not to tell us how to search. Let us stop the arguments and move on to discuss more sophisticated search methods and tools that empower complex methods.

Attorneys want software with the capacity to integrate all search functions, including random, into a multimodal search process. We do not want software with only one type of machine training ability, be it CAL, SAL, or SPL. We do not want software that can only do one thing, and then have the vendor build a false ideology around that one capacity that says their method is the best and only way. These are legal issues, not software issues.

Attorneys do not want just one search tool; we want a whole tool chest. The marketplace will sort out whose tools are best, and so will science. For vendors to remain competitive they need to sell the biggest tool chest possible, and make sure the tools are well built and perform as advertised. Do not just sell us a screwdriver and tell us we do not need a hammer and pliers too.

Leave the legal arguments as to reasonability and rules to lawyers. Just give us the tools and we lawyers will find the evidence we need. We are experts at evidence detection. It is in our blood. It is part of our proud heritage, our tradition.

Finding evidence is what lawyers do. The law has been doing this for millennia. Think back to the story of the judicial decision of King Solomon. He decided to award the child to the woman he saw cry in response to his sham decision to cut the baby in half. He based his decision on the facts, not ideology. He found the truth in clever ways built around facts, around evidence.

Lawyers always search to find evidence so that justice can be done. The facts matter. It has always been an essential part of what we do. Lawyers always adapt with the times. We always demand and use the best tools available to do our job. Just think of Abraham Lincoln, who readily used the telegraph, the great new high-tech invention of his day. When you want to know the truth of what happened in an event that took place in the recent past, you hire a lawyer, not an engineer or a scientist. That is what we are trained to do. We separate the truth from the lies. With great tools we can and will do an even better job.

Many multimodal based software vendors already understand all of this. They build software that empowers attorneys to leverage their knowledge and skills. That is why we use their tools. Empowerment of attorneys with the latest AI tools empowers our entire system of justice. That is why the latest Cormack Grossman study is so important. That is why I am so passionate about this. Join with us in this. Demand diversity and many capacities in your search software, not just one.

Vendor Wake Up Call and Plea for Change

My basic message to all manufacturers of predictive coding software who use only one type of machine training protocol is to change your ways. I mean no animosity at all. Many of you have great software already; it is just the monomodal method built into your predictive coding features that I challenge. This is a plea for change, for diversity. Sell us a whole tool chest, not just a single, super-simple tool.

Yes, upgrading software takes time and money. But all software companies need to do that anyway to continue to supply tools to lawyers in the Twenty-First Century. Take this message as both a wake-up call and a respectful plea for change.

Dear software designers: please stop trying to make the legal profession look only under the random lamp. Treat your attorney customers like mature professionals who are capable of complex analysis and skills. Do not just assume that we do not know how to perform sophisticated searches. I am not the only attorney with multimodal search skills. I am just the only one with a blog who is passionate about it. There are many out there with very sophisticated skills and knowledge. They may not be as old (I prefer to say experienced) and loud-mouthed (I prefer to say outspoken) as I am, but they are just as skilled. They are just as talented. More importantly, their numbers are growing rapidly. It is a generational thing too, you know. Your next generation of lawyer customers are just as comfortable with computers and big data as I am, maybe more so. Do you really doubt that Adam Losey and his generation will surpass our accomplishments with legal search? I don't.

Dear software designers: please upgrade your software and get with the multi-feature program. Then you will have many new customers, and they will be empowered customers. Do not have the money to do that? Show your CEO this article. Lawyers are not stupid. They are catching on, and they are catching on fast. Moreover, these scientific experiments and reports will keep coming. The truth will come out. Do you want to survive the inevitable vendor closures and consolidation? Then you need to invest in more sophisticated, fully featured software. Your competitors are.

Dear software designers: please abandon the single-feature approach; then you will be welcome in the legal search sandbox. I know that the limited-functionality software that some of you have created is really very good. It already has many other search capacities. It just needs to be better integrated with predictive coding. Apparently some single-feature software already produces decent results, even with the handicap of random-only training. Continue to enhance and build upon your software. Invest in the improvements needed to allow for full multimodal, active, judgmental search.

Conclusion

The random-only search method for selecting predictive coding training documents is ineffective. The same applies to any other training method if it is applied to the exclusion of all others. Any experienced searcher knows this. Software that relies solely on a random-only method should be enhanced and modified to allow attorneys to search where they know. All types of training techniques should be built into AI-based software, not just random. Random may be easy, but it is foolish to search only under the lamp post. It is foolish to turn a blind eye to what you know. Attorneys, insist on having your own flashlight that empowers you to look wherever you want. Shine your light wherever you think appropriate. Use your knowledge. Equip yourself with a full tool chest that allows you to do that.



