This is Part Two of the essay where I go into the specifics of the holding in Dynamo. Please read Part One first: Is the IRS’s Inability to Find Emails the Result of Unethical Behavior? New Opinion by U.S. Tax Court Provides Some Clues – Part One. There I pointed out that the IRS’s attitude towards email discovery, particularly predictive coding, shows that it belongs to the unethical category I call The Clueless. Yes, the IRS is clueless, though not in the affable Pink Panther Inspector Clouseau way, but in the arrogant, know-it-all way of egomaniac types. It is wonderfully personified in Ms. Lerner’s face during her Congressional non-testimony. Like Congress did to Lerner, the Tax Court in Dynamo properly cut down the IRS attorneys and rejected all of the IRS’s nonsensical anti-predictive-coding arguments.
Dynamo Holdings Opinion
Dynamo Holdings, Ltd. v. Commissioner, 143 T.C. No. 9 (Sept. 17, 2014) is a very well written opinion of the United States Tax Court, authored by Judge Ronald L. Buch. I highly recommend that you study and cite this opinion. It is so good that I have decided to devote the rest of this blog to quotation of the portions of it that pertain to predictive coding.
I cannot refrain from providing some comments too, of course; otherwise what would be the point of doing more than providing a link? But for the sake of clarity, and purity, although I will intermix my [side bar comments] along with the quotes, I will do so in blue font and italics, so you will not mistake the court’s words for my own. Yes, I know, that is not how you do things in law review articles, that this is way too creative. So what? It will be a lot more interesting for you to read it that way, and quicker too. So damn the old rules of legal writing; here goes.
[P]etitioners request that the Court let them use predictive coding, a technique prevalent in the technological industry but not yet formally sanctioned by this Court, to efficiently and economically identify the nonprivileged information responsive to respondent’s discovery request. [The Petitioners are the defendants, and the Respondent is the plaintiff, the IRS. The IRS sued to collect tax on certain transfers between business entities, alleging they were disguised gifts to the owners of Dynamo. Seems like a pretty clear cut issue to me, and I cannot see why it was necessary to look at millions of emails to find out what happened. The opinion does not explain that. The merits of the case are not addressed and a detailed proportionality analysis is not provided.]
Respondent [IRS] opposes petitioners’ request to use predictive coding because, he states, predictive coding is an “unproven technology”. Respondent adds that petitioners need not devote their claimed time or expense to this matter because they can simply give him access to all data on the two tapes and preserve the right (through a “clawback agreement”) to later claim that some or all of the data is privileged information not subject to discovery.2 [This is the disingenuous part I referred to previously.]
FN 2 – We understand respondent’s use of the term “clawback agreement” to mean that the disclosure of any privileged information on the tapes would not be a waiver of any privilege that would otherwise apply to that information.
The Court held an evidentiary hearing on respondent’s motion. [It looks like the Tax Court followed Judge David Waxse on this often debated issue of whether an evidentiary hearing should be provided, but only went part way. As you will see, a full scale Daubert type hearing was not provided. Instead, Judge Buch treated the witnesses’ testimony as informal input. Most judges agree that this is appropriate, even if they do not agree with Judge Waxse’s position that Daubert type rulings are appropriate in a mere discovery dispute. Most judges I have talked to think that Evidence Rule 702 does not apply, since there is no evidence or trial, and no presentation to a jury to protect; there is just a dispute as to discovery search methods.]
[W]e hold that petitioners must respond to respondent’s discovery request but that they may use predictive coding in doing so. [The defendants had argued they should not have to search two backup tapes for email at all, and the use of predictive coding was a fallback argument. The decision did not provide any detailed explanation as to necessity, and I get the impression that it was not really pushed, that the main focus of the briefs was on predictive coding.]
Petitioners ask the Court to let them use predictive coding to efficiently and economically help identify the nonprivileged information that is responsive to respondent’s discovery request. More specifically, petitioners want to implement the following procedure to respond to the request: [I have omitted the first four reasons as not terribly interesting.] … 5. Through the implementation of predictive coding, review the remaining data using search criteria that the parties agree upon to ascertain, on the one hand, information that is relevant to the matter, and on the other hand, potentially relevant information that should be withheld as privileged or confidential information.
[T]he Court is not normally in the business of dictating to parties the process that they should use when responding to discovery. [This is a very important point. See Sedona Principle Six. The defendants did not really need the plaintiff’s approval to use predictive coding. Judge Buch is suggesting that this whole permission motion is an unnecessary waste of time, but he will indulge them anyway and address it. I for one am glad that he did.] If our focus were on paper discovery, we would not (for example) be dictating to a party the manner in which it should review documents for responsiveness or privilege, such as whether that review should be done by a paralegal, a junior attorney, or a senior attorney. Yet that is, in essence, what the parties are asking the Court to consider–whether document review should be done by humans or with the assistance of computers. [These are all very good points.] Respondent fears an incomplete response to his discovery. [Parties in litigation always fear that. The U.S. employs a “trust based” system of discovery that relies on the honesty of the parties, and especially relies on the honesty and cooperativeness of the attorneys who conduct the discovery. There are alternatives, like having judges control discovery. Most of the world has such judge controlled discovery, but lawyers in the U.S. do not want that, and it is doubtful that taxpayers would want to fund an alternative court based approach.] If respondent believes that the ultimate discovery response is incomplete and can support that belief, he can file another motion to compel at that time. Nonetheless, because we have not previously addressed the issue of computer-assisted review tools, we will address it here.
Each party called a witness to testify at the evidentiary hearing as an expert. Petitioners’ witness was James R. Scarazzo. Respondent’s witness was Michael L. Wudke. [I added these links. Scarazzo is with the well known vendor, FTI, in Washington D.C., and Wudke is with another vendor in N.Y., Transperfect Legal Solutions. He used to be with Deloitte.] The Court recognized the witnesses as experts on the subject matter at hand. We may accept or reject the findings and conclusions of the experts, according to our own judgment.
Predictive coding is an expedited and efficient form of computer-assisted review that allows parties in litigation to avoid the time and costs associated with the traditional, manual review of large volumes of documents. Through the coding of a relatively small sample of documents, computers can predict the relevance of documents to a discovery request and then identify which documents are and are not responsive. The parties (typically through their counsel or experts) select a sample of documents from the universe of those documents to be searched by using search criteria that may, for example, consist of keywords, dates, custodians, and document types, and the selected documents become the primary data used to cause the predictive coding software to recognize patterns of relevance in the universe of documents under review. The software distinguishes what is relevant, and each iteration produces a smaller relevant subset and a larger set of irrelevant documents that can be used to verify the integrity of the results. [That is not technically correct, at least not in most cases. The relevant subset does not get smaller and smaller. The probability predictions do, however, get more accurate. True predictive coding as used by most vendors today is active machine learning. It ranks the relevance probability of all documents. See, e.g., AI-EnhancedReview.com] Through the use of predictive coding, a party responding to discovery is left with a smaller set of documents to review for privileged information, resulting in a savings both in time and in expense. [Now the judge is back on track and this is an essential truth.] The party responding to the discovery request also is able to give the other party a log detailing the records that were withheld and the reasons they were withheld. [Judge Buch is referring to the privilege log, or in some cases, also a confidentiality log.]
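[For readers who want a concrete feel for what “active machine learning” ranking means, here is a deliberately toy sketch in Python. All documents, labels, and the scoring method are invented for illustration; real predictive coding platforms use far more sophisticated classifiers, but the basic shape is the same: a human codes a small seed set, the software learns word-level relevance signals, and then every document in the universe gets a 0–100% relevance-probability rank.]

```python
import math
from collections import Counter

# Hypothetical mini document universe (real projects involve millions of emails).
documents = [
    "transfer of funds between the two entities",
    "gift to the owners recorded as a loan",
    "lunch menu for the office holiday party",
    "memo on characterizing entity transfers as gifts",
    "fantasy football league standings this week",
]

# Seed set: a human expert codes a small sample (True = relevant).
seed = {0: True, 2: False, 3: True, 4: False}

def learn_weights(seed, documents):
    """Learn naive per-word relevance weights from the coded seed set."""
    rel, irr = Counter(), Counter()
    for i, is_relevant in seed.items():
        (rel if is_relevant else irr).update(documents[i].split())
    # Log-odds style weight with add-one smoothing.
    return {w: math.log((rel[w] + 1) / (irr[w] + 1)) for w in set(rel) | set(irr)}

def score(doc, weights):
    """Sum learned word weights; squash to a 0-100 'probability of relevance'."""
    s = sum(weights.get(w, 0.0) for w in doc.split())
    return 100 / (1 + math.exp(-s))

weights = learn_weights(seed, documents)
# Rank EVERY document in the universe by predicted relevance, highest first.
ranking = sorted(range(len(documents)), key=lambda i: -score(documents[i], weights))
for i in ranking:
    print(f"{score(documents[i], weights):5.1f}%  {documents[i]}")
```

[In a real active-learning loop, the reviewer would next code the documents the model is least certain about, retrain, and repeat until the predictions stabilize.]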
Magistrate Judge Andrew Peck published a leading, oft-cited article on predictive coding which is helpful to our understanding of that method. [Of course Judge Peck’s photograph is not in the opinion.] See Andrew Peck, “Search, Forward: Will Manual Document Review and Keyword Searches be Replaced by Computer-Assisted Coding?”, L. Tech. News (Oct. 2011). The article generally discusses the mechanics of predictive coding and the shortcomings of manual review and of keyword searches. The article explains that predictive coding is a form of “computer-assisted coding”, which in turn means “tools * * * that use sophisticated algorithms to enable the computer to determine relevance, based on interaction with (i.e., training by) a human reviewer.” Id. at 29. The article explains that:
Unlike manual review, where the review is done by the most junior staff, computer-assisted coding involves a senior partner (or team) who review and code a “seed set” of documents. [Judge Peck wrote this back in 2011. I believe his understanding of the “senior partner” level skill needed for training has since evolved. I can elaborate, but it would take us too far astray. Let’s just say what is needed is a single, or at least very small, team of real experts on the relevance facts at issue in the case. See, e.g., Less Is More: When it comes to predictive coding training, the “fewer reviewers the better” – Part One, Part Two, Part Three.] The computer identifies properties of those documents that it uses to code other documents. As the senior reviewer continues to code more sample documents, the computer predicts the reviewer’s coding. (Or, the computer codes some documents and asks the senior reviewer for feedback.)
When the system’s predictions and the reviewer’s coding sufficiently coincide, the system has learned enough to make confident predictions for the remaining documents. Typically, the senior lawyer (or team) needs to review only a few thousand documents to train the computer. [The number depends, of course. For some projects, tens of thousands of documents may be needed over multiple iterations to adequately train the computer. Some projects are much harder than others, despite the skills of the search designers involved. Yes, it takes a great deal of skill and experience to properly design a large predictive coding search and review project. It also takes good predictive coding software that ranks all document probabilities.]
Some systems produce a simple yes/no as to relevance, while others give a relevance score (say, on a 0 to 100 basis) that counsel can use to prioritize review. For example, a score above 50 may produce 97% of the relevant documents, but constitutes only 20% of the entire document set. [All good software today ranks all documents, typically 0 to 100% probability, rather than give a simplistic yes/no ranking.]
Counsel may decide, after sampling and quality control tests, that documents with a score of below 15 are so highly likely to be irrelevant that no further human review is necessary. Counsel can also decide the cost-benefit of manual review of the documents with scores of 15-50. [Typically the cutoff point is way above 15% probability. I have no idea where that number came from. A more logical and frequent number is below 50% probability, meaning the documents are probably not relevant.]
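[Judge Peck’s 0–100 score example reduces to simple cutoff logic. Here is a minimal sketch, using his illustrative thresholds of 15 and 50 and entirely hypothetical document scores, showing how scored documents get routed into review, gray-zone, and presumed-irrelevant buckets:]

```python
# Hypothetical relevance scores (0-100) from a predictive coding run.
scored_docs = {
    "doc_a": 92.0,
    "doc_b": 48.5,
    "doc_c": 12.3,
    "doc_d": 67.8,
    "doc_e": 3.1,
}

REVIEW_CUTOFF = 50.0    # at or above: route to human review
DISCARD_CUTOFF = 15.0   # below: presumed irrelevant (after QC sampling)

to_review = [d for d, s in scored_docs.items() if s >= REVIEW_CUTOFF]
gray_zone = [d for d, s in scored_docs.items() if DISCARD_CUTOFF <= s < REVIEW_CUTOFF]
discarded = [d for d, s in scored_docs.items() if s < DISCARD_CUTOFF]

print(to_review)   # ['doc_a', 'doc_d']
print(gray_zone)   # ['doc_b']
print(discarded)   # ['doc_c', 'doc_e']
```

[The gray-zone bucket is where the cost-benefit judgment call Judge Peck describes actually happens: counsel decides, based on sampling, whether reviewing those mid-score documents is worth the expense.]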
The substance of the article was eventually adopted in an opinion that states: “This judicial opinion now recognizes that computer-assisted review is an acceptable way to search for relevant ESI in appropriate cases.” Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012), adopted sub nom. Moore v. Publicis Groupe SA, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012).
Respondent asserts that predictive coding should not be used in these cases because it is an “unproven technology”. We disagree. [The alternative methods, keyword search and linear human review, are the “unproven technologies,” not predictive coding. Indeed, the science shows that keyword search and linear review are unreliable. See, e.g., LEGAL SEARCH SCIENCE. The new gold standard is active machine learning, aka predictive coding, not hundreds of low paid contract lawyers sitting in cubicles all day.] Although predictive coding is a relatively new technique, and a technique that has yet to be sanctioned (let alone mentioned) by this Court in a published Opinion, the understanding of e-discovery and electronic media has advanced significantly in the last few years, thus making predictive coding more acceptable in the technology industry than it may have previously been. In fact, we understand that the technology industry now considers predictive coding to be widely accepted for limiting e-discovery to relevant documents and effecting discovery of ESI without an undue burden.10 [Excellent point. Plus it is not really all that “new” by today’s standards. It has been around in academic circles since the 1990s.]
FN 10 – Predictive coding is so commonplace in the home and at work that most (if not all) individuals with an email program use predictive coding to filter out spam email. See Moore v. Publicis Groupe, 287 F.R.D. 182, n.2 (S.D.N.Y. 2012), adopted sub nom. Moore v. Publicis Groupe SA, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012).
See Progressive Cas. Ins. Co. v. Delaney, No. 2:11-cv-00678-LRH-PAL, 2014 WL 3563467, at *8 (D. Nev. July 18, 2014) (stating with citations of articles that predictive coding has proved to be an accurate way to comply with a discovery request for ESI and that studies show it is more accurate than human review or keyword searches); F.D.I.C. v. Bowden, No. CV413-245, 2014 WL 2548137, at *13 (S.D. Ga. June 6, 2014) (directing that the parties consider the use of predictive coding). See generally Nicholas Barry, “Man Versus Machine Review: The Showdown between Hordes of Discovery Lawyers and a Computer-Utilizing Predictive-Coding Technology”, 15 Vand. J. Ent. & Tech. L. 343 (2013); Lisa C. Wood, “Predictive Coding Has Arrived”, 28 ABA Antitrust J. 93 (2013). The use of predictive coding also is not unprecedented in Federal litigation. See, e.g., Hinterberger v. Catholic Health Sys., Inc., No. 08-CV-3805(F), 2013 WL 2250603 (W.D.N.Y. May 21, 2013); In Re Actos, No. 6:11-md-2299, 2012 WL 7861249 (W.D. La. July 27, 2012); Moore, 287 F.R.D. 182. Where, as here, petitioners reasonably request to use predictive coding to conserve time and expense, and represent to the Court that they will retain electronic discovery experts to meet with respondent’s counsel or his experts to conduct a search acceptable to respondent, we see no reason petitioners should not be allowed to use predictive coding to respond to respondent’s discovery request. Cf. Progressive Cas. Ins. Co., 2014 WL 3563467, at *10-*12 (declining to allow the use of predictive coding where the record lacked the necessary transparency and cooperation among counsel in the review and production of ESI responsive to the discovery request).
Mr. Scarazzo’s expert testimony supports our opinion. He testified11 that discovery of ESI essentially involves a two-step process.
FN 11 – Mr. Wudke did not persuasively say anything to erode or otherwise undercut Mr. Scarazzo’s testimony. [This is to the credit of Mr. Wudke, an honest expert.]
First, the universe of data is narrowed to data that is potentially responsive to a discovery request. Second, the potentially responsive data is narrowed down to what is in fact responsive. He also testified that he was familiar with both predictive coding and keyword searching, two of the techniques commonly employed in the first step of the two-step discovery process, and he compared those techniques by stating:
[K]ey word searching is, as the name implies, is a list of terms or terminologies that are used that are run against documents in a method of determining or identifying those documents to be reviewed. What predictive coding does is it takes the type of documents, the layout, maybe the whispets of the documents, the format of the documents, and it uses a computer model to predict which documents out of the whole set might contain relevant information to be reviewed.
So one of the things that it does is, by using technology, it eliminates or minimizes some of the human error that might be associated with it. [Note proper use of the word “some,” it eliminates some of the human error. It cannot be eliminated entirely.] Sometimes there’s inefficiencies with key word searching in that it may include or exclude documents, whereas training the model to go back and predict this, we can look at it and use statistics and other sampling information to pull back the information and feel more confident that the information that’s being reviewed is the universe of potentially responsive data.
He concluded that the trend was in favor of predictive coding because it eliminates human error and expedites review. [The modifier “some” to “eliminates human error” is not used here, and thus is a slight overstatement.]
In addition, Mr. Scarazzo opined credibly and without contradiction that petitioners’ approach to responding to respondent’s discovery request is the most reasonable way for petitioners to comply with that request. Petitioners asked Mr. Scarazzo to analyze and to compare the parties’ dueling approaches in the setting of the data to be restored from Dynamo’s backup tapes and to opine on which of the approaches is the most reasonable way for petitioners to comply with respondent’s request. Mr. Scarazzo assumed as to petitioners’ approach that the restored data would be searched using specific criteria, that the resulting information would be reviewed for privilege, and that petitioners would produce the nonprivileged information to respondent. He assumed as to respondent’s approach that the restored data would be searched for privileged information without using specific search criteria, that the resulting privileged information would be removed, and that petitioners would then produce the remaining data to respondent. As to both approaches, he examined certain details of Dynamo’s backup tapes, interviewed the person most knowledgeable on Dynamo’s backup process and the contents of its backup tapes (Dynamo’s director of information technology), and performed certain cost calculations.
Mr. Scarazzo concluded that petitioners’ approach would reduce the universe of information on the tapes using criteria set by the parties to minimize review time and expense and ultimately result in a focused set of information germane to the matter. He estimated that 200,000 to 400,000 documents would be subject to review under petitioners’ approach at a cost of $80,000 to $85,000, while 3.5 million to 7 million documents would be subject to review under respondent’s approach at a cost of $500,000 to $550,000. [This is a huge reduction, and shows the importance of predictive coding. It is a reduction of roughly 3.1 million to 6.8 million documents. That seems credible to me, but the actual cost saving quoted here seems off, or at least incomplete. For instance, if you assume 300,000 documents, the mid-point of the estimated document count using predictive coding, and a projected cost of $85,000, that is only $0.28 per document. That is a valid number for the predictive coding culling process, but not for the actual review of the documents for confidentiality and privilege, and to confirm the privilege predictions.]
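[The arithmetic behind my side bar comment is easy to check. Using only Mr. Scarazzo’s estimates as quoted in the opinion:]

```python
# Mr. Scarazzo's estimates, as quoted in the opinion.
pc_docs_low, pc_docs_high = 200_000, 400_000        # predictive coding approach
pc_cost_high = 85_000
full_docs_low, full_docs_high = 3_500_000, 7_000_000  # respondent's approach

# Midpoint document count under the predictive coding approach.
pc_docs_mid = (pc_docs_low + pc_docs_high) // 2     # 300,000 documents

# Implied per-document cost of the predictive coding culling process.
cost_per_doc = pc_cost_high / pc_docs_mid           # about $0.28

# Size of the reduction in documents subject to review (worst case vs. best case).
docs_avoided_low = full_docs_low - pc_docs_high     # 3.1 million
docs_avoided_high = full_docs_high - pc_docs_low    # 6.8 million

print(f"${cost_per_doc:.2f} per document")
print(f"{docs_avoided_low:,} to {docs_avoided_high:,} fewer documents to review")
```

[That $0.28 figure is plausible for machine culling alone, which is why I say the quoted dollar estimate almost certainly omits the cost of the final human review of the culled set.]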
Our Rules, including our discovery Rules, are to “be construed to secure the just, speedy, and inexpensive determination of every case.” Rule 1(d). Petitioners may use predictive coding in responding to respondent’s discovery request. If, after reviewing the results, respondent believes that the response to the discovery request is incomplete, he may file a motion to compel at that time. See Rule 104(b), (d).