HEARTBLEED: A Lawyer’s Perspective On Cyber Liability and the Biggest Programming Error in History

April 22, 2014

The Negligence of One Open Source Software Contributor Put Millions of Internet Users’ Confidential Data At Risk.

Is Open Source Software Appropriate for Major Security Applications?

In law we usually do not know who is to blame for a mistake. With Heartbleed we know exactly who made the error. It was a German programmer, Robin Seggelmann, who was a PhD student at the time. Assuming he is telling the truth, and this error was a mistake, not an intentional act of sabotage, Seggelmann now apparently has the dubious distinction of having made the biggest computer programming error in history. Some journalists are calling Seggelmann the man who broke the Internet. That is an exaggeration, but I cannot think of a programming error that has ever had a bigger impact.

It was a small oversight. Seggelmann forgot to add a single line of code limiting the size of memory access in a feature called heartbeat (thus the nickname for the bug, Heartbleed). Oops. These things can happen. Easy to understand. Hey, it was, after all, one minute before midnight on New Year’s Eve 2011 when he submitted his work. I kid you not. Seggelmann knew that another expert was going to check his work anyway, so why should he be too concerned? Too bad the supervising expert missed the error too. Oops again. Oh well, that’s open source for you. Seggelmann did not get paid for his work, and there may be no legal consequences for his gift to the world, a gift that many security experts call the worst thing to ever happen to Internet security.
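For readers who want to see the shape of the mistake, here is a deliberately simplified sketch in C. This is not OpenSSL’s actual code; the function names and the simulated block of server memory are my own illustrative inventions. The point is only this: an echo routine that trusts the length claimed by the requester, instead of the length actually received, will happily copy back adjacent memory it was never meant to reveal.

```c
#include <string.h>

/* Simulated server memory: a 5-byte heartbeat payload ("HELLO") sits
 * right next to other in-process data, here a stand-in secret. */
static unsigned char server_memory[] = "HELLOsecret-key-material";

/* Vulnerable echo: trusts whatever payload length the requester claims
 * and copies that many bytes back, with no bounds check. */
size_t echo_vulnerable(unsigned char *out, size_t claimed_len) {
    memcpy(out, server_memory, claimed_len);  /* the missing check */
    return claimed_len;
}

/* Fixed echo: refuses any claimed length beyond the payload actually
 * received, and silently drops the request instead. */
size_t echo_fixed(unsigned char *out, size_t claimed_len,
                  size_t actual_payload_len) {
    if (claimed_len > actual_payload_len)
        return 0;  /* the single check whose absence caused Heartbleed */
    memcpy(out, server_memory, claimed_len);
    return claimed_len;
}
```

In this toy setup, an attacker who sends the 5-byte payload but claims it is 24 bytes long gets back “HELLO” plus the adjacent secret bytes from the vulnerable version; the fixed version returns nothing at all.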

Bruce Schneier, a leading digital security analyst whom I follow, says that “‘Catastrophic’ is the right word. On the scale of one to 10, this is an 11.” Brean, How a programmer’s small error created Heartbleed — a secret back door to supposedly secure sites (National Post, 4/11/14). For more of Schneier’s thoughts on Heartbleed, see the Harvard Business Review interview of him by Scott Berinato.

Rusty Foster wrote in The New Yorker that: “In the worst-case scenario, criminal enterprises, intelligence agencies, and state-sponsored hackers have known about Heartbleed for more than two years, and have used it to systematically access almost everyone’s encrypted data. If this is true, then anyone who does anything on the Internet has likely been affected by the bug.” Forbes cybersecurity columnist Joseph Steinberg wrote, “Some might argue that [Heartbleed] is the worst vulnerability found (at least in terms of its potential impact) since commercial traffic began to flow on the Internet.”

For details on Heartbleed see the Hacker News article HeartBleed Bug Explained – 10 Most Frequently Asked Questions (Hacker News, 4/14/14). Hacker News also includes a good video explanation of the details by Fierce Outlaws.

Bottom line, this little programming error, which some experts refer to as a buffer over-read bug, has huge implications for the security of the Internet, and of Android phones, which use the same protocol. It could affect almost every Internet user in the world, depending on who else knew about the mistake, and for how long. At this point, nobody knows. So far only one 19-year-old Canadian hacker has been arrested for exploiting the bug, and one online railroad payment system in Russia has discovered it had been hacked. If all that were not bad enough, the security firm Mandiant claims to have found evidence of Heartbleed-based attacks on a client’s virtual private network (outside of the Internet). The Heartbleed vulnerability was used to get past the firewall and gain access to the VPN.

The Heartbleed catastrophe has dramatically revealed that our current system of Internet security is primarily based on open source software called OpenSSL. Heartbleed has shown that the whole security of the Internet can depend on one unpaid volunteer like Seggelmann, a lone PhD student in Münster, Germany, who had nothing better to do on New Year’s Eve than finish a freebie software coding project. No doubt he thought it would help his résumé. Bad decision.

Something is terribly wrong when the whole Internet is vulnerable due to the mistake of one math student. This mistake should be a wake-up call to change the system. I conclude this blog with a call for dialogue among security experts, open source experts, white-hat hackers, lawyers, the FBI, consumer advocates, and others, to come up with serious reforms to our current Internet security infrastructure, including especially reforms of OpenSSL as an organization, and to do so by the end of this year. The public trust in the security of the Internet cannot withstand another Heartbleed, especially if it turns out that thousands, perhaps millions, have been injured. (We already have reports that the hack of the Russian railroad website allowed 10,000 credit card accounts to be stolen.)


Seggelmann claims it was just a mistake. In his words, a trivial error. He seems kind of blasé about it in his only interview to date, a short talk with an Australian journalist. The interview is quoted in full below. In fairness, I do not think Seggelmann realized the implications of his error at the time he spoke. (He has stopped talking now, no doubt on advice of legal counsel, and his current employer, Deutsche Telekom.)

Seggelmann denies that he was paid to do this by the NSA or anyone else. Most of the articles written on him to date just take his word for it. Oddly enough, most writers even express sympathy for Seggelmann. The response you see in Huffington Post is typical: “You could blame the author, but he did this work for free, for the community, and with the best of intentions.” Oh really. How do you know that? Because he said so? I am tempted to say something about naive bleeding heart liberals, but have been accused of being one myself, and besides, it is a bad pun, so I will not.

I hope Robin Seggelmann is telling the truth too, but I have been a lawyer far too long to believe anything a person in his position now says. Plus, the circumstance of posting such important code just a minute before New Year’s is clearly indicative of carelessness. If the NSA was behind it, and Bloomberg has reported that the agency has known about the defect for years, then I expect we will learn that soon enough from Snowden. If someone else was, we may never know.

Are Seggelmann or OpenSSL Liable for any Damages that Heartbleed May Cause? 

Even if this was just a mistake, not fraud, major errors like this have consequences, but for whom? Innocent users of websites operating this code for over two years may already have been victimized. We do not know yet what damages this mistake may cause, but we have already had reports of one arrest in Canada, and one theft of credit card numbers in Russia.

That is just the tip of the iceberg. Who will be responsible for the damages caused to so many? Will it be Seggelmann himself, or perhaps the not-for-profit open source group, OpenSSL, that he did this work for as an unpaid volunteer? Although I have no sympathy for a person whose negligence has caused such havoc, I doubt Seggelmann will ever be forced to reimburse anyone for the harm he has caused. 

Perhaps the operators of websites who told their users that their website was secure? Probably not them either, but it may be a closer question. This may be a rare situation where there is no remedy for people damaged by another’s negligence. It will depend on the facts, the as yet unknown details. But, rest assured, much more of the truth will come out in due time. I fully expect some lawyer, somewhere, will file suit when damaged victims appear, or maybe even before.

It will probably be difficult to hold OpenSSL liable for a number of reasons. First of all, who or what is OpenSSL? It appears to be the type of legal entity that we would call in the U.S. an unincorporated non-profit association. It is often treated something like a partnership in U.S. law.

According to the Washington Post, OpenSSL‘s headquarters — to the extent one exists at all — is the home of the group’s only employee, a part-timer at that, located on Sugarloaf Mountain, Maryland. He lives and works amid racks of servers and an industrial-grade Internet connection. Craig Timberg, Heartbleed bug puts the chaotic nature of the Internet under the magnifying glass (Washington Post, 4/9/14).

You cannot make up stuff like this. Truth is always stranger than fiction. Timberg’s article also reports that the software that serves as the backbone for security on the Internet has, due to OpenSSL’s lack of personnel and funds, never been through a security audit, a meticulous process that involves testing the software for vulnerabilities.

Here is what OpenSSL has to say about themselves:

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer … managed by a worldwide community of volunteers that use the Internet to communicate, plan, and develop the OpenSSL toolkit and its related documentation.  . . .  [Y]ou are free to get and use it for commercial and non-commercial purposes.  . . .

The OpenSSL project is volunteer-driven. We do not have any specific requirement for volunteers other than a strong willingness to really contribute while following the projects goal. The OpenSSL project is formed by a development team, which consists of the current active developers and other major contributors. Additionally a subset of the developers form the OpenSSL core team which globally manages the OpenSSL project.

This is the way most open source software works. As open source software goes, OpenSSL is one of the most successful in the world (well, it was, until this whole catastrophe thing). Their product, OpenSSL, was, and still is, the world’s most popular open source cryptographic library. It is used to encrypt most of the traffic on the Internet. About two-thirds of web sites with an “S” at the end of the HTTP address use this freebie software.

Seggelmann had an accomplice of sorts at OpenSSL, although I do not mean to imply any type of conspiracy by use of this word. There is no evidence of that. But I am sure people will look into that possibility, not only government investigators, but also private eyes, especially if and when they are motivated by the kind of mixed greed and fear incentives that only law suits can bring. The appearances now all suggest that the double-checker just happened to miss the trivial error too. OpenSSL, like most good open source projects, has quality control procedures. Proposed code contributions are double checked for mistakes by a senior contributor to OpenSSL before they are accepted.

In this case Seggelmann’s work was checked by Stephen Henson. He is a freelance crypto consultant (his words) based in the U.K. who has a PhD in Mathematics. He is still listed by OpenSSL as one of only four core team members of this open source group. That’s right, four people; one of them the part-time employee who works out of his home on a mountain that serves as the group’s headquarters.

So after two people looked over the new code contribution, way too casually as we now know, the code was approved. Soon thereafter millions of websites started using it and so made themselves susceptible to attack.

I could only find one legal disclaimer on the OpenSSL website, but I bet this will change soon, as the academics in charge of this non-profit association start to wake up to legal realities:


I suspect the enforceability of this language may get tested in some court somewhere in the world, probably in the U.S., but to what end? I doubt OpenSSL has any assets, much less insurance, and, even if you could prove proximate causation, how deep could the pockets of Seggelmann, Henson, and other contributors be? It is more likely the primary targets for restitution will be the companies who used the defective open source software in their servers, thus exposing their users’ confidential information.

Was it negligence for commercial sites to rely on Open Source software?

Was it negligence for commercial sites to rely on Open Source software? Free software donated by all-too-human experts? I do not think so; I will not go that far. But I am willing to bet some lawyer somewhere will go that far. With a mistake this size it is almost inevitable that a class action suit will eventually get filed against somebody.

With the facts I have seen to date I do not think there was adequate notice to the adopters of this free software as to its unreliability to support a cause of action against them for negligent use of OpenSSL. But can the same be said now after the Heartbleed disaster has come to light? Now that we know the mistakes of only two men can put everyone at risk? Maybe not.

Is this the Beginning of the End for Open Source?

This may spell the beginning of the end of widespread commercial adoption of free software, at least for security purposes. After all, it took a for-profit company, Google, to discover the error in the software that many of its own websites were using too. What we do not know is how many hackers, government sponsored or freelance, had previously discovered this mistake. How many had previously exploited this flaw to discover the supposedly secret passwords of the hundreds of millions of persons potentially impacted? What makes this even worse is that we will never know. The biggest programming error in history made it possible for hackers to steal your data without ever leaving a trace.

Many believe the NSA has been exploiting this flaw for years. Who knows what criminal enterprises and foreign governments may have done the same?

There are two rational responses to this open source security scandal. One, stop using the Internet for anything that you want to keep secure, like all financial information. Or two, stop using Open Source, and instead use paid software, software with real safeguards, and with an entity or entities who will stand behind their products, and insurers who will stand behind them.

Society today relies heavily on the Internet. Commerce relies heavily on the Internet. If security is at risk, our current way of life is at risk. It is that important. So the first alternative is out.

This means we have to stop relying on Open Source software for security, at least the way it is run now. We need the safety of big corporations who will have a direct economic incentive to take responsibility for their work. We need paid employees, not volunteers. Ones who will get paid bonuses for doing great work, and fired if they make Heartbleed-type errors that put us all at risk. Either that, or we need major reform of this open source non-profit so that it is accountable. We are way beyond the hobbyist beginnings of the Internet, a time I remember well, and yet we still delegate major Internet responsibilities to small, unregulated, independent associations.

The Heartbleed disaster shows that reliance on open source software for commerce is a risky proposition when it comes to security. It may save users some money, but the risk of error may be too high. Consumers will demand that companies pay up and protect their personal data security. As Chris Williams put it in his article for The Register, OpenSSL Heartbleed: Bloody nose for open-source bleeding hearts:

Open source or open sores? The crux of the matter is that OpenSSL is used by millions and millions of people, even if they don’t know it, but this vital encryption software – used to secure online shopping and banking, mobile apps, VPNs and much more – has a core developer team of just four volunteers who rely on donations and sponsorship. The library code is free and open source, and is used in countless products and programs, but Seggelmann and others point out that the project receives little help.

The use of open source software for everything was a fine experiment, an idealistic one based on the notion that crowdsourcing provided a better alternative to free enterprise, that capitalism could be replaced by a volunteer society of dedicated altruists. Personally, I was always skeptical. I think that competition is a good thing and helps build better products. Heartbleed confirms the skepticism was warranted. Heartbleed has exposed the dark side of crowdsourcing, the inherent weaknesses of volunteerism. The dark side of crowdsourcing is that the crowds will not come, or will stop coming. Here the crowd that checked a critical update to the code consisted of two people only, Robin Seggelmann and Stephen Henson. Two is never a crowd. In fact, a jury may some day be called upon to decide whether it was reasonable to release security code after only two people looked at it.

Robin Seggelmann’s Side of the Story

Ben Grubb is the only journalist so far to get an interview with Robin Seggelmann, published as Man who introduced serious ‘Heartbleed’ security flaw denies he inserted it deliberately (Sydney Morning Herald, 4/11/14). Here are the key excerpts and quotes from Grubb’s article, but I suggest you read the entire article and Grubb’s follow-up articles too. He has an interesting perspective, including criticism of Google’s handling of the release of information about the bug’s discovery.

Dr. Seggelmann, of Münster in Germany, said the bug which introduced the flaw was “unfortunately” missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago. “I was working on improving OpenSSL and submitted numerous bug fixes and added new features,” he said. “In one of the new features, unfortunately, I missed validating a variable containing a length.” After he submitted the code, a reviewer “apparently also didn’t notice the missing validation”, Dr Seggelmann said, “so the error made its way from the development branch into the released version.” Logs show that reviewer was Dr Stephen Henson. Dr Seggelmann said the error he introduced was “quite trivial”, but acknowledged that its impact was “severe”.
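The “missing validation” he describes can be pictured concretely. Paraphrasing the published fix rather than quoting OpenSSL’s literal code (the function name below is my own), a TLS heartbeat record carries one type byte, a two-byte claimed payload length, the payload itself, and at least 16 bytes of random padding. The repaired code simply discards any record whose claimed payload could not fit inside the bytes actually received:

```c
#include <stddef.h>

/* Sketch of the bounds check the fix added (paraphrased): a heartbeat
 * record is 1 type byte + 2 length bytes + payload + at least 16 bytes
 * of padding. If the claimed payload length cannot fit inside the
 * record as actually received, the message should be discarded. */
int heartbeat_length_ok(size_t claimed_payload_len, size_t record_len) {
    return 1 + 2 + claimed_payload_len + 16 <= record_len;
}
```

A request claiming a 64-kilobyte payload inside a tiny record fails this check and is dropped, which is all the missing line ever had to do.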

Conspiracy theories. A number of conspiracy theorists have speculated the bug was inserted maliciously. Dr Seggelmann said it was “tempting” to assume this, especially after the disclosure by Edward Snowden of the spying activities conducted by the US National Security Agency and others. “But in this case, it was a simple programming error in a new feature, which unfortunately occurred in a security relevant area,” he said. “It was not intended at all, especially since I have previously fixed OpenSSL bugs myself, and was trying to contribute to the project.” Despite denying he put the bug into the code intentionally, he said it was entirely possible intelligence agencies had been making use of it over the past two years. “It is a possibility, and it’s always better to assume the worst than best case in security matters, but since I didn’t know [about] the bug until it was released and [I am] not affiliated with any agency, I can only speculate.”

Benefits of discovery. If anything had been demonstrated by the discovery of the bug, Dr Seggelmann said it was awareness that more contributors were needed to keep an eye over code in open source software. “It’s unfortunate that it’s used by millions of people, but only very few actually contribute to it,” he said. “The benefit of open source software is that anyone can review the code in the first place. The more people look at it, the better, especially with a software like OpenSSL.”

Future Heartbleed prevention. Asked how OpenSSL would make sure something like Heartbleed didn’t happen in the future, OpenSSL core team member Ben Laurie, who just happens to work at Google, said no promises could be made. “No one knows how to write completely secure code,” he said, speaking on behalf of OpenSSL. “However, a better job could be done of reducing the risk. For example, code audit, more review of changes. These things take more manpower, which can either come from donated time or donated money.”

Call for Dialogue and Reforms

From Ben Grubb’s article it seems that even OpenSSL agrees that some change is now needed in open source Internet security code. Not surprisingly, their answer is to give them more money, a lot more. According to a NY Times Bits article, OpenSSL has only been able to raise $2,000 per year. Nicole Perlroth, OpenSSL and Linux: A Tale of Two Open-Source Projects (NYT Bits, 4/18/14). Sorry, but that is beyond pathetic. Is a catastrophe really a good fundraising strategy? I think much more fundamental reforms are now required to protect the security of the Internet. Heartbleed has proven that.

I do not have the answers. But I do have a proposal. I call for real dialogues between security experts, and a broad range of other interested parties, to come up with ideas for serious Internet security reform, and then to act on them. This should be completed before the end of this year, 2014.

I suggest that Jim Zemlin, the executive director of the Linux Foundation, home of the highly successful open source project Linux, assume at least part of the lead on this, but not take control, and not limit the agenda to open source. The NYT Bits article by Perlroth suggests that Zemlin, and other open source leaders, think that better funding for OpenSSL is all that is needed to fix the problem and reassure the public after the Heartbleed catastrophe.

I think they are wrong about this. The public does not care at all about the survival of open source. All they care about is the survival and security of the Internet. After all, their bank account and refrigerator are connected to the Internet today; tomorrow it could be their pacemaker. It is a key part of their life. They do not care if Microsoft or other companies profit from keeping it secure. They want their personal data secure from criminals. They do not want their bank accounts drained or their identity stolen. They want security. They want insurance.

I hope that Zemlin and other open source leaders get this, and will consider other, deeper reforms than better open source fundraising. The input of security experts not tied to the open source movement, including its commercial competitors, should be considered. This is not an open source problem, this is a security problem. Opponents of the open source movement should also be invited so all sides can be heard; so too should open source neutrals and outsiders, which, for the record, is my position. I am not an open source fanboy, but, on the other hand, I do use open source software, WordPress, for my blog and most websites. Others who should be invited to the conference include all shades of security experts, white-hat hackers, lawyers, consumer advocates, and others. Even the government, including the FBI and NSA, should be there. They should all be invited to dialogue and come up with serious reforms to our current Internet security infrastructure, including especially reforms of OpenSSL, and to do so by the end of this year.

Like it or not, the views of law and lawyers must be considered. Lawsuits are not the answer. But still, they will come. Proposed reforms should take legal consequences into consideration. Real people, innocent people, may already have been harmed by these security errors. It will take years to find out what damages have been caused by OpenSSL‘s major blooper. Some courts may find that the victims are entitled to restitution.

The Internet is not a no-man’s-land of irresponsibility. It has laws and is subject to laws. I first pointed that out in my 1996 book for MacMillan, Your Cyber Rights and Responsibilities: The Law of the Internet, Chapter 3 of Que’s Special Edition Using the Internet. Persons committing crimes on the Internet must and will be prosecuted no matter where their bodies are located. The same goes for negligent actors, be they human, corporate, or robot. Responsibility for our actions must always be considered in any human endeavor, even online. Not-for-profit status is not a get-out-of-jail-free card. That is one reason why lawyers must have a seat at the table and participate in the Internet security dialogue. Law and cyber liability issues must be considered.

From my perspective as a lawyer I expect that any real reform of Internet security will include the development of new rules. They will likely be focused on mandatory procedures to safeguard quality. The rules will try to prevent the recurrence of another major screwup like Heartbleed. For instance, if there is no bona fide crowdsourcing, say a minimum of 10 to 20 experts reviewing each line of code, not just two, then other safeguards should be required. In that event, perhaps deep-pocket corporations should be hired to audit everything. They should be made to vouch for the code, to stand behind it.

All alternatives should be considered, not just better fundraising and publicity for OpenSSL. (Frankly, I think it is too late for publicity to ever help OpenSSL.) Maybe private enterprise should take over OpenSSL, at least in part? Or maybe some kind of quasi-governmental entity should get involved in Internet security. For example, maybe it should be a part of ICANN’s duties?

Maybe private or public insurance should be required for any software like this, to spread the risk among all users. This may offend open source fanatics, but the reality is, as Heartbleed proves, free is not necessarily a good thing when you are looking for quality. Perhaps providers should pay for at least part of all Open Source. Most are, after all, profiting from it in one way or another. Although I hate to say it, since most politicians are technically clueless, perhaps new laws should also be considered? Laws that create incentives for quality, that impose both carrot and stick consequences. I would put everything on the table for discussion. More of the same is too risky.

I invite this dialogue to begin here and now. Email me or leave a comment below. If that dialogue is already happening elsewhere, please let me know. In any event, feel free to forward this call for dialogue. I will report on it all here, no matter where and how it occurs, so long as it is real dialogue, people really listening to one another, and not just posturing and win/lose debate.

If this happens, I will report on the parts that I can understand, the aspects that are not overly technical, and aspects that are somewhat legal in nature. If some person or organization wants to volunteer to convene a congress to conclude the dialogue and facilitate consensus decisions, then I will assist in publicity and report on that too. I will also be happy to attend, if at all possible. If I have anything to say on the issues, I will say it, and not just report. But for now, aside from the few general suggestions already provided here, my message at this time is to sound an alarm on the need to take action, and to suggest that the action be preceded by dialogue. I would like to know what you think about all of this.

Fears and Loathing (and Pain) in Seattle: a Case Lesson in How NOT to Implement a Litigation Hold and Search for Email – Part Two

April 20, 2014

This is part two of a two-part blog, Fears and Loathing (and Pain) in Seattle. Part one is found here. This is not really a Hunter S. Thompson worthy story, but it is Seattle after all. And the name of the law firm involved here just begs for the analogy.

Before you begin reading part two of this sanctions saga, take a look at the poll results from Part One. If you have not already done so, cast your vote. I promise you it is all anonymous. The last time I checked it was about evenly split on both questions, but not enough readers have voted. So, please join in now.

Seattle Court’s Finding of Bad Faith

Judge Robart in Knickerbocker v. Corinthian Colleges found that there was clear and convincing evidence that the defendant, and its counsel, the Seattle law firm of Payne & Fears, had refused to participate forthrightly in the discovery process and that this refusal constituted or was tantamount to bad faith. He found that they had delayed resolution of Plaintiffs’ claims, expended both the court’s and Plaintiffs’ limited resources on matters ancillary to the merits, and threatened to interfere with the rightful decision of the case.

Judge Robart did not think much of the defendant’s argument against all sanctions because the email was eventually found and produced. Here is his well-written response to this argument (citations removed and emphasis added):

Corinthian argues that, at least with respect to emails, no spoliation has occurred because Corinthian has since recovered and produced all responsive employee emails from the backup tapes. The court notes that this argument contravenes what appears to have been Corinthian’s previous position that the backup tapes were not reasonably accessible. Corinthian’s characterization of the backup tapes has shifted with the winds throughout this litigation, adopting whatever posture is most convenient in the immediate context. (Compare Ruiz Decl. ¶ 17 (“I explained that it was unreasonable and impractical to search them . . . .”) with 12/12/13 Trans. (“It would be perfect. It would be one day, $1,000.”) (Mr. Brown testifying).)

Corinthian cannot have it both ways. If the information on the backup tapes was unavailable within the meaning of Federal Rule of Civil Procedure 26(b)(2)(B) such that Corinthian was not required to recover it, then the Plaintiffs’ deleted emails were, in fact, spoliated evidence. If, as Corinthian’s counsel represented at oral argument, the information on the backup tapes was accessible, then Corinthian had little basis for refusing to search the backup tapes under the parties’ Stipulated Order, no basis for filing a verification with the court affirming that it had searched “all available electronic sources”, and appears to have assumed a misleading stance with Plaintiffs from the beginning.

Corinthian counters that it encountered substantial technical difficulties and costs in retrieving the emails from the backup tapes. But any obstacles Corinthian faced in recovering the emails were the direct result of Corinthian’s inadequate discovery search, deletion of evidence, and lack of candor with both Plaintiffs and with the court. Such obstacles do not transform bad faith into good.

The judge basically accuses the defendant’s law firm, and thus the defendant itself, of not being straight with the court about plaintiffs’ emails and the defendant’s backup tapes.

Throughout the course of the litigation, Corinthian did not once provide a straight-forward explanation of the process and cost of extracting information from the tapes.

Here is how Judge Robart wrapped it all up.

In sum, the court finds, by clear and convincing evidence, that Corinthian’s and Corinthian’s counsel’s lackluster search for documents, failure to implement a litigation hold, deletion of evidence, refusal to cooperate with Plaintiffs in the discovery process (particularly as evidenced by its withholding of information regarding both the backup tapes and its interpretation of the parties’ Stipulated Order), reliance on a recklessly false declaration, shifting litigation positions, and inaccurate representations to the court constitute bad faith or conduct tantamount to bad faith.

Bad Faith Does Not Necessarily Mean Dispositive Sanctions

Even though the court found bad faith, no dispositive sanctions were granted. The adverse inference instruction the plaintiffs had requested was also denied. These harsh sanctions were denied because plaintiffs provided, as the judge put it, zero evidence that any evidence of significance to the case was not produced. They only offered conjecture. As Judge Robart noted: produced documents cannot form the basis for a spoliation instruction.

I am kind of surprised by plaintiffs’ failure to offer up some evidence that relevant evidence was not produced. You would think the plaintiffs would be able to come up with something concerning their own email.

Based on this record, the wise Judge Robart, although obviously upset with defense counsel, wanted the racial discrimination case to be tried on the merits. Besides, perhaps he knew that the emails that were produced were good enough for the plaintiffs to prove their case. Or maybe it was the opposite. The plaintiffs could have had a very weak case. We cannot tell from this opinion. We can only tell that the judge wanted the case tried on the merits, despite the bad faith e-discovery by defendant.

The judge got his message across on his intolerance of bad faith by imposition of the $10,000 fine against the Payne & Fears law firm, and the $25,000 fine against defendant. He also awarded the plaintiffs their reasonable attorney fees and costs incurred in connection with the sanctions motions and duplicative discovery related thereto. Justice was done.

Lessons learned from Knickerbocker

Several lessons can be learned from this case. For one thing, there is the trial lawyer’s lesson. Be careful how you answer questions posed to you by the judge. Be sure you remember these magic words: I don’t know. Restrain the urge to speculate or BS. Just keep to the facts you do know. Ask to get back to the judge on important questions with a supplemental brief or something. This case clearly shows why that is important.

The obvious primary e-discovery lesson is to always implement a litigation hold. The hold should be in writing and there should be follow-up conversations with the custodians on hold and with IT. Auto-deletion programs should be suspended, and, if the size of the case warrants it under proportionality analysis, preservation of ESI by bulk IT collection should be done. In smaller cases, collection may not be required and preservation-in-place may be adequate. There is no one-size-fits-all in e-discovery, although there are plenty of plaintiffs’ experts out there ready to tell a court that every case should be treated like the Taj Mahal. It should not. Efforts should be scaled proportionally. See, e.g., My Basic Plan for Document Reviews: The “Bottom Line Driven” Approach – Part Two (e-Discovery Team, 10/9/13).


The final lesson here pertains to backup tape restoration and search. It is never as easy as you think. Indeed, the tape or tapes may have deteriorated to the point that restoration is impossible. You never know until you try. Once you restore, finding the relevant ESI can also be a challenge. Do not ever say easy peasy when it comes to backup tapes.

This opinion does not really go into the defendant’s search efforts here, merely stating that about 3,000 relevant emails were found from a search of the emails of all employees at one location. That still seems like a low production. But I suspect the “search” consisted of running keyword terms agreed upon with plaintiffs’ counsel, and then manual review of the emails that contained the terms. If they were relevant, they were part of the 3,000 produced. If not, then of course they were not produced. You do not produce irrelevant emails just because they happen to contain an agreed-upon search term. I suspect this kind of procedure was followed here, and if so, the plaintiffs cannot complain about the search efforts made by defense counsel. They were following the parties’ agreed-upon protocol.
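For readers who have never seen such a protocol in operation, the mechanics are simple enough to sketch in a few lines of code. This is purely illustrative, not anything from the actual case; the search terms, the email structure, and the relevance calls below are all hypothetical.

```python
# Illustrative sketch of an agreed-keyword search protocol:
# (1) cull to emails containing at least one agreed term,
# (2) route those hits to manual review,
# (3) produce only the hits a human reviewer marks relevant.

AGREED_TERMS = ["discrimination", "harassment", "retaliation"]  # hypothetical

def hits(emails, terms):
    """Return the emails containing at least one agreed search term."""
    terms = [t.lower() for t in terms]
    return [e for e in emails if any(t in e["body"].lower() for t in terms)]

def produce(emails, terms, is_relevant):
    """Manual review step: produce only the relevant emails among the hits."""
    return [e for e in hits(emails, terms) if is_relevant(e)]

emails = [
    {"id": 1, "body": "Lunch on Friday?"},
    {"id": 2, "body": "Her complaint alleges harassment by a manager."},
    {"id": 3, "body": "The annual harassment training slides are attached."},
]

# A human reviewer supplies the relevance call; here we fake one.
reviewed_relevant = {2}
produced = produce(emails, AGREED_TERMS, lambda e: e["id"] in reviewed_relevant)
print([e["id"] for e in produced])  # -> [2]
```

Note that email 3 contains an agreed term but is not produced, because the reviewer judged it irrelevant. That is the point made above: a keyword hit triggers review, not automatic production.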

We really do not know what that protocol was, but if, as I suspect, it was a keyword search protocol, then, questions of estoppel aside, the issue of whether it was a reasonable effort would depend on whether the common sense dictates for keyword search contained in Judge Peck’s Gross Construction opinion were followed. William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009). Were the witnesses interviewed as to the language used? Were various keywords tested? Was the underlying data studied? The key documents? Or was it all done in the blind, like a child’s game of Go Fish? See Child’s Game of “Go Fish” is a Poor Model for e-Discovery Search (e-Discovery Team blog, 10/4/09).

Tested Keyword Search is Adequate for Most Cases

Keyword search alone, when done according to the standards set forth in Gross Construction, is a fair and adequate effort in most employment discrimination cases like the one in Knickerbocker v. Corinthian Colleges. Most employment cases are not really that complicated. For that reason the key documents needed to try most of these cases are not that difficult to find. Keyword search can and does work in the average case to meet the requirements of both Rule 26(g) and Rule 1 (just, speedy and inexpensive). It apparently worked just fine in Knickerbocker too, that is, after defense counsel stopped their Hunter S. Thompson routines and started playing it straight.

There are some exceptional employment cases where keywords are inadequate. It depends on the case, the type of ESI, the importance of the ESI to the case, and the volume of ESI. But for most employment law cases the tested keyword search method of Gross Construction is reasonable and proportional. More sophisticated search methods, such as my favorite, predictive coding, may be needed in larger, more complex cases in other fields of law, as well as in some class action employment cases. But tested keywords work just fine for the vast majority of small cases that now flood our court system.

Most of these small cases in federal court are employment law cases. It seems like everyone has a beef these days. You would not believe the kind of frivolous cases that we see every day in my firm. Plaintiffs’ counsel are not being selective. Many seem unable to overcome the natural trial lawyer tendency to be overconfident, unable to objectively predict the likely outcome of a potential client’s case. See: Lawyers as Legal-Fortune Tellers (e-Discovery Team, 3/30/14); Goodman-Delahunty, Granhag, Hartwig, Loftus, Insightful or Wishful: Lawyers’ Ability to Predict Case Outcomes (Psychology, Public Policy, and Law, 2010, Vol. 16, No. 2, 133–157).

This limit of predictive coding to larger, more difficult cases will probably change in the future. The ever growing volume and types of ESI may demand the use of predictive coding in more and more cases. That should be made easier as the software costs of using predictive coding come down even further. (For instance, my firm just closed a deal with Kroll Ontrack that lowers the costs for our clients even further. Look for press releases on this soon.) In the future predictive coding will expand to many more types and sizes of cases, but for now, predictive coding remains the exception in e-discovery, not the rule.

If your life revolves around discovery in the big cases, the complex cases with tons of ESI (actually, it’s weightless you know), then you should be using predictive coding all of the time. But for the vast majority of lawyers, dealing with the vast majority of relatively simple cases, it is not needed yet. You might as well hunt mosquitoes with an elephant gun. Keyword search, done right, still works fine for the mosquito cases. Do not misunderstand me, mosquito bites can still hurt, especially if you get hit by too many of these blood suckers. You have to defend your company, but bad faith attempts to avoid discovery are never the way to go. Knickerbocker shows that.


Be straight with your judges. Always tell the truth. Talk about proportionality. They get it. The judges will protect you from the disproportionate use of e-discovery as an extortion tactic. We all know it still goes on. It has been going on for a long time, as my parting string cite below reminds us. Both responding and requesting parties have to conduct discovery in good faith. When they do not, there are plenty of good judges around like James L. Robart to stop the abuse.


Discovery abuse as a weapon. See, e.g.:

  • Advisory Committee Note to the 1983 Amendment of the Federal Rules of Civil Procedure creating Rule 26(g) (“Thus the spirit of the rules is violated when advocates attempt to use discovery tools as tactical weapons rather than to expose the facts and illuminate the issues by overuse of discovery or unnecessary use of defensive weapons or evasive responses.”)
  • Branhaven LLC v. Beeftek, Inc., _F.R.D._, 2013 WL 388429 (D. Md. Jan. 4, 2013) (Rule 26(g) enforced and counsel sanctioned for reckless disregard of their discovery duties.) The Increasing Importance of Rule 26(g) to Control e-Discovery Abuses (e-Discovery Team, 2/24/13).
  • Judge Refers Defendant’s e-Discovery Abuse to U.S. Attorney for Criminal Prosecution of the Company and Four of Its Top Officers (e-Discovery Team, 4/10/11); Philips Electronics N.A. Corp. v. BC Technical, 2011 WL 677462 at *2 (D.Utah, Feb. 16, 2011).
  • Discovery As Abuse, (e-Discovery Team, 1/18/11); Discovery As Abuse, 69 B.U. L. REV. 635 (1989).
  • Kipperman v. Onex Corp., 2009 WL 1473708 (N.D.Ga., 2009) (“The court regards the instant case as a textbook case of discovery abuse.”)
  • Qualcomm Inc. v. Broadcom Corp., No. 05-CV-1958-B(BLM) Doc. 593 (S.D. Cal. Aug. 6, 2007) (Clear and convincing evidence that Qualcomm['s] counsel participated in an organized program of litigation misconduct and concealment throughout discovery, trial, and post-trial)
  • Malautea v. Suzuki Motor Co., Ltd., 987 F.2d 1536, 1542 (11th Cir.1993) (Fed.R.Civ.P. 26(g) was “designed to curb discovery abuse by explicitly encouraging the imposition of sanctions.”)
  • Bondi v. Capital & Fin. Asset Mgmt. S.A., 535 F.3d 87, 97 (2d Cir. 2008) (“This Court . . . has taken note of the pressures upon corporate defendants to settle securities fraud ‘strike suits’ when those settlements are driven, not by the merits of plaintiffs’ claims, but by defendants’ fears of potentially astronomical attorneys’ fees arising from lengthy discovery.”)
  • Spielman v. Merrill Lynch, Pierce, Fenner & Smith, Inc., 332 F.3d 116, 122-23 (2d Cir. 2003) (“The PSLRA afforded district courts the opportunity in the early stages of litigation to make an initial assessment of the legal sufficiency of any claims before defendants were forced to incur considerable legal fees or, worse, settle claims regardless of their merit in order to avoid the risk of expensive, protracted securities litigation.”)
  • Lander v. Hartford Life & Annuity Ins. Co., 251 F.3d 101, 107 (2d Cir. 2001) (“Because of the expense of defending such suits, issuers were often forced to settle, regardless of the merits of the action. PSLRA addressed these concerns by instituting . . . a mandatory stay of discovery so that district courts could first determine the legal sufficiency of the claims in all securities class actions.” (citations omitted))
  • Kassover v. UBS A.G., 08 Civ. 2753, 2008 WL 5395942 at *3 (S.D.N.Y. Dec. 19, 2008) (“PSLRA’s discovery stay provision was promulgated to prevent conduct such as: (a) filing frivolous securities fraud claims, with an expectation that the high cost of responding to discovery demands will coerce defendants to settle; and (b) embarking on a ‘fishing expedition’ or ‘abusive strike suit’ litigation.”)

Fears and Loathing (and Pain) in Seattle: a Case Lesson in How NOT to Preserve and Produce Email – Part One

April 13, 2014

A recent case in Seattle provides a textbook example of how not to do e-discovery. It concludes with a sanctions order against the defendant, and the defendant’s law firm, Payne & Fears LLP. The law firm was fined $10,000, payable to the court, due to the conduct of two of its attorneys. The defendant, Corinthian Colleges, was fined another $25,000. Knickerbocker v. Corinthian Colleges, Case No. C12-1142JLR (W.D. Wash. April 7, 2014).

How does a sanctions disaster like this come to pass? And in such a laid-back city like Seattle? The court awarded sanctions because of a failure to preserve and a subsequent delay in producing evidence. Note that I did not say sanctions for a loss of evidence, only delay. The ESI at issue in the sanctions motions was the email of three of defendant’s former employees. They were the plaintiffs who were now suing Corinthian for alleged racial discrimination, harassment, and retaliation. Corinthian’s attorneys eventually found and produced the email from backup tapes, but it was an ordeal to get there. The court got the distinct impression that the attorneys involved for the employer were not playing straight, that they were attempting to hide the ball.

Digging a little deeper into this 27-page Order, which is replete with facts, as all good sanctions orders are, we see a series of bad decisions. The decisions were all made by the attorneys for the defense. It would take me another 27 pages to review them all in detail, so I will just examine the segments that seem to me to have the most instructional value. (It is still going to be a two-part blog, even without these details!)

After you hear the story, and hopefully also read the opinion itself, you be the judge as to whether these bad decisions were just incompetence on the part of these attorneys, or actual bad faith. I will include a poll at the end of each segment where you can anonymously vote. The judge clearly thought it was bad faith by Pain & Fears’ attorneys. But who knows for sure (aside from the attorneys themselves)? It is often hard to tell the difference between dishonesty and incompetence, especially if all you know about the case is what you read in one court order.

Judge’s Quiz of Defense Counsel Uncovers a Complete Failure to Impose a Litigation Hold

The first bad mistake made in this case was the defendant’s failure to issue a litigation hold. The specific facts surrounding this failure are interesting, and so too is the way they came out. Maybe they tell a story of stupidity, or maybe intent to hide? Again, you be the judge. Personally, on this first error at least, I am inclined to think it was just a lack of knowledge and understanding. But I readily admit I could be wrong. Maybe they did not issue a hold because they wanted incriminating email to be destroyed. Naturally, that is what the plaintiffs alleged.

The facts of the no-holds bar (not a typo, think about it) were clarified during an evidentiary hearing on the plaintiffs’ first motion for sanctions. Most of the clarification was attained by the judge’s questioning of the Payne & Fears attorneys themselves, and not the witnesses they had brought with them.

When a district court judge decides to quiz legal counsel about something he is curious about, you had better respond fully and truthfully. This is the same judge who is going to decide whether to sanction your client for misconduct. The compulsion to speak is especially strong in a situation like this where the other side is urging a sanction against your client for hiding the truth. Do you want to dig your hole even deeper to be buried in? No. The Payne & Fears attorney at the hearing, Jeffrey Brown, had no choice but to answer the judge as best he could.

The District Court Judge, James L. Robart, a wise and sage judge if there ever was one, cut to the chase and asked Jeffrey about the litigation holds. No doubt Jeffrey was nervous when he heard the question directed to him. He was looking up at Judge Robart in his black robe several feet above him on his bench. I am pretty sure the Judge was not smiling like we see him in this photo. Jeffrey knew that the Judge would not like the answer he was about to give. He was right about that. Here is Judge Robart’s account of this exchange from the opinion:

At the hearing, Jeffrey Brown, counsel for Corinthian, admitted that, although Corinthian had issued litigation holds in previous litigations, Corinthian did not issue a litigation hold with respect to this case. (12/12/13 Trans. at 4-5.) Mr. Brown represented that, instead of issuing a company-wide notice, Corinthian had hand-selected certain employees and requested that they retrieve and retain relevant documents. (Id. at 5-7, 12, 18.)

Note the use of the word “represented” in Judge Robart’s account of Jeffrey’s response. It is a term of art, the kind of word you use when describing fraud. This is still early in the opinion, but experienced case law readers know this is a set-up word for things to come.

As we will see later when discussing other e-discovery blunders in this case, Mr. Brown quickly became a favorite target of Judge Robart’s questioning. His opinion is filled with quotes from poor Jeffrey. No doubt he acquired a major headache in that courtroom.


Here is Judge Robart’s summary of the defendant’s preservation mishaps (emphasis added and numerous citations to the record omitted):

Corinthian has issued litigation holds in previous actions. Nonetheless, for this case, Corinthian only requested that a subset of employees, whom it deemed to be “key” witnesses, search for and save relevant documents. Testimony by some of these “key” witnesses, however, casts doubt on Corinthian’s claim that these employees in fact performed any—let alone a thorough—search for relevant documents. Specifically, Ms. Austin and Ms. Paulino testified that they did not search for documents relevant to the litigation, and Ms. Givens and Ms. Phillips testified that they did not recall searching for documents.5 Such self-selection of a limited pool of discovery materials, combined with doubt as to what searches, if any, were performed of this pool of materials, gives the court no confidence in the quality of Corinthian’s discovery production. Yet, due to the lack of a litigation hold, it is not clear that the current additional discovery period, instituted 18 months after Plaintiffs filed suit, can remedy this deficiency.

So much for the corporation’s litigation hold procedures in this case. Turns out that the deposition testimony contradicted the representations made by Mr. Brown to Judge Robart as to the hand-selection of certain employees and the request that they retrieve and retain relevant documents. These same witnesses testified that no one told them to search for anything, and they had not made a search. This suggests that the representations made to Judge Robart were not accurate. In fact, they seem downright misleading.

Not good. Not good at all. Perhaps now you understand the fines against the attorneys. But wait, there is more. An even bigger mistake was allowing all of plaintiffs’ emails to be deleted from the corporation’s Exchange server after the case started. No doubt the judge was beginning to believe the plaintiffs’ allegations that the employer allowed the plaintiffs’ emails to be destroyed on purpose because they knew the content would harm the defense. But before we move on, what do you think? Did the employer not institute a litigation hold on purpose so that they could get away with destroying incriminating evidence, or is this just another case of lawyer incompetence?

More Drama Concerning the Defendant’s Destruction of Email

The facts on this alleged destruction were cloudy and contradictory at the time of the first hearing. So, once again, what did Judge Robart do? That’s right, he turned to defense counsel and asked him if it was true, had his client deleted all of the plaintiffs’ email as plaintiffs allege. Here are Judge Robart’s later findings on this issue, which, once again, rely extensively on the responses of Mr. Brown to his impromptu questions:

First, Corinthian’s counsel, Mr. Brown, conceded that Plaintiffs’ email boxes were in fact deleted pursuant to this practice. (“With respect to the plaintiffs’ email boxes, no, your Honor. Those emails exist on the backup tapes, but those email boxes were deleted per the policy that Mr. Banash explained to you.”) Mr. Brown also conceded that the deletion of Plaintiffs’ emails occurred after Corinthian received Plaintiffs’ EEOC notices:

If you put an order in that says delete the mailbox in 30 days, should somebody have spotted the EEOC charges and made the connection and gotten around and suspended that? That’s something I can’t argue. That is something that we have looked at. Yeah, we have to find a way to fix that system, your Honor. I cannot sit here and tell you that is the best way to do things.  

Mr. Brown also explained that no other document destruction program was in place at Corinthian. Although Plaintiffs pointed to a Corinthian document purporting to establish a six-month automatic email deletion policy, the policy was apparently never implemented. Plaintiffs claim that Mr. Ruiz represented that the reason Corinthian had produced so few emails was due to the alleged six-month deletion policy; Corinthian disputes that Mr. Ruiz ever made such a representation.  

Another big mistake was conceded by Mr. Brown. The plaintiffs’ email was deleted after Corinthian received Plaintiffs’ EEOC notices. That means after a duty to preserve was triggered. He has just admitted a breach of duty. He has admitted spoliation. But he has a fallback argument, only that argument puts the credibility of other Payne & Fears attorneys at issue.

Mr. Brown was now arguing that the destruction of the email after notice was a harmless breach of duty, because, after all, there is a complete backup copy of all of the plaintiffs’ email. Did Mr. Brown forget that other attorneys in his firm had previously said they could not produce plaintiffs’ emails because the employer did not have them? They claimed that the emails were all destroyed in the normal course, and so the destruction was protected from sanctions by Rule 37(e). After the judge put Mr. Brown on the spot, and grilled him on his poor job of preservation, Mr. Brown responded by saying how easy it would be to just produce the emails off of the backup tapes. That reminds me of one of my favorite deposition questions: “Were you lying then, or are you lying now?”

Judge James Robart, who has been a lawyer since he graduated from Georgetown in 1973, and a District Court Judge since his nomination by President Bush in 2004, knows a fair amount about IT. He was a member of the Ninth Circuit IT Committee from 2007 to 2009, and chief judge in 2008. I suspect Judge Robart knew a lot more about technology than any of the attorneys in this case, although they apparently did not know that.

Judge Robart analyzed the different things that he was being told by the attorneys for the defendant and reached this considered opinion:

Corinthian’s attempt to influence (if not misdirect) the court with such unsubstantiated information falls below acceptable standards of professional conduct.

In case you did not know it, that’s polite judge-speak for much stronger thoughts and condemnations, including my favorite deposition question.

Back to defense counsel’s fallback argument, that no harm was done because of the backup tapes. Judge Robart deals with this position in his sanctions Order by again relying on counsel’s own words:

Specifically, Mr. Brown averred that:

First of all, there are backup tapes that have every single email that has been referred to on them. Every single day that Corinthian has operated in Bremerton there are backup tapes with those e-mails. Everything we are talking about in this motion has been preserved and is available.

The problem with this position, as the judge knew full well from study of the record, is that defendant never searched these tapes for the email. They never searched, even though another Pain & Fears lawyer had previously filed a certification with the court stating that:

Corinthian, at its own expense, conducted a full and complete search for all documents responsive to Plaintiffs’ Requests for Production Nos. 1, 2, 3, 4, 5, and 27 (subject to any and all objections and limitations previously agreed to by the parties) on all available electronic sources and/or servers . . . .

Judge Robart later found that this verification was, in the judge’s words: incorrect in that Corinthian’s backup tapes were not searched, despite the fact that Plaintiffs had not agreed that the backup tapes were not “available.” 

Obviously the lawyers were getting a little too cute with the use of the word available to try to justify their hiding the fact that they had these emails on electronic sources, namely backup tapes, but did not search them.

This kind of squirmy behavior never goes over well with a judge, any judge, but especially not a U.S. District Court Judge like Judge Robart. He had long experience with complex litigation in Seattle, coupled with recent experience with IT. Believe me, a judge like that can handle the truth, the whole truth, and nothing but the truth. So that is what you had better give him, and you had better give it to him straight.

Here are Judge Robart’s words, where he once again quotes the words of Mr. Brown. Note how he begins each paragraph by invoking his name. (Again I am deleting the many citations to the record.)

Mr. Brown took the position that the Stipulated Order’s requirement for a “full and complete search” that “shall include documents on backup servers” did not extend to documents kept on backup tapes, because “[t]he tape is an entirely different ballgame from the servers.” (testimony by Mr. Banash confirming that Corinthian has both backup tapes and backup servers).) Plaintiffs’ counsel, on the other hand, claimed that they were not aware that Corinthian intended to draw a “distinction between backup servers and backup tapes,” and that they understood the Stipulated Order to refer “generally to backup media.” Similarly, with respect to the Verification of Compliance, which affirmed that Corinthian had searched “all available electronic sources and/or servers,” Plaintiffs maintained that they had not agreed that the backup tapes were not “available.”

Mr. Brown emphasized multiple times that accessing the information on the backup tapes would solve the spoliation problem facing the court: (paras added for ease of reading)

MR. BROWN: And the answer is lying right under our noses. We brought it up in April. We can go get those tapes. If there is something supposedly in Michelle Paulino’s mailbox, we can look at it. . . . We can do that. A thousand dollars a day and it is here and we are done.

THE COURT: Is it your position, sir, that that is not an intentional deletion of information once you are on notice of litigation?

MR. BROWN: Your Honor, number one, that is my position, because the emails exist on the backup tapes. And we can get them.  . . .  I know those things are still there. I can tell you, it is about a thousand dollars per day of recovery time. It can be done. It is sitting there.

Mr. Brown represented to Judge Robart during the hearing that the cost of retrieval of plaintiffs’ email from backup tapes would not be as expensive as Judge Robart feared. I wonder how Mr. Brown knew that, or thought he knew that? I suspect he just got carried away in trying to defend his client. Anyway, here is more Q&A between Mr. Brown and Judge Robart on the topic. You be the judge.

THE COURT: You are sitting here telling me over and over and over again, we have the backup tapes, it solves the entire problem. I don’t know if we can go back to 2004, but that would be, what, 365 days a year times ten years at $1,000. . . . Corinthian is going to have a very large bill.

MR. BROWN: The one day is a snapshot, though. For example, if you asked how would the world be different if we had sent a litigation hold that stopped the deletion of one of the plaintiffs’ emails, all we need—you can look at the termination date and get the backup tape for that date, and now you have got their email box exactly how it existed as of the date of their termination. It would be perfect. It would be one day, $1,000. If you wanted that for ten employees, then you get to $10,000. You don’t have to do it as separate days, because it is cumulative.

THE COURT: I think what I want it for, sir, is every employee of Corinthian, because I have no confidence in your search. . . .

MR. BROWN: . . . With respect to every employee, if you are looking at every employee at the Bremerton campus where this occurred, we can do that, your Honor. It can be done. We are not going to have to pay individually for each employee. If the backup tape for one day—That is across the whole campus. We can capture everybody that way. . . .

Judge Robart decided to accept the representations of Mr. Brown that recovering data from the backup tapes “would be perfect. It would be one day, $1,000.” For that reason Judge Robart deferred ruling on Plaintiffs’ first motion for sanctions. Instead, he issued an order compelling defendant to “retrieve from the backup tapes all employee email accounts as they existed on or near the date that the last Plaintiff’s employment was terminated” and to “search the retrieved information according to the terms articulated in the parties’ stipulated order.” Because the parties’ trial date of January 6, 2014, was looming in less than one month, the court set a deadline of December 20, 2013.
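The arithmetic behind that exchange is worth making explicit. At the $1,000-per-tape-day figure Mr. Brown quoted, the two scenarios discussed at the hearing diverge wildly. The dollar figures come from the hearing transcript quoted above; the calculation itself is just illustration:

```python
COST_PER_TAPE_DAY = 1_000  # dollars, per Mr. Brown's representation

# Judge Robart's worst case: every daily backup tape going back ten years.
worst_case = 365 * 10 * COST_PER_TAPE_DAY

# What the court ordered: one snapshot day, which Mr. Brown said captures
# every mailbox on the campus at once ("You don't have to do it as
# separate days, because it is cumulative.").
one_snapshot = 1 * COST_PER_TAPE_DAY

print(worst_case, one_snapshot)  # -> 3650000 1000
```

A $3.65 million worst case against a $1,000 snapshot: no wonder the judge pressed counsel so hard on which scenario was real before accepting the representation.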

As it turned out, and, as I dare say, could not have been overly surprising to the IT-sophisticated Judge Robart, the defendant’s backup tape recovery process proved, in Judge Robart’s words, considerably less straightforward than Mr. Brown represented at oral argument. I just love the mastery of understatement that most judges seem to have. Judge Robart’s footnote four spells out the details. The first vendor defendant hired was unable to retrieve any information from the backup tapes. Oops! The tapes were then returned to defendant, who somehow managed to recover and produce some emails itself. Defendant then hired a second vendor to complete recovery of the emails. After still more delay, this vendor was able to retrieve a subset of the remaining emails. The tapes were once again returned to defendant, who eventually somehow supposedly recovered and produced the rest of the missing emails.

Needless to say, the December 20, 2013, production deadline was not met. In fact, defendant was seven weeks late, and even then, only produced about 3,000 additional emails from the backup tapes. Still, that looked good compared to defendant’s prior ESI search efforts, where it had only produced 110 email strings and 1,270 pages of other documents. After the late production, the court reopened discovery and extended the trial date until November 3, 2014.

The next hearing the court held on e-discovery in this case was on a second motion for sanctions filed by plaintiffs just before trial. It was not heard until March 21, 2014. It was another one of those last-minute attempts to win a case on e-discovery failures instead of the merits.

But before we go to the grand finale, which I warn you is not as grand as you might expect, here is your second chance to express your opinion. Do you think the defendant intentionally withheld production of the emails? Or do you think it was all just accident and confusion? Maybe the result of legal recreational use or something. This is Seattle after all.

So straighten up and vote.

To be continued in next week’s blog . . .

Lawyers as Legal-Fortune Tellers

March 30, 2014

Most lawyers predict the future as part of their everyday work. The best lawyers are very good at it. Indeed, the top lawyers I have worked with have all been great prognosticators, at least when it comes to predicting litigation outcomes. That is why concepts of predictive coding come naturally to them. Since they already do probability analysis as part of their work, it is easy for them to accept the notion that new software can extend these forward-looking skills. They are not startled by the ability of predictive analytics to discover evidence.

Although these lawyers will not know how to operate predictive coding software, nor understand the many intricacies of computer assisted search, they will quickly understand the concepts of probability-based relevance predictions. This deep intuitive ability is found in all good transactional and litigation attorneys. Someday soon AI and data analytics, perhaps in the form of Watson as a lawyer, will significantly enhance all lawyers' abilities. It will not only help them to find relevant evidence, but also to predict case outcomes.

Transactional Lawyers and Future Projections

A good contract lawyer is also a good prognosticator. They try to imagine all of the problems and opportunities that may arise from a new deal. The lawyer will help the parties foresee issues that he or she thinks are likely to arise in the future. That way the parties can address the issues in advance. The lawyers include provisions in the agreement to implement the parties' intent. They predict events that may, or may not, ever come to pass. Even if it is a new type of deal, one that has never been done before, they try to predict what is likely to happen. I recall doing this when I helped create some of the first Internet hosting agreements in the mid-nineties. (We started off making them like shopping center agreements and used real estate analogies.)

Contract lawyers become very good at predicting the many things that might go wrong and providing specific remedies for them. Many of the contractual provisions based on possible future events are fairly routine. For instance, what happens if a party does not make a payment? Others are creative and pertain to conduct specific to the agreement. What happens if a party loses shared information? What disclosure obligations are triggered? What other curative actions? Who pays for it?

Most transactional lawyers focus on the worst case scenario. They write contract provisions that try to protect their clients from major damages if bad things happen. Many become very good at that. Litigators like myself come to appreciate that soothsaying gift. When a deal goes sour, and a litigator is then brought in to try to resolve a dispute, the first thing we do is read the contract. If we find a contract provision that is right on point, our job is much easier.

Litigation Lawyers and Future Projections

In litigation the prediction of probable outcomes is a constant factor in all case analysis. Every litigator has to dabble in this kind of future prediction. The most basic prediction, of course, is whether you will win the case. What are the probabilities of prevailing? What will have to happen in order to win? How much can you win or lose? What is the probable damage range? What is the current settlement value of the case? If we prevail on this motion, how will that impact settlement value? What would be the best time for mediation? How will the judge rule on various issues? How will opposing counsel respond to this approach? How will this witness hold up under the pressure of deposition?

All litigation necessarily involves near constant probability analysis. The best litigators in the world become very good at this kind of future projection. They can very accurately predict what is likely to happen in a case. Not only that, they can provide pretty good probability ranges for each major future event. It becomes a part of their everyday practice.

Clients rely on this analysis and come to expect their lawyers to be able to accurately predict what will happen in court. Trust develops as they see their lawyer’s predictions come true. Eventually clients become true believers in their legal oracles. They even accept it when they are told from time to time that no reasonable prediction is possible, that anything might happen. They also come to accept that there are no certainties. They get used to probability ranges, and so do the soothsaying lawyers.

Good lawyers quickly understand the limits of all predictions. A successful lawyer will never say that anything will certainly happen, well almost never. Instead the lawyer almost always speaks in terms of probabilities. For instance, they rarely say we cannot lose this motion, only that loss is highly unlikely. That way they are almost never wrong.

Insightful or Wishful

Professor Jane Goodman-Delahunty, JD, PhD, Australia.

An international team of law professors has looked into the legal-fortune-telling abilities of lawyers in litigation. Goodman-Delahunty, Granhag, Hartwig & Loftus, Insightful or Wishful: Lawyers' Ability to Predict Case Outcomes (Psychology, Public Policy, and Law, 2010, Vol. 16, No. 2, 133–157). This is the introduction to their study:

In the course of regular legal practice, judgments and meta-judgments of future goals are an important aspect of a wide range of litigation-related decisions. (English & Sales, 2005). From the moment when a client first consults a lawyer until the matter is resolved, lawyers must establish goals in a case and estimate the likelihood that they can achieve these goals. The vast majority of lawyers recognize that prospective judgments are integral features of their professional expertise. For example, a survey of Dutch criminal lawyers acknowledged that 90% made predictions of this nature in some or all of their real-life cases (Malsch, 1990). The central question addressed in the present study was the degree of accuracy in lawyers’ forecasts of case outcomes. To explore this question, we contacted a broad national sample of U.S. lawyers who predicted their chances of achieving their goals in real-life cases and provided confidence ratings in their predictions.

Assoc. Professor Maria Hartwig, PhD, Psychology & Law, Sweden

Prediction of success is of paramount importance in the system for several reasons. In the course of litigation, lawyers constantly make strategic decisions and/or advise their clients on the basis of these predictions. Attorneys make decisions about future courses of action, such as whether to take on a new client, the value of a case, whether to advise the client to enter into settlement negotiations, and whether to accept a settlement offer or proceed to trial. Thus, these professional judgments by lawyers are influential in shaping the cases and the mechanisms selected to resolve them. Clients’ choices and outcomes therefore depend on the abilities of their counsel to make reasonably accurate forecasts concerning case outcomes. For example, in civil cases, after depositions of key witnesses or at the close of discovery, the parties reassess the likelihood of success at trial in light of the impact of these events.

Professor Pär Anders Granhag, PhD, Psychology, Sweden

In summary, whether lawyers can accurately predict the outcome of a case has practical consequences in at least three areas: (a) the lawyer’s professional reputation and financial success; (b) the satisfaction of the client; and (c) the justice environment as a whole. Litigation is risky, time consuming, and expensive. The consequences of judgmental errors by lawyers can be costly for lawyers and their clients, as well as an unnecessary burden on an already overloaded justice system. Ultimately, a lawyer’s repute is based on successful calculations of case outcome. A lawyer who advises clients to pursue litigation without delivering a successful outcome will not have clients for long. Likewise, a client will be most satisfied with a lawyer who is accurate and realistic when detailing the potential outcomes of the case. At the end of the day, it is the accurate predictions of the lawyer that enable the justice system to function smoothly without the load of cases that were not appropriately vetted by the lawyers.

Elizabeth F. Loftus, Professor of Social Ecology, Law and Cognitive Science (PhD, Stanford University), California

The law professors found that a lawyer’s prognostication ability does not necessarily come from experience. This kind of legal-fortune telling appears to be a combination of special gift, knowledge, and learned skills. It certainly requires more than just age and experience.

The law professors' survey showed two things: (1) lawyers as a whole tend to be overconfident in their predictions of favorable outcomes, and (2) experienced lawyers do not on average do a better job of predicting outcomes than inexperienced lawyers. Insightful or Wishful ("Overall, lawyers were over-confident in their predictions, and calibration did not increase with years of legal experience"). The professors also found that women lawyers tend to be better at future projection than men, as did specialists over generalists.

Experience should make lawyers better prognosticators, but it does not. Their egos get in the way. The average lawyer does not get better at predicting case outcomes with experience because they become overconfident with experience. They remember the victories and rationalize the losses. They delude themselves into thinking that they can control things more than they can.
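To make the study's notion of calibration concrete, here is a small illustrative sketch. All of the numbers are invented for illustration; they are not data from Insightful or Wishful. Calibration simply compares a lawyer's average stated confidence with the rate at which the predicted outcomes actually came true; a positive gap is overconfidence.

```python
# Hypothetical data: (stated confidence of winning, did the lawyer win?).
# Invented for illustration only, not taken from the study.
predictions = [
    (0.90, True), (0.85, False), (0.80, True), (0.75, False),
    (0.95, True), (0.70, False), (0.85, True), (0.90, False),
]

# Average of what the lawyer claimed versus what actually happened.
mean_confidence = sum(conf for conf, _ in predictions) / len(predictions)
win_rate = sum(1 for _, won in predictions if won) / len(predictions)
overconfidence_gap = mean_confidence - win_rate

print(f"mean stated confidence: {mean_confidence:.2f}")
print(f"actual win rate:        {win_rate:.2f}")
print(f"overconfidence gap:     {overconfidence_gap:.2f}")
```

A perfectly calibrated lawyer would show a gap near zero; the study's finding is that, for most lawyers, the gap is positive and does not shrink with years of experience.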

I have seen this happen in legal practice time and time again. Indeed, as a young lawyer I remember surprising senior attorneys I went up against. They were confident, but wrong. My son is now having the same experience. The best lawyers do not fall into the overconfidence trap with age. They encourage their team to point out issues and problems, and to challenge them on strategy and analysis. The best lawyers I know tend to err on the side of caution. They are typically glass-half-empty types. They remember the times they have been wrong.

How Lawyers Predict The Future

Accurate prediction of future events by lawyers, or anyone for that matter, requires a deep understanding of process, rules, and objective analysis. Deep intuitive insight into the people involved also helps. Experience assists too, but only in providing a deep understanding of process and rules, and knowledge of relevant past and present facts. Experience alone does not necessarily assist in analysis, for the reasons discussed. Effective analysis has to be objective. It has to be uncoupled from personal perspectives and ego inflation.

The best lawyers understand all this, even if they may not be able to articulate it. That is how they are able to consistently and accurately calibrate case outcomes, including, when appropriate, probable losses. They do not take it personally. Accurate future vision requires not only knowledge, but also objectivity, humility, and freedom from ulterior motives. Since most lawyers lack these qualities, especially male lawyers, they end up simply engaging in wishful thinking.

The Insightful or Wishful study seems to have proven this point. (Note my use of the word seems, a typical weasel word that lawyers are trained to use. It is indicative of probability, as opposed to certainty, and protects me from ever being wrong. That way I can maintain my illusion of omnipotence.)

The best lawyers inspire confidence, but are not deluded by it. They are knowledgeable and guided by hard reason, coupled with deep intuition into the person or persons whose decisions they are trying to predict. That is often the judge, sometimes a jury too, if the process gets that far (less than 1% of cases go to trial). It is often opposing counsel or opposing parties, or even individual witnesses in the case.

All of these players have emotions. Unlike Watson, the human lawyers can directly pick up on these emotions. The top lawyers understand the non-verbal flows of energy, the irrational motivations. They can participate in them and influence them.

If lawyers with these skills can also maintain objective reason, then they can become the best in their field. They can become downright uncanny in their ability to both influence and forecast what is likely to happen in a lawsuit. Too bad so few lawyers are able to attain that extremely high skill level. I think most are held back by an incapacity to temper their emotions with objective ratiocination. The few that can rarely also have the empathic, intuitive skills.

Watson as Lawyer Will be a Champion Fortune Teller

Is Watson coming to Legal Jeopardy?

The combination of impartial reason and intuition can be very powerful, but, as the law professors' study shows, impartial reason is a rarity reserved for the top of the profession. These are the attorneys who understand both reason and emotion. They know that the reasonable man is a myth. They understand the personality frailties of being human. Scientific Proof of Law's Overreliance On Reason: The "Reasonable Man" is Dead, Long Live the Whole Man, Parts One, Two and Three; and The Psychology of Law and Discovery.

I am speaking about the few lawyers who have human empathy, and are able to overcome their human tendencies towards overconfidence, and are able to look at things impartially, like a computer. Computers lack ego. They have no confidence, no personality, no empathy, no emotions, no intuitions. They are cold and empty, but they are perfect thinking machines. Thus they are the perfect tool to help lawyers become better prognosticators.

This is where Watson the lawyer comes in. Someday soon, say the next ten years, maybe sooner, most lawyers will have access to a Watson-type lawyer in their office. It will provide them with objective data analysis. It will provide clear rational insights into likely litigation outcomes. Then human lawyers can add their uniquely human intuitions, empathy, and emotional insights to this (again ever mindful of overconfidence).

The AI-enhanced analysis will significantly improve legal prognostications. It will level the playing field and up everyone’s game in the world of litigation. I expect it will also have the same quality improvement impact on contract and deal preparations. The use of data analytics to predict the outcome in patent cases is already enjoying remarkable success with a project called Lex Machina. The CEO of Lex Machina, Josh Becker, calls his data analytics company the moneyball of IP litigation. Tam Harbert, Supercharging Patent Lawyers With AI. Here is the Lex Machina description of services:

We mine litigation data, revealing insights never before available about judges, lawyers, parties, and patents, culled from millions of pages of IP litigation information.

Many corporations are already using Lex Machina's analytics to help them select litigation counsel most likely to do well in particular kinds of patent cases, and before particular courts and judges. Law firms are mining the same past-case data for similar reasons.
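The simplest form of the kind of mining Lex Machina describes is a frequency tally: count outcomes per judge in past cases to get a baseline rate before a particular judge. The sketch below is purely illustrative; the judge names, case records, and numbers are all invented, and real analytics products use far richer features than raw win counts.

```python
from collections import defaultdict

# Hypothetical past-case records: (judge, did the patent holder prevail?).
# Invented for illustration; not real litigation data.
past_cases = [
    ("Judge A", True), ("Judge A", False), ("Judge A", True),
    ("Judge B", False), ("Judge B", False), ("Judge B", True),
    ("Judge B", False),
]

# Tally wins and totals per judge.
wins = defaultdict(int)
totals = defaultdict(int)
for judge, prevailed in past_cases:
    totals[judge] += 1
    wins[judge] += int(prevailed)

# Report each judge's historical rate as a naive baseline prediction.
for judge in sorted(totals):
    rate = wins[judge] / totals[judge]
    print(f"{judge}: patent holder prevailed in {rate:.0%} of {totals[judge]} past cases")
```

Even this naive baseline illustrates the point in the text: a lawyer deciding whether to file, settle, or proceed to trial can anchor the prediction in observed frequencies rather than gut feel alone.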


Here is my prediction for the future of the legal profession. In just a few more years, perhaps longer, the linear, keyword-only evidence searchers will be gone. They will be replaced by multimodal, predictive-coding-based evidence searchers. In just a decade, perhaps longer (note the weasel-word qualifier), all lawyers who are not using the assistance of artificial intelligence and data analytics for general litigation analysis will be obsolete.

Lawyers in the future who overcome their arrogance and overconfidence, and accept the input and help of Watson-type robot lawyers, will surely succeed. Those who do not will surely go the way of the linear, keyword-only searchers in discovery today. These dinosaurs are already being replaced by AI-enhanced searchers and AI-enhanced reviewers. I could be overconfident, but that is what I am starting to see. It appears to me to be an inevitable trend pulled along by larger forces of technological change. If you think I am missing something, please leave a comment below.

This rapid forced evolution is a good thing for the legal profession. It is good because the quality of legal practice will significantly improve as lawyers' ability to make accurate predictions improves. For instance, the justice system will function much more smoothly when it does not have to bear the load of cases that have not been appropriately vetted by lawyers. Fewer frivolous and marginal cases will be filed that have no chance of success, except in the deluded minds of second-rate attorneys. (Yes, that is what I really think.) These poor prognosticators will be aided by robots to finally recognize a hopeless case. That is not to say that good lawyers will avoid taking any high-risk cases. I think they should and I believe they will. But the cases will be appropriately vetted with realistic risk-reward analysis. The clients will not be seduced into them with false expectations.


With data analytics, unnecessary motions and depositions will be reduced for the same reason. The parties will instead focus on the real issues, the areas where there is bona fide dispute and uncertainty. The Watson-type legal robots will help the judges as well. With data analytics and AI, more and more lawyers and judges will be able to follow Rule 1 of the Federal Rules of Civil Procedure. Then just, speedy, and inexpensive litigation will be more than a remote ideal. The AI law robots will make lawyers and judges smart enough to run the judicial system properly.

Artificial intelligence and big data analytics will enable all lawyers to become excellent outcome predictors. It will allow all lawyers to move their everyday practice from art to science, much like predictive coding has already done for legal search.

