What Can Happen When Lawyers Over Delegate e-Discovery Preservation and Search to a Client, and Three Kinds of “Ethically Challenged” Lawyers: “Slimy Weasels,” “Gutless,” and “Clueless”

September 21, 2014
Sergeant Schultz of Hogan's Heroes

“I see nothing, NOTHING!” Sergeant Schultz

Bad things tend to happen when lawyers delegate e-discovery responsibility to their clients. As all informed lawyers know, lawyers have a duty to actively supervise their client's preservation. They cannot just turn a blind eye, send out written notices, and forget it. Lawyers have an even higher duty to manage discovery, including the search and production of electronic evidence. They cannot just turn e-discovery over to a client and then sign the response to the request for production. The only possible exception proves the rule. If a client has in-house legal counsel, and if that counsel appears of record in the case and signs the discovery response, then, and only then, is outside counsel (somewhat) off the hook. Then they can lie back, a little bit, but, trust me, this almost never happens.

To see a few of the bad things that can happen when lawyers over delegate e-discovery, you have only to look at a new district court opinion in Ohio. Brown v. Tellermate Holdings Ltd., No. 2:11-cv-1122 (S.D. Ohio July 1, 2014) (2014 WL 2987051). Severe sanctions were entered against the defendant because its lawyers were too laid back. The attorneys were personally sanctioned too, and ordered to pay the other side's associated fees and costs.

The attorneys were sanctioned because they did not follow one of the cardinal rules of attorney-client relations in e-discovery, the one I call the Ronald Reagan Rule, as it is based on his famous remark concerning the nuclear arms treaty with the USSR: "Trust, but verify."

The sanctioned attorneys in Brown trusted their client's representations that it had fully preserved the evidence, and that it had searched for it. Do not get me wrong. There is nothing wrong with trusting your client, and that is not why they were sanctioned. They were sanctioned because they failed to go on to verify. Instead, they just accepted everything they were told with an uncritical eye. According to the author of the Brown opinion, U.S. Magistrate Judge Terence P. Kemp:

… significant problems arose in this case for one overriding reason: counsel fell far short of their obligation to examine critically the information which Tellermate [their client] gave them about the existence and availability of documents requested by the Browns. As a result, they did not produce documents in a timely fashion, made unfounded arguments about their ability and obligation to do so, caused the Browns to file discovery motions to address these issues, and, eventually, produced a key set of documents which were never subject to proper preservation. The question here is not whether this all occurred – clearly, it did – but why it occurred, and what, in fairness, the Court needs to do to address the situation which Tellermate and its attorneys have created.

Id. at pgs. 2-3 (emphasis added).

What is the Worst Kind of Lawyer?

Taking reasonable steps to verify can be a sticky situation for some lawyers. This is especially true for ethically challenged lawyers. In my experience lawyers like this generally come in three different varieties, all repugnant. Sometimes the lawyers just do not care about ethics. They are the slimy weasels among us. They can be more difficult to detect than you might think. They sometimes talk the talk, but never walk it, especially when the judge is not looking, or they think they can get away with it. I have run into many slimy weasel lawyers over the years, but still, I like to think they are rare.

Other lawyers actually care about ethics. They know what they are doing is probably wrong, and it bothers them, at least somewhat. They understand their ethical duties, and they understand Rule 26(g) of the Federal Rules of Civil Procedure, but they just do not have the guts to fulfill those duties. They know it is wrong to simply trust the client's response of "no, we do not have that," but they do it anyway. They are gutless lawyers.

Often the gutless suffer from a combination of weak moral fibre and pocketbook pressures. They lack the economic independence to do the right thing. This is especially true in smaller law firms that are dependent on only a few clients to survive, or for siloed lawyers in a big firm without proper management. Such gutless lawyers may succumb to client pressure to save on fees and just let the client handle e-discovery. I have some empathy for such cowardly lawyers, but no respect. They often are very successful; almost as successful as the slimy weasel types who do not care at all about ethics.

There is a third kind of lawyer, the ones who do not even know that they have a personal duty as an officer of the court to supervise discovery. They do not know that they have a personal duty in litigation to make reasonable, good faith efforts to try to ensure that evidence is properly preserved and produced. They are clueless lawyers. There are way too many of these brainless scarecrows in our profession.

I do not know which attorneys are worse. The clueless ones who are blissfully ignorant and do not even know that they are breaking bad by total reliance on their clients? Or the ones who know and do it anyway? Among the ones who know better, I am not sure who is worse either. Is it the slimy weasels, who put all ethics aside when it comes to discovery and are not too troubled about it? Or is it the gutless lawyers, who know better and do it anyway out of weak moral fortitude, usually amplified by economic pressures? All three of these lawyer types are dangerous, not only to themselves and their clients, but to the whole legal system. So what do you think? Please fill out the online poll below and tell us which kind of lawyer you think is the worst.


I will not tell you how I voted, but I will share my personal message to each of the three types. There are not many slimy weasels who read my blog, but I suspect there may be a few. Be warned. I do not care how powerful and protected you think you are. If I sniff you out, I will come after you. I fear you not. I will expose you and show no mercy. I will defeat you. But, after the hearing, I will share a drink with some of you. Others I will avoid like the plague. Evil comes in many flavors and degrees too. Some slimy weasel lawyers are charming social engineers, and not all bad. The admissions they sometimes make to try to gain your trust can be especially interesting. I protect the confidentiality of their off-the-record comments, even though I know they would never protect mine. Those are the rules of the road in dancing with the devil.


As to the gutless, and I am pretty sure that a few of my readers fall into that category, although not many. To you I say: grow a spine. Find your inner courage. You cannot take money and things with you when you die. So what if you fail financially? So what if you are not a big success? It is better to sleep well. Do the right thing and you will never regret it. Your family will not starve. Your children will respect you. You will be proud to have them follow in your footsteps, not ashamed. I will not have drinks with gutless lawyers.

As to the clueless, and none of my readers by definition fall into that category, but I have a message for you nonetheless: wake up, your days are numbered. There are at least three kinds of clueless lawyers and my attitude towards each is different. The first kind is so full of themselves that they have no idea they are clueless. I will not have drinks with these egomaniacs. The second type has some idea that they may need to learn more about e-discovery. They may be clueless, but they are starting to realize it. I will share drinks with them. Indeed I will try very hard to awaken them from their ethically challenged slumber. The third kind is like the first, except that they know they are clueless and they are proud of it. They brag about not knowing how to use a computer. I will not have drinks with them. Indeed, I will attack them and their stone walls almost as vigorously as the weasels.

Judges Dislike the Clueless, Gutless, and Slimy Weasels

Judges dislike all three kinds of ethically challenged lawyers. That is why I was not surprised by Judge Kemp's sanction in Brown of both the defendant and its attorneys. (By the way, I know nothing about defense counsel in this case and have no idea which category, if any, they fall into.) Here is how Judge Kemp begins his 47-page opinion.

There may have been a time in the courts of this country when building stone walls in response to discovery requests, hiding both the information sought and even the facts about its existence, was the norm (although never the proper course of action). Those days have passed. Discovery is, under the Federal Rules of Civil Procedure, intended to be a transparent process. Parties may still resist producing information if it is not relevant, or if it is privileged, or if the burden of producing it outweighs its value. But they may not, by directly misrepresenting the facts about what information they have either possession of or access to, shield documents from discovery by (1) stating falsely, and with reckless disregard for the truth, that there are no more documents, responsive or not, to be produced; or (2) that they cannot obtain access to responsive documents even if they wished to do so. Because that is the essence of what occurred during discovery in this case, the Court has an obligation to right that wrong, and will do so in the form of sanctions authorized by Fed. R. Civ. P. 37.

Take these words to heart. Make all of the attorneys in your firm read them. There are probably a few old school types in your firm for whom you should post the quote on their office wall, no matter which type they are.

Brown v. Tellermate Holdings Ltd.

The opinion in Brown v. Tellermate Holdings Ltd., No. 2:11-cv-1122 (S.D. Ohio July 1, 2014) (2014 WL 2987051) by U.S. Magistrate Judge Terence Kemp in Columbus, Ohio, makes it very clear that attorneys are obligated to verify what clients tell them about ESI. Bottom line – the court held that defense counsel in this single plaintiff, age discrimination case:

… had an obligation to do more than issue a general directive to their client to preserve documents which may be relevant to the case. Rather, counsel had an affirmative obligation to speak to the key players at [the defendant] so that counsel and client together could identify, preserve, and search the sources of discoverable information.

Id. at pg. 35.

In Brown the defense counsel relied on representations from their client regarding the existence of performance data within a www.salesforce.com database and the client’s ability to print summary reports. The client’s representations were incorrect and, according to the court, had counsel properly scrutinized the client’s representations, they would have uncovered the inaccuracies.

As mentioned, both defendant and its counsel were sanctioned. The defendant was precluded from using any evidence that would tend to show that the plaintiffs were terminated for performance-related reasons. This is a very serious sanction, which is, in some ways, much worse than an adverse inference instruction. In addition, both the defendant and its counsel were ordered to jointly reimburse plaintiffs the fees and costs they incurred in filing and prosecuting multiple motions to compel various forms of discovery. I hope it is a big number.

The essence of the mistake made by defense counsel in Brown was to trust, but not verify. They simply accepted their client's statements. They failed to do their own due diligence. Defense counsel aggravated their mistake with a series of overaggressive discovery responses and argumentative positions, including over-designation of AEO confidentiality, a document dump, failure to timely log privileged ESI withheld, and refusal to disclose the search methods used.

The missteps of defense counsel are outlined in meticulous detail in Judge Kemp's 47-page opinion. In addition to the great quotes above, I bring the following passages to your attention. Still, I urge you to read the whole opinion, and more importantly, to remember its lessons the next time a client does not want you to spend the time and money to do your job and verify what the client says. This opinion is a reminder for all of us to exercise our own due diligence and, at the same time, to cooperate in accord with our professional duties. An unsophisticated client might not always appreciate that approach, but it is in the client's best interests, and besides, as lawyers and officers of the court, we have no choice.

[when e-discovery is involved] Counsel still have a duty (perhaps even a heightened duty) to cooperate in the discovery process; to be transparent about what information exists, how it is maintained, and whether and how it can be retrieved; and, above all, to exercise sufficient diligence (even when venturing into unfamiliar territory like ESI) to ensure that all representations made to opposing parties and to the Court are truthful and are based upon a reasonable investigation of the facts.

Id. at pg. 3.

As this Opinion and Order will explain, Tellermate’s counsel:

- failed to uncover even the most basic information about an electronically-stored database of information (the “salesforce.com” database);

- as a direct result of that failure, took no steps to preserve the integrity of the information in that database;

- failed to learn of the existence of certain documents about a prior age discrimination charge (the “Frank Mecka matter”) until almost a year after they were requested;

- and, as a result of these failures, made statements to opposing counsel and in oral and written submissions to the Court which were false and misleading, and which had the effect of hampering the Browns’ ability to pursue discovery in a timely and cost-efficient manner (as well as the Court’s ability to resolve this case in the same way).

These are serious matters, and the Court does not reach either its factual or its legal conclusions in this case lightly.

Id. at pg. 4.

In addition to the idea that discovery is broad and is designed to permit parties to obtain enough evidence either to prove their claims or disprove the opposing party’s claim, discovery under the Federal Rules of Civil Procedure has been designed to be a collaborative process. As one Court observed,

It cannot seriously be disputed that compliance with the “spirit and purposes” of these discovery rules requires cooperation by counsel to identify and fulfill legitimate discovery needs, yet avoid seeking discovery the cost and burden of which is disproportionally large to what is at stake in the litigation. Counsel cannot “behave responsively” during discovery unless they do both, which requires cooperation rather than contrariety, communication rather than confrontation.

Mancia v. Mayflower Textile Servs. Co., 253 F.R.D. 354, 357-58 (D. Md. 2008). Such a collaborative approach is completely consistent with a lawyer’s duty to represent his or her client zealously. See Ruiz-Bueno v. Scott, 2013 WL 6055402, *4 (S.D. Ohio Nov. 15, 2013). It also reflects a duty owed to the court system and the litigation process.

Id. at pgs. 28-29. Also see: Losey, R. Mancia v. Mayflower Begins a Pilgrimage to the New World of Cooperation, 10 Sedona Conf. J. 377 (2009 Supp.).

Tellermate, as an entity, knew that every statement it made about its control over, and ability to produce, the salesforce.com records was not true when it was made. It had employees who could have said so – including its salesforce.com administrators – had they simply been asked. Its representations were illogical and were directly contradicted by the Browns, who worked for Tellermate, had salesforce.com accounts, and knew that Tellermate could access those accounts and the information in them. And yet Tellermate’s counsel made these untrue statements repeatedly, in emails, letters, briefs, and during informal conferences with the Court, over a period of months, relenting only when the Court decided that it did not believe what they were saying. This type of behavior violated what has been referred to as “the most fundamental responsibility” of those engaged in discovery, which is “to provide honest, truthful answers in the first place and to supplement or correct a previous disclosure when a party learns that its earlier disclosure was incomplete or incorrect.” Lebron v. Powell, 217 F.R.D. 72, 76 (D.D.C. 2003). “The discovery process created by the Federal Rules of Civil Procedure is premised on the belief or, to be more accurate, requirement that parties who engage in it will truthfully answer their opponents’ discovery requests and  consistently correct and supplement their initial responses.” Id. at 78. That did not happen here.

Id. at pg. 31.

But it is not fair to place the entire blame on Tellermate, even if it must shoulder the ultimate responsibility for not telling counsel what, collectively, it knew or should have known to be the truth about its ability to produce the salesforce.com information. As this Court said in Bratka, in the language quoted above at page 3, counsel cannot simply take a client’s representations about such matters at face value. After all, Rule 26(g) requires counsel to sign discovery responses and to certify their accuracy based on “a reasonable inquiry” into the facts. And as Judge Graham (who is, coincidentally, the District Judge presiding over this case as well, and whose views on the obligations of counsel were certainly available to Ms. O’Neil and Mr. Reich), said in Bratka, 164 F.R.D. at 461:

The Court expects that any trial attorney appearing as counsel of record in this Court who receives a request for production of documents in a case such as this will formulate a plan of action which will ensure full and fair compliance with the request. Such a plan would include communicating with the client to identify the persons having responsibility for the matters which are the subject of the discovery request and all employees likely to have been the authors, recipients or custodians of documents falling within the request. The plan should ensure that all such individuals are contacted and interviewed regarding their knowledge of the existence of any documents covered by the discovery request, and should include steps to ensure that all documents within their knowledge are retrieved. All documents received from the client should be reviewed by counsel to see whether they indicate the existence of other documents not retrieved or the existence of other individuals who might have documents, and there should be appropriate follow up. Of course, the details of an appropriate document search will vary, depending upon the circumstances of the particular case, but in the abstract the Court believes these basic procedures should be employed by any careful and conscientious lawyer in every case.

Id. at pgs. 32-33.

Like any litigation counsel, Tellermate’s counsel had an obligation to do more than issue a general directive to their client to preserve documents which may be relevant to the case. Rather, counsel had an affirmative obligation to speak to the key players at Tellermate so that counsel and client together could identify, preserve, and search the sources of discoverable information. See Cache La Poudre Feeds, LLC v. Land O’ Lakes, Inc., 244 F.R.D. 614, 629 (D. Colo. 2007). In addition, “counsel cannot turn a blind eye to a procedure that he or she should realize will adversely impact” the search for discovery. Id. Once a “litigation hold” is in place, “a party cannot continue a routine procedure that effectively ensures that potentially relevant and readily available information is no longer ‘reasonably accessible’ under Rule 26(b)(2)(B).” Id.

Id. at pg. 35.

As noted above, Tellermate and its counsel also made false representations to opposing counsel and the Court concerning the existence of documents relating to the Frank Mecka matter. Indeed, at the hearing on the pending motions, Tellermate’s counsel stated that she was unaware of the existence of the great majority of the Frank Mecka documents until almost a year after they were requested. Once again, it is not sufficient to send the discovery request to a client and passively accept whatever documents and information that client chooses to produce in response. See Cache La Poudre Feeds, 244 F.R.D. at 629.

Id. at pg. 37 (emphasis added).

There are two distinct but related problems with trying to remedy Tellermate’s failings concerning these documents. The first is the extremely serious nature of its, and counsel’s, strenuous efforts to resist production of these documents and the strident posture taken with both opposing counsel and the Court. Perhaps the most distressing aspect of the way in which this was litigated is how firmly and repeatedly counsel represented Tellermate’s inability to produce these documents coupled with the complete absence of Tellermate’s compliance with its obligation to give counsel correct information, and counsel’s complete abdication of the responsibilities so well described by this Court in Bratka. At the end of the day, both Tellermate’s and its counsel’s actions were simply inexcusable, and the Court has no difficulty finding that they were either grossly negligent or willful acts, taken in objective bad faith.

Id. at pg. 43.

The only realistic solution to this problem is to preclude Tellermate from using any evidence which would tend to show that the Browns were terminated for performance-related reasons. … This sanction is commensurate with the harm caused by Tellermate’s discovery failures, and is also warranted to deter other similarly-situated litigants from failing to make basic, reasonable inquiries into the truth of representations they make to the Court, and from failing to take precautions to prevent the spoliation of evidence. It serves the main purposes of Rule 37 sanctions, which are to prevent parties from benefitting from their own misconduct, preserving the integrity of the judicial process, and deterring both the present litigants, and other litigants, from engaging in similar behavior.

Id. at pg. 45.

Of course, it is also appropriate to award attorneys’ fees and costs which the Browns have incurred in connection with moving to compel discovery concerning the salesforce.com documents and the Mecka documents, and those fees and expenses incurred in filing and prosecuting the motion for sanctions and the motion relating to the attorneys-eyes-only documents. … Finally, Tellermate and its counsel shall pay, jointly, the Browns’ reasonable attorneys’ fees and costs incurred in the filing and prosecution of those two motions as well as in the filing of any motions to compel discovery relating to the salesforce.com and Frank Mecka documents.

Id. at pgs. 45-46.

So sayeth the Court.


The defendant's law firm here did a disservice to its client by not pushing back, and by instead simply accepting the client's report on what relevant ESI it had, or did not have. Defense counsel cannot do that. We have a responsibility to supervise discovery, especially complex e-discovery, and to be proactive in ESI preservation. This opinion shows what happens when a firm chooses not to be diligent. The client loses and the lawyers are sanctioned.

Our obligation as attorneys of record does not end with sending the client a litigation hold notice. If a client tells us something regarding the existence, or more pointedly, the non-existence, of electronically stored information that does not make sense, or that seemingly is contradicted by other evidence, it is critical for the attorney to investigate further. The client may not want you to do that, but it is in the client's best interests that you do so. The case could depend upon it. So could your license to practice law, not to mention your reputation as a professional. Cutting corners is never worth it. It is far better to sleep well at night with a clear conscience, even if it sometimes means you lose a client, or are generally not as successful, or as rich, as the few ethically challenged lawyers who appear to get away with it.

Caveat Emptor – Beware of Vendor Trickery

September 18, 2014

In a crowd of e-discovery vendors, where each claims to have the best software, who can you trust?


Watch this short video animation below for one answer to that question, and yes, this is somewhat self-promotional, but still true.




Only trust independent expert commentators and peer-reviewed scientific experiments.


A full post on lawyer ethics and an important new case on diligence is coming to this blog soon.

Guest Blog: Talking Turkey

September 7, 2014

EDITOR'S NOTE: This is a guest blog by Gordon V. Cormack, Professor, University of Waterloo, and Maura R. Grossman, Of Counsel, Wachtell, Lipton, Rosen & Katz. The views expressed herein are solely those of the authors and should not be attributed to Maura Grossman's law firm or its clients.

This guest blog constitutes the first public response by Professor Cormack and Maura Grossman, J.D., Ph.D., to articles published by one vendor, and others, that criticize their work. In the Editor's opinion the criticisms are replete with misinformation and thus unfair. For background on the Cormack Grossman study in question, Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR'14, July 6–11, 2014, and the Editor's views on this important research, see Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One, Part Two, and Part Three. After remaining silent for some time in the face of constant vendor potshots, Professor Cormack and Dr. Grossman feel that a response is now necessary. They choose to speak at this time in this blog because, in their words:

We would have preferred to address criticism of our work in scientifically recognized venues, such as academic conferences and peer-reviewed journals. Others, however, have chosen to spread disinformation and to engage in disparagement through social media, direct mailings, and professional meetings. We have been asked by a number of people for comment and felt it necessary to respond in this medium.



OrcaTec, the eDiscovery software company started by Herbert L. Roitblat, attributes to us the following words at the top of its home page: “Not surprisingly, costs of predictive coding, even with the use of relatively experienced counsel for machine-learning tasks, are likely to be substantially lower than the costs of human review.” These words are not ours. We neither wrote nor spoke them, although OrcaTec attributes them to our 2011 article in the Richmond Journal of Law and Technology (“JOLT article”).

[Ed. Note: The words were removed shortly after this blog was published.]



A series of five OrcaTec blog posts (1, 2, 3, 4, 5) impugning our 2014 articles in SIGIR and Federal Courts Law Review (“2014 FCLR article”) likewise misstates our words, our methods, our motives, and our conclusions. At the same time, the blog posts offer Roitblat’s testimonials—but no scientific evidence—regarding the superiority of his, and OrcaTec’s, approach.

As noted in Wikipedia, "a straw man is a common type of argument and is an informal fallacy based on the misrepresentation of an opponent's argument. To be successful, a straw man argument requires that the audience be ignorant or uninformed of the original argument." First and foremost, we urge readers to avoid falling prey to Roitblat's straw man by familiarizing themselves with our articles and what they actually say, rather than relying on his representations as to what they say. We stand by what we have written.

Second, we see no reason why readers should accept Roitblat’s untested assertions, absent validation through the scientific method and peer review. For example, Roitblat claims, without providing any scientific support, that:

These claims are testable hypotheses, the formulation of which is the first step in distinguishing science from pseudo-science; but Roitblat declines to take the essential step of putting his hypotheses to the test in controlled studies.

Overall, Roitblat’s OrcaTec blog posts represent a classic example of truthiness. In the following paragraphs, we outline some of the misstatements and fallacious arguments that might leave the reader with the mistaken impression that Roitblat’s conclusions have merit.

With Us or Against Us?

Our JOLT article, which OrcaTec cites approvingly, concludes:

Overall, the myth that exhaustive manual review is the most effective—and therefore, the most defensible—approach to document review is strongly refuted. Technology-assisted review can (and does) yield more accurate results than exhaustive manual review, with much lower effort.  Of course, not all technology-assisted reviews (and not all manual reviews) are created equal. The particular processes found to be superior in this study are both interactive, employing a combination of computer and human input.  While these processes require the review of orders of magnitude fewer documents than exhaustive manual review, neither entails the naïve application of technology absent human judgment. Future work may address which technology-assisted review process(es) will improve most on manual review, not whether technology-assisted review can improve on manual review (emphasis added; original emphasis in bold).

The particular processes shown to be superior, based on analysis of the results of the Interactive Task of the TREC 2009 Legal Track, were an active learning method employed by the University of Waterloo, and a rule-based method employed by H5. Despite the fact that OrcaTec chose not to participate in TREC, and their method—which employs neither active learning nor a rule base—is not one of those shown by our study to be superior, OrcaTec was quick to promote TREC and our JOLT article as scientific evidence for the effectiveness of their method.


In his OrcaTec blog posts following the publication of our SIGIR and 2014 FCLR articles, however, Roitblat espouses a different view. In Daubert, Rule 26(g) and the eDiscovery Turkey, he states that the TREC 2009 data used in the JOLT and SIGIR studies “cannot be seen as independent in any sense, in that the TREC legal track was overseen by Grossman and Cormack.” Notwithstanding his argumentum ad hominem, the coordinators of the TREC 2009 Legal Track included neither of us.  Cormack was a TREC 2009 participant, who directed the Waterloo effort, while Grossman was a “Topic Authority,” who neither knew Cormack at the time, nor had any role in assessing the Waterloo effort. It was not until 2010, that Cormack and Grossman became Legal Track coordinators.


Roitblat’s change of perspective perhaps owes to the fact that our SIGIR article is critical of random training for technology-assisted review (“TAR”), and our 2014 FCLR article is critical of “eRecall,” both methods advanced by Roitblat and employed by OrcaTec. But nothing about TREC 2009 or our JOLT study has changed in the intervening years, and the OrcaTec site continues—even at the time of this writing—to (mis)quote our work as evidence of OrcaTec’s effectiveness, despite Roitblat’s insistence that OrcaTec bears no resemblance to anything we have tested or found to be effective. The continuous active learning (“CAL”) system we tested in our SIGIR study, however, does resemble the Waterloo system shown to be more effective than manual review in our JOLT study. If OrcaTec bears no resemblance to the CAL system—or indeed, to any of the others we have tested—on what basis has OrcaTec cited TREC 2009 and our JOLT study in support of the proposition that their TAR tool works?

Apples v. Oranges

Contrary to the aphorism, “you can’t compare apples to oranges,” you certainly can, provided that you use a common measure like weight in pounds, price in dollars per pound, or food energy in Calories. Roitblat, in comparing his unpublished results to our peer-reviewed results, compares the shininess of an apple in gloss units with the sweetness of an orange in percent sucrose equivalent. The graph above, reproduced from the first of the five Roitblat blogs, shows three dots placed by Roitblat over four “gain curves” from our SIGIR article. Roitblat states (emphasis added):

The x-axis shows the number of training documents that were reviewed. The y-axis shows the level of Recall obtained.

This may be true for Roitblat’s dots, but for our gain curves, on which his dots are superimposed, the x-axis shows the total number of documents reviewed, including both the training and review efforts combined.  Dots on a graph reflecting one measure, placed on top of curves reflecting a different measure, convey no more information than paintball splats.

For OrcaTec’s method, the number of training documents is tiny compared to the number of documents identified for subsequent review. Small wonder the dots are so far to the left. For a valid comparison, Roitblat would have to move his dots way to the right to account for the documents subject to subsequent review, which he has disregarded. Roitblat does not disclose the number of documents identified for review in the matters reflected by his three dots. We do know, however, that in the Global Aerospace case, OrcaTec was reported to achieve 81% recall with 5,000 training documents, consistent with the placement of Roitblat’s green dot. We also know that roughly 173,000 documents were identified for second-pass review. Therefore, in an apples-to-apples comparison with CAL, a dot properly representing Global Aerospace would be at the same height as the green dot, but 173,000 places farther to the right—far beyond the right edge of Roitblat’s graph.
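The adjustment an apples-to-apples comparison requires is simple arithmetic. This sketch uses the Global Aerospace figures reported above; the variable names are ours:

```python
# Figures reported above for the Global Aerospace matter
training_docs = 5_000        # documents reviewed to train the tool
second_pass_docs = 173_000   # documents identified for second-pass review
reported_recall = 0.81       # height of the green dot (unchanged)

x_dot = training_docs                            # where Roitblat's dot sits
x_comparable = training_docs + second_pass_docs  # where it belongs on a gain curve

print(x_dot, x_comparable)  # 5000 178000
```

The y-coordinate (recall) does not move; only the x-coordinate does, because the gain curves count total review effort, not training effort alone.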

Of course, even if one were to compare using a common measure, there would be little point, due to the number of uncontrolled differences between the situations from which the dots and gain curves were derived. Only a valid, controlled comparison can convey any information about the relative effectiveness of the two approaches.

Fool’s Gold?

In The Science of Comparing Learning Protocols—Blog Post II on the Cormack & Grossman Article, Roitblat seeks to discredit our SIGIR study so as to exempt OrcaTec from its findings. He misrepresents the context of our words in the highlighted quote below, claiming that they pertain to the “gold standard” we used for evaluation:

Here I want to focus on how the true set, the so-called “gold standard” was derived for [four of the eight] matters [Cormack and Grossman] present. They say that for the “true” responsiveness values “for the legal-matter-derived tasks, we used the coding rendered by the first-pass reviewer in the course of the review. Documents that were never seen by the first-pass reviewer (because they were never identified as potentially responsive) were deemed to be coded as non-responsive” (emphasis added).

As may be seen from our SIGIR article at page 155, the words quoted above do not refer to the gold standard at all, but to a deliberately imperfect “training standard” used to simulate human review. Our gold standard used a statistical sampling technique for the entire collection known as the Horvitz-Thompson estimator, a technique that has gained widespread acceptance in the scientific community since its publication, in 1952, in the Journal of the American Statistical Association.
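The idea behind a Horvitz-Thompson estimate can be sketched in a few lines: each assessed document is weighted by the inverse of its inclusion probability, so unevenly allocated samples still yield an unbiased total. The strata, sizes, and probabilities below are invented for illustration; they are not the actual TREC or SIGIR design:

```python
import random

random.seed(1)

# Hypothetical stratified sample: documents the system ranked highly are
# sampled at a higher rate than the rest (all numbers are illustrative).
strata = [
    # (stratum size, true fraction relevant, inclusion probability pi)
    (10_000, 0.60, 0.10),   # highly ranked documents
    (90_000, 0.05, 0.01),   # everything else
]

true_total = sum(size * frac for size, frac, _ in strata)  # 10,500 relevant

# Horvitz-Thompson: each sampled relevant document counts as 1/pi documents
ht_estimate = 0.0
for size, frac, pi in strata:
    sample_size = round(size * pi)
    relevant_in_sample = sum(random.random() < frac for _ in range(sample_size))
    ht_estimate += relevant_in_sample / pi

print(round(true_total), round(ht_estimate))
```

Because each stratum is weighted by the reciprocal of its inclusion probability, the estimator remains unbiased even though the two strata are sampled at very different rates.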

Apparently, to bolster his claims, Roitblat also provides a column of numbers titled “Precision,” on the right side of the table reproduced below.


We have no idea where these numbers came from—since we did not report precision in our SIGIR article—but if these numbers are intended to reflect the precision achieved by the CAL process at 90% recall, they are simply wrong. The correct numbers may be derived from the information provided in Table 1 (at page 155) and Figure 1 (at page 157) of our SIGIR article.

While we make no claim that our study is without limitations (see Section 7.5 at page 161 of our SIGIR article), Roitblat’s special pleading regarding the real or imagined limitations of our study provides no support for his claim that random training (using the OrcaTec tool in particular) achieves superior results to active learning. If Roitblat believes that a different study would show a contrary result to ours, he should conduct such a study, and submit the results for peer review.

Although we have been described by Roitblat as “CAR vendors” with a “vested interest in making their algorithm appear better than others,” we have made freely available our TAR Evaluation Toolkit, which contains the apparatus we used to conduct our SIGIR study, including the support vector machine (“SVM”) learning algorithm, the simulation tools, and four of the eight datasets. Researchers are invited to reproduce our results—indeed, we hope, to improve on them—by exploring other learning algorithms, protocols, datasets, and review tasks. In fact, in our SIGIR article at page 161, we wrote:

There is no reason to presume that the CAL results described here represent the best that can be achieved. Any number of feature engineering methods, learning algorithms, training protocols, and search strategies might yield substantive improvements in the future.

Roitblat could easily use our toolkit to test his claims, but he has declined to do so, and has declined to make the OrcaTec tool available for this purpose. We encourage other service providers to use the toolkit to evaluate their TAR tools, and we encourage their clients to insist that they do, or to conduct or commission their own tests. The question of whether Vendor X’s tool outperforms the free software we have made available is a hypothesis that may be tested, not only for OrcaTec, but for every vendor.

Since SIGIR, we have expanded our study to include the 103 topics of the RCV1-v2 dataset, with prevalences ranging from 0.0006% (5 relevant documents in 804,414) to 47.4% (381,000 relevant documents in 804,414). We used the SVMlight tool and word-based tf-idf tokenization strategy that the RCV1-v2 authors found to be most effective. We used the topic descriptions, provided with the dataset, as keyword “seed queries.” We used the independent relevance assessments, also provided with the dataset, as both the training and gold standards. The results—on 103 topics—tell the same story as our SIGIR paper, and will appear—once peer reviewed—in a forthcoming publication.
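For readers unfamiliar with the protocol, a toy CAL simulation can be sketched in a few dozen lines. Everything below is invented for illustration: synthetic set-of-words documents, a naive Bayes scorer standing in for the SVM, a small labeled seed set standing in for a keyword seed query, and a fixed review budget in place of a principled stopping criterion. It is not the toolkit's implementation:

```python
import math
import random
from collections import Counter

random.seed(7)

# Synthetic collection: a document is a set of words; it is relevant
# if it contains at least two of the three "topic" words.
VOCAB = [f"w{i}" for i in range(50)]
TOPIC = {"w0", "w1", "w2"}
docs = [set(random.sample(VOCAB, 8)) for _ in range(2000)]
relevant = [len(d & TOPIC) >= 2 for d in docs]

def score(doc, pos, neg, n_pos, n_neg):
    """Laplace-smoothed naive Bayes log-odds (a stand-in for an SVM)."""
    total = 0.0
    for w in doc:
        p = (pos[w] + 1) / (n_pos + 2)
        q = (neg[w] + 1) / (n_neg + 2)
        total += math.log(p / q)
    return total

# Seed set: a few documents with known judgments (stand-in for seed query hits)
seed_pos = [i for i, r in enumerate(relevant) if r][:2]
seed_neg = [i for i, r in enumerate(relevant) if not r][:8]
labeled = {i: relevant[i] for i in seed_pos + seed_neg}

BATCH, BUDGET = 20, 400
while len(labeled) < BUDGET:
    pos, neg = Counter(), Counter()
    n_pos = n_neg = 0
    for i, lab in labeled.items():
        if lab:
            pos.update(docs[i]); n_pos += 1
        else:
            neg.update(docs[i]); n_neg += 1
    unlabeled = [i for i in range(len(docs)) if i not in labeled]
    if not unlabeled:
        break
    # Review the top-scored batch, add the judgments, and retrain.
    unlabeled.sort(key=lambda i: -score(docs[i], pos, neg, n_pos, n_neg))
    for i in unlabeled[:BATCH]:
        labeled[i] = relevant[i]   # simulated first-pass reviewer

found = sum(labeled.values())
recall = found / sum(relevant)
print(f"recall {recall:.2f} after reviewing {len(labeled)} of {len(docs)} documents")
```

The defining feature of CAL is visible in the loop: every reviewed batch, not just an initial training set, feeds back into the model before the next batch is selected.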

Straw Turkey

We were dumbfounded by Roitblat’s characterization of our 2014 FCLR article:

Schieneman and Gricks argue that one should measure the outcome of eDiscovery efforts to assess their reasonableness, and Grossman and Cormack argue that such measurement is unnecessary under certain conditions.

What we actually wrote was:

[Schieneman and Gricks’] exclusive focus on a particular statistical test, applied to a single phase of a review effort, does not provide adequate assurance of a reasonable production, and may be unduly burdensome. Validation should consider all available evidence concerning the effectiveness of the end-to-end review process, including prior scientific evaluation of the TAR method, its proper application by qualified individuals, and proportionate post hoc sampling for confirmation purposes (emphasis added).

Roitblat doubles down on his strawman, asserting that we eschew all measurement, insisting that our metaphor of cooking a turkey is inconsistent with his false characterization of our position. We have never said—nor do we believe—that measurement is unnecessary for TAR. In addition to pointing out the necessity of ensuring that the method is sound and is properly applied by qualified individuals, we state (at page 312 of our 2014 FCLR article) that it is necessary to ensure “that readily observable evidence—both statistical and non-statistical—is consistent with the proper functioning of the method.”

The turkey-cooking metaphor appears at pages 301-302 of our 2014 FCLR article:

When cooking a turkey, one can be reasonably certain that it is done, and hence free from salmonella, when it reaches a temperature of at least 165 degrees throughout. One can be reasonably sure it has reached a temperature of at least 165 degrees throughout by cooking it for a specific amount of time, depending on the oven temperature, the weight of the turkey, and whether the turkey is initially frozen, refrigerated, or at room temperature. Alternatively, when one believes that the turkey is ready for consumption, one may probe the turkey with a thermometer at various places. Both of these approaches have been validated by biological, medical, and epidemiological evidence. Cooking a turkey requires adherence, by a competent cook, to a recipe that is known to work, while observing that tools like the oven, timer, and thermometer appear to behave properly, and that the appearance, aroma, and texture of the turkey turn out as expected. The totality of the evidence—vetting the method in advance, competently and diligently applying the method, and monitoring observable phenomena following the application of the method—supports the reasonable conclusion that dinner is ready.

Roitblat reproduces our story, and then argues that it is inconsistent with his mischaracterization of our position:

They argue that we do not need to measure the temperature of the turkey in order to cook it properly, that we can be reasonably sure if we roast a turkey of a specific weight and starting temperature for a specific time at a specific oven temperature. This example is actually contrary to their position. Instead of one measure, using a meat thermometer to assess directly the final temperature of the meat, their example calls on four measures: roasting time, oven temperature, turkey weight, and the bird’s starting temperature to guess at how it will turn out. . . .  To be consistent with their argument, they would have to claim that we would not have to measure anything, provided that we had a scientific study of our oven and a qualified chef to oversee the cooking process.

In our story, the turkey chef would need to ensure—through measurement and other observations—that the turkey was properly cooked, in order to avoid the risk of food poisoning. The weight of most turkeys sold in the U.S. is readily observable on the FDA label because it has been measured by the packer, and it is reasonable to trust that information. At the same time, a competent chef could reasonably be expected to notice if the label information were preposterous; for example, six pounds for a full-sized turkey. If the label were missing, nothing we have ever said would even remotely suggest that the chef should refrain from weighing the turkey with a kitchen scale—assuming one were available—or even a bathroom scale, if the alternative was for everyone to go hungry. Similarly, if the turkey were taken from a functioning refrigerator, and were free of ice, a competent chef would know the starting temperature with a margin of error that is inconsequential to the cooking time. Any functioning oven has a thermostat that measures and regulates its temperature. It is hard to imagine our chef having no ready access to some sort of timepiece with which to measure cooking time. Moreover, many birds come with a built-in gizmo that measures the turkey’s temperature and pops up when the temperature is somewhat more than 165 degrees. It does not display the temperature at all, let alone with a margin of error and confidence level, but it can still provide reassurance that the turkey is done. We have never suggested that the chef should refrain from using the gizmo, but if it pops up after one hour, or the turkey has been cooking for seven hours and it still has not popped up, they should not ignore the other evidence. And, if the gizmo is missing when the turkey is unwrapped, our chef can still cook dinner without running out to buy a laboratory thermometer.
The bottom line is that there are many sources of evidence—statistical and otherwise—that can tell us whether a TAR process has been reasonable.

Your Mileage May Vary

Roitblat would have us believe that science has no role to play in determining which TAR methods work, and which do not. In his fourth blog post, Daubert, Rule 26(g) and the eDiscovery Turkey, he argues that there are too many “[s]ources of variability in the eDiscovery process”; that every matter and every collection is different, and that “[t]he system’s performance in a ‘scientific study’ provides no information about any of these sources of variability. . . .” The same argument could be made about crash testing or EPA fuel economy ratings, since every accident, every car, every road, and every driver is also different.

The EPA’s infamous disclaimer, “your mileage may vary,” captures the fact that it is impossible to predict with certainty the fuel consumption of a given trip. But it would be very difficult indeed to find a trip for which a Toyota Prius consumed more fuel than a Hummer H1. And it would be a very good bet that, for your next trip, you would need less gas if you chose the Prius.

Manufacturers generally do not like controlled comparisons, because there are so few winners and so many also-rans. So it is with automobiles, and so it is with eDiscovery software. On the other hand, controlled comparisons help consumers and the courts to determine which TAR tools are reliable.

We have identified more than 100 instances—using different data collections with different prevalences, different learning algorithms, and different feature engineering methods—in which controlled comparison demonstrates that continuous active learning outperforms simple passive learning, and none in which simple passive learning prevails. Neither Roitblat, nor anyone else that we are aware of, has yet identified an instance in which OrcaTec prevails, in a controlled comparison, over the CAL implementation in our toolkit.


In his fifth blog post, Daubert, Rule 26(g) and the eDiscovery Turkey: Tasting the eDiscovery Turkey, Part 2, Roitblat first claims that “[g]ood estimates of Recall can be obtained by evaluating a few hundred documents rather than the many thousands that could be needed for traditional measures of Recall,” but later admits that eRecall is a biased estimate of recall, “like a clock that runs a little fast or slow.” Roitblat further admits, “eRecall has a larger confidence interval than directly measured Recall because it involves the ratio of two random samples.” Roitblat then wonders “why [we] think that it is necessary to assume that the two measures [eRecall and the “direct method” of estimating recall] have the same confidence interval [(i.e., margin of error)].”

Our assumption came from representations made by Roitblat in Measurement in eDiscovery—A Technical White Paper:

Rather than exhaustively assessing a large random sample of thousands of documents [as required by the direct method], with the attendant variability of using multiple reviewers, we can obtain similar results by taking advantage of the fact that we have identified putatively responsive and putatively non-responsive documents. We use that information and the constraints inherent in the contingency table to evaluate the effectiveness of our process. Estimating Recall from Elusion can be called eRecall (emphasis added).

Our “mistake” was in taking Roitblat’s use of “similar results” to imply that an estimate of recall using eRecall would have a similar accuracy, margin of error, and confidence level to one obtained by the direct method; that is, unbiased, with a margin of error of ±5%, and a confidence level of 95%.
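As context for those figures, the ±5% margin of error at 95% confidence corresponds to the familiar worst-case sample-size calculation. A quick sketch, using the standard normal approximation:

```python
import math

z = 1.96     # two-sided 95% confidence
moe = 0.05   # desired margin of error (plus or minus 5%)
p = 0.5      # worst-case proportion, which maximizes p * (1 - p)

n = math.ceil(z * z * p * (1 - p) / (moe * moe))
print(n)  # 385 -- the sample size commonly quoted for ±5% at 95% confidence
```

This is the source of the oft-cited figure of roughly 385 (or, rounded, 400) assessments for a single unbiased proportion estimate.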

eRecall misses this mark by a long shot. If you set the confidence level to 95%, the margin of error achieved by eRecall is vastly larger than ±5%. Alternatively, if you set the margin of error to ±5%, the confidence level is vastly inferior to 95%, as illustrated below.

Table 2 at page 309 of our 2014 FCLR article (reproduced below) shows the result of repeatedly using eRecall, the direct method, and other methods to estimate recall for a review known to have achieved 75% recall and 83% precision, from a collection with 1% prevalence.


To achieve a margin of error of ±5%, at the 95% confidence level, the estimate must fall between 70% and 80% (±5% of the true value) at least 95% of the time. From the fourth column of the table one can see that the direct method falls within this range 97.5% of the time, exceeding the standard for 95% confidence. eRecall, on the other hand, falls within this range a mere 8.9% of the time. If the recall estimate had been drawn at random from a hat containing all estimates from 0% to 100%, the result would have fallen within the required range 10% of the time—more often than eRecall. Therefore, for this review, eRecall provides an estimate that is no better than chance.

How large does the margin of error need to be for eRecall to achieve a 95% confidence level? The fifth and sixth columns of the table show that one would need to enlarge the target range to include all values between 0% and 100%, for eRecall to be able to hit the target 95% of the time. In other words, eRecall provides no information whatsoever about the true recall of this review, at the 95% confidence level. On the other hand, one could narrow the target range to include only the values between 70.6% and 79.2%, and the direct method would still hit it 95% of the time, consistent with a margin of error slightly better than ±5%, at the 95% confidence level.

In short, the direct method provides a valid—albeit burdensome—estimate of recall, and eRecall does not.
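The variance gap behind this conclusion can be reproduced with a small Monte Carlo simulation. The collection parameters below mirror the scenario discussed above (75% recall, 83% precision, 1% prevalence); the sample sizes and the rest of the setup are our own assumptions, so the exact percentages will differ from the table, but the disparity is unmistakable:

```python
import random
import statistics

random.seed(42)

N = 1_000_000        # collection size (assumed for illustration)
PREV = 0.01          # 1% prevalence -> 10,000 relevant documents
RECALL = 0.75        # the review actually achieved 75% recall
PRECISION = 0.83

R = int(N * PREV)                        # 10,000 relevant
retrieved_rel = int(R * RECALL)          # 7,500 found
retrieved_tot = round(retrieved_rel / PRECISION)
discard = N - retrieved_tot              # the elusion sample frame
elusion = (R - retrieved_rel) / discard  # true fraction relevant in discards

def binom(n, p):
    """Bernoulli-sum binomial draw (stdlib only; fine at these sizes)."""
    return sum(random.random() < p for _ in range(n))

def direct_method(sample=38_500):
    """Sample the collection until ~385 relevant documents are assessed;
    recall = fraction of those that the review retrieved."""
    rel = binom(sample, PREV)
    return binom(rel, RECALL) / rel

def e_recall(sample=500):
    """eRecall: 1 - (estimated missed) / (estimated total relevant),
    the ratio of two small random samples."""
    elusion_hat = binom(sample, elusion) / sample
    prev_hat = binom(sample, PREV) / sample
    if prev_hat == 0:
        return 0.0
    return 1 - (elusion_hat * discard) / (prev_hat * N)

direct = [direct_method() for _ in range(100)]
erec = [e_recall() for _ in range(100)]
print(f"direct method: sd {statistics.stdev(direct):.3f}")
print(f"eRecall:       sd {statistics.stdev(erec):.3f}")
```

Under these assumptions, the direct-method estimates should cluster tightly around the true 75%, while the eRecall estimates scatter widely—including impossible values below 0%—because a ratio of two small-sample estimates compounds the noise of both.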


Roitblat repeatedly puts words in our mouths to attack positions we do not hold in order to advance his position that one should employ OrcaTec’s software and accept—without any scientific evidence—an unsound estimate of its effectiveness. Ironically, one of the positions that Roitblat falsely attributes to us is that one should not measure anything. Yet, we have spent the better part of the last five years doing quantitative research—measuring—TAR methods.

The Future

We are convinced that sound quantitative evaluation is essential to inform the choice of tools and methods for TAR, to inform the determination of what is reasonable and proportionate, and to drive improvements in the state of the art. We hope that our studies so far—and our approach, as embodied in our TAR Evaluation Toolkit—will inspire others, as we have been inspired, to seek even more effective and more efficient approaches to TAR, and better methods to validate those approaches through scientific inquiry.

Our next steps will be to expand the range of datasets, learning algorithms, and protocols we investigate, as well as to investigate the impact of human factors, stopping criteria, and measures of success. We hope that information retrieval researchers, service providers, and consumers will join us in our quest, by using our toolkit, by allowing us to evaluate their efforts using our toolkit, or by conducting scientific studies of their own.

Should Lawyers Be Big Data Cops?

September 1, 2014

Many police departments are using big data analytics to predict where crime is likely to take place and prevent it. Should lawyers do the same to predict and stop illegal, non-criminal activities? This is not the job of police, but should it be the job of lawyers? We already have the technology to do this, but should we? Should lawyers be big data cops? Does anyone even want that?

Crime Prevention by Data Analytics is Already in Use by Many Police Departments

The NY Times reported on this back in 2011 when it was relatively new: Sending the Police Before There’s a Crime. The Times reported how the Santa Cruz, California, police were using data analysis to predict where burglaries and other crimes might take place and to deploy police officers accordingly:

The arrests were routine. Two women were taken into custody after they were discovered peering into cars in a downtown parking garage in Santa Cruz, Calif. One woman was found to have outstanding warrants; the other was carrying illegal drugs.

But the presence of the police officers in the garage that Friday afternoon in July was anything but ordinary: They were directed to the parking structure by a computer program that had predicted that car burglaries were especially likely there that day.

The Times reported that several cities were already using data analysis to try to systematically anticipate when and where crimes will occur, including the Chicago Police Department. Chicago created a predictive analytics unit back in 2010.

This trend is growing and precrime detection technologies are now used by many police departments around the world, as well as by the Department of Homeland Security, not to mention the NSA analytics of metadata. See, e.g., The Minority Report: Using Predictive Analytics to prevent the crime from happening in the first place! (IBM); In Hot Pursuit of Numbers to Ward Off Crime (NY Times); Police embracing tech that predicts crimes (CNN); U.S. Cities Relying on Precog Software to Predict Murder (Wired). The analytics are already pretty good at predicting places and times where cars will be stolen, houses robbed and people mugged.

Although these programs help improve efficient crime fighting, they are not without serious privacy and due process critics. Imagine the potential abuses if an evil Big Brother government was not only watching you, but could arrest you based on computer predictions of what you might do. Although no one is arresting people yet for what they might do as in the Minority Report, they are subjecting people to significantly increased scrutiny, even home visits. See, e.g., Professor Elizabeth Joh, Policing by Numbers: Big Data and the Fourth Amendment; Professor Brandon Garrett, Big Data and Due Process; The minority report: Chicago’s new police computer predicts crimes, but is it racist? (The Verge, 2014); Eric Holder Warns About America’s Disturbing Attempts at Precrime. Do we really want to give computers, and the people who operate them, that much power? Does the Constitution as now written even allow that?

Should Lawyers Detect and Stop Law Suits Before They Happen?

Should lawyers follow our police departments and use data analytics to predict and stop illegal, but non-criminal, activities? The police will not do it. It is beyond their jurisdiction. Their job is to fight crime, not torts, not breach of contract, nor the tens of thousands of other civil wrongs that people and corporations sue each other about every day. Should lawyers do it? Is that the next step for the plaintiff’s bar? Is that the next step for corporate defense lawyers? For corporate compliance lawyers? For the Civil Division of the Department of Justice? How serious is the potential loss in privacy and other rights to go that route? What other risks do we take in using our newfound predictive coding skills in this way?

There are millions of civil wrongs committed each year that are beyond the purview of the criminal justice system. Many of them cause disputes, and many of these disputes in turn lead to state and federal litigation. Evidence of these illegal activities is present in both public and private data. Should lawyers mine this data to look for civil wrongs? Should the civil justice system include prevention? Should lawyers not only bring and defend law suits, but also prevent them?

This is not the future we are talking about here. The necessary software and search skills already exist to do this. Lawyers with big data skills can already detect and prevent breach of contract, torts, and statutory violations, if they have access to the data. It is already possible for skilled lawyers to detect and stop these illegal activities before damages are caused, before disputes arise, before law suits are filed. Lawyers with artificial intelligence enhanced evidence search skills can already do this.

I have written about this several times before and even coined a word for this legal service. I call it “PreSuit.” It is a play on the term PreCrime from the Minority Report movie. I have built a website that provides an overview of how these services can be performed. Some lawyers have even begun rendering such services. But should they? Some lawyers, myself included, know how to use existing predictive coding software to mine data and make predictions as to where illegal activities are likely to take place. We know how to use this predictive technology to intervene to prevent such illegal activity. But should we?


Just because new technology empowers us to do new things does not mean we should. Perhaps we should refrain from becoming big data cops? We do not need the extra work. No one is clamoring for this new service. Should we build a new bomb just because we can?

Do we really want to empower an elite group of technology enhanced lawyers in this way? After all, society has gotten along just fine for centuries using traditional civil dispute resolution procedures. We have gotten along just fine by using a court system that imposes after-the-fact damages and injunctions to provide redress for civil wrongs. Should we really turn the civil justice system on its head by detecting the wrongs in advance and avoiding them?

Is it really in the best interest of society for lawyers to be big data cops? Or anyone else for that matter? Is it in the best interests of corporate world to have this kind of private police action? Is it in the best interest of lawyers? The public? What are the privacy and due process ramifications?

Some Preliminary Thoughts

I do not have any answers on this yet. It is too early in my own analysis to say for sure. These kinds of complex constitutional issues require a lot of thought and discussion. All sides should be heard. I would like to hear what others have to say about this before I start reaching any conclusions. I look forward to hearing your public and private comments. I do, however, have a few preliminary thoughts and predictions to start the discussion. Some are serious, some are just designed to be thought-provoking. You figure out which are which. If you quote me, please remember to include this disclaimer. None of these thoughts are yet firm convictions, nor certain predictions. I may change my mind on all of this as my understanding improves. As a better Ralph than I once said: “A foolish consistency is the hobgoblin of little minds.”

First of all, there is no current demand for this service by the people who need it the most, large corporations. They may never want this, even though such opposition is irrational. It would, after all, reduce litigation costs and make their company more profitable. I am not sure why, and do not think it is as simple as some would say, that they just want to hide their illegal activities. Let me tell you an experience from my 34 years as a litigator that may shed some light on this. This is an experience that I know is common with many litigators. It has to do with the relationship between lawyers and management in most large companies.

Occasionally during a case I would become aware of a business practice in my client corporation that should obviously be changed. Typically it was a business practice that created or at least contributed to the law suit I just defended. The practice was not blatantly illegal, but it was a grey area. The case had shown that it was stupid and should be changed, if for no other reason than to prevent another case like that from happening. Since I had just seen the train wreck in slow motion, and knew full well how much it had cost the company, mostly in my fees, I thought I would help the company to prevent it from happening again. I would make a recommendation as to what should be changed and why. Sometimes I would explain in detail how the change would have prevented the litigation I just finished. I would explain how a change in the business practice would save the company money.

I have done this several times as a litigator at other firms before going to my current firm where I only do e-discovery. Do you know what kind of reaction I got? Nothing. No response at all, except perhaps a bored, polite thanks. I doubt my lessons learned memos were even read. I was, after all, just an unknown, young partner in a Floriduh law firm. I was not pointing out an illegal practice, nor one that had to be changed to avoid illegal activities. I was just pointing out a very ill-advised practice. I have had occasions to point out illegal activities too, in fact this is a more frequent occurrence, and there the response is much different. I was not ignored. I was told this would be changed. Sometimes I was asked to assist in that change. But when it came to recommendations to change something not outright illegal, suggestions to improve business practices, the response was totally different. Crickets. Just crickets. And big yawns. When will lawyers learn their place?

A couple of times I talked to in-house counsel about this, and tried to enlist their support to get the legal, but stupid, business practice changed. They would usually agree with me, wholeheartedly, on the stupid part; after all, they had seen the train wreck too. But they were cynical. They would explain that no one in upper management would listen to them. I am speaking about large corporations, ones with big bureaucracies. It may be better in small companies. In large companies in-house would express frustration. They knew the law department had far less juice than most others in the company. (Only the poor records department, or compliance department, if there is one, typically gets less respect than legal.) Many other parts of a company actually generate revenue, or at least provide cool toys that management wants, such as IT. All Legal does is spend money and aggravate everyone. The department that usually has the most juice in a company is sales, and they are the ones with most of the questionable practices. They are focused on money-making, not abstractions like legal compliance and dispute avoidance. Bottom line, in my experience upper management is not interested in hearing the opinions of lawyers, especially outside counsel, on what they should do differently.

Based on this experience I do not think the idea of lawyers as analytic cops to prevent illegal activities will get much traction with upper management. They do not want a lawyer in the room. It would stifle their creativity, their independent management acumen. They see all lawyers as naysayers, deal breakers. Listen to lawyers and you’ll get paralysis by analysis. No, I do not see any welcome sign appearing for lawyers as big data cops, even if you present chart after chart as to how much data, time and frustration you will save the company in litigation avoidance. Of course, I never was much of a salesman. I’m just a lawyer who follows the hacker way of management (an iterative, pragmatic, action-based approach, which is the polar opposite of paralysis by analysis). So maybe some vendor salesmen out there will be able to sell the PreSuit concept, but not lawyers, at least not me.


I have tried all year. I have talked about this idea at several events. I have written about it, and created the PreSuit website with details. Do you know how many companies have responded? How many have expressed at least some interest in the possibility of reducing litigation costs by data analytics? Build it and they will come, they say. Not in my experience. I’ve built it and no one has come. There has been no response at all. Weeds are starting to grow on this field of dreams. Oh well. I’m a golfer. I’m used to disappointment.

This is probably just as well because reduction of litigation is not really in the best interests of the legal profession. After all, most law firms make most of their money in litigation. Lawyers should refuse to be big data cops and should let the CEOs carry on in ignorant bliss. Let them continue to function with eyes closed and spawn expensive litigation for corporate counsel to defend and for plaintiff’s counsel to get rich on. The litigation system works fine for the lawyers, and for the courts and judges too. Why muck up a big money-generating machine by avoiding the disputes that keep the whole thing running? Especially when no one wants that.

All of the established powers want to leave things just the way they are. Can you imagine the devastating economic impact a fifty percent reduction in litigation would cause on the legal system? On lawyers everywhere? Both plaintiff’s and defendant’s bars? Hundreds of thousands of lawyers and support staff would be out of work. No. This will be ignored, and if not ignored, attacked as radical, new, unproven, and perhaps most effective of all, as dangerous to privacy rights and due process. The privacy anti-big-brother groups will, for once, join forces with corporate America. Protect the workers, they will say. Unions everywhere will oppose PreSuit. Labor and management will finally have an issue they can agree upon. Only a few high-tech lawyers will oppose them, and they are way outnumbered, especially in the legal profession.

No, I predict this will never be adopted voluntarily, nor will it ever be required by legislation. The politicians of today do not lead, they follow. The only thing I see now that will cause people to want to avoid litigation, to use data analytics to detect and prevent disputes, is the collapse, or near-collapse, of our current system of civil litigation. Lawyers as big data cops will only come out of desperation. This might happen sooner than you think.

There is another way of course. True leadership could come from the new ranks of corporate America. They could see the enlightened self-interest of PreSuit litigation avoidance. They could understand the value of data analytics and the value of compliance. This may not come from our current generation of old-school leaders; they barely know what data analytics is anyway. But maybe it will come from the next wave of leaders. There is always hope that the necessary changes will be made out of intelligence, not crises. If history is any guide, this is unlikely, but not impossible.

On the other hand, maybe this is benevolent neglect. Maybe the refusal to adopt these new technologies is for the best. Maybe the power to predict civil wrongs would be abused by a small technical elite of e-discovery lawyer cops. Maybe it would go to their head, and before you know it, their heavy hands would descend to rob all employees of their last fragments of privacy. Maybe innovation would be stifled by the fear that new creative actions might be seen as a precursor to illegal activities. This chilling effect could cause everyone to just play it safe.

The next generation of Steve Jobs would never arise in conditions such as this. They would instead come from the last remaining countries that still maintained a heavy litigation load. They would arise in cultures that still allow the workforce to do as it damn well pleases, and just let the courts sort it all out later. Legal smegal, just get the job done. Maybe expensive chaos is the best incubator we have for creative genius? Maybe it is best to keep lawyers out of the boardroom? Much less give them a badge and let them police anything. It is better to keep data analytics in Sales where it belongs. Let us know what our customers are doing and thinking, but turn a blind eye to ourselves. That way we can do what we want.


I always end my blogs with a conclusion. But not this time. I have no conclusions yet. This could go either way. This game is too close to call. We are still in the early innings yet. Who knows? A few star CEOs may come out of the cornfields yet. Then we could find out fast whether PreSuit is a good thing. A few test cases should flush out the facts, good and bad.

Browning Marean: The Life and Death of a Great Lawyer

August 24, 2014

Browning Marean passed away on August 22, 2014. His death is a tremendous loss to the e-discovery community. For details on his life, career, and final days of struggle, I suggest you read the blog by his longtime close friend, Craig Ball, and also see Browning Marean: A Remembrance by Tom O’Connor. I grieve his passing and feel compelled to share some personal insights, if nothing else to help me cope with this loss. Browning was always so encouraging and helpful. Such a good friend and colleague. Everyone who knew him understands what I mean. To those who did not have that chance, let me share a few tales of this wise and funny, yet very serious man.

I first met Browning Marean at a Kroll Ontrack sponsored CLE in Atlanta in 2006. Craig Ball and Browning were traveling the country that year spreading the word of electronic discovery at Kroll events. What a powerful and persuasive team they were. I loved Marean’s wit and humor immediately and, like so many others that Browning met, we quickly became friends. I am happy to say that my son, Adam, also had the chance to become Browning’s friend. He was always so encouraging of the next generation, and of all newcomers to e-discovery of any age who were willing to spend the time to seriously study this area of law.

Browning, by his words, his personal encouragement, and his example, helped inspire me to put aside my litigation practice in 2006 to devote myself full time to e-discovery. I wanted to be like Marean. He was so smart, yet so deft of touch, so full of wit and charm. He was successful, yet unlike others who have enjoyed the pinnacle of the legal profession, he was not full of himself. He was full of fun and life. Above all, Browning loved to laugh. That is how I will remember him. Browning made me, and everyone else laugh too. What a gift he had.

Browning taught me, and many others, so much about so many things. He not only taught me about the finer points of e-discovery, but also how to handle senior status by specializing in e-discovery in a big firm. He showed it could be done, that the firm would benefit immensely, and that you could have a good time closing out your career that way.

Browning Marean also taught me the ins and outs of what he called the rubber chicken circuit of CLEs. I had a chance to work with him on several occasions. I saw how masterfully he handled every event he chaired, and how he kept everything on time and everyone in their place. He ran a tight ship, an expression that Browning, an old Navy man, would approve of. The man with the funny name kept the ship sailing on time and on course, but he did so with a light touch, and a smile, that I have never seen anyone else equal. Browning Marean was a great role model, with shoes too big for anyone to fill.

Browning was the best ambassador of electronic discovery that I have ever met. His dedication to e-discovery law and teaching is unrivaled by anyone. Browning travelled the globe for over ten years teaching tens of thousands of lawyers, judges, and technologists. He personally touched thousands. Browning Marean was truly one of the great men of the law in the Twenty-First Century.

In closing, I offer a blog I did five years ago featuring Browning and his good friend, Tom O’Connor. Browning was always so encouraging of my blog, and especially liked it when I dared to be controversial, attack the powerful, and still do so with a bit of humor and satire. That is the kind of thing that Browning liked. He was a charming rascal at heart, proponent of the little man, and tireless champion of the cause of justice.


 The New Tonight Show Starring Browning Marean and Tom O’Connor

June 6, 2009

The e-discovery version of the Tonight Show with dueling hosts Browning Marean and Thomas J. O’Connor is the visual theme for my blog this week. Browning, the head of DLA Piper’s e-discovery practice group, plays the role of Jay Leno, and, of course, O’Connor, the Director of the Legal Electronic Document Institute, plays Conan O’Brien. My role is the stammering standby guest and sometimes also the Office Space employee, Milton Waddams.

The Tonight Show starring Browning and Tom

Yes, this means I have submitted to yet another e-discovery talk show interview. Who knew there were so many? Browning and Tom are well known experts in keeping e-discovery entertaining. Since they have both run out of things to say on their own, they now go around interviewing all of the e-discovery nerds in the known universe. Having by now already talked to all of the really important people, or been turned down (the vast majority), they finally got around to me, something of a low point. But not to worry, next week they have a really good show lined up – an interview with Laura Zubulake’s cat!

They call these audio webcasts the e-Discovery Zone, no doubt because their guests feel like they’ve wandered into the Twilight Zone. This questionable enterprise is sponsored by Techlaw Solutions, although I have no idea why. We had a good time talking, so this interview went on for more than an hour. If you are a real glutton for punishment, go here to listen to the full audio, streamed or downloaded. Alternatively, read on for what I think are the best parts, which, of course, means the drastically shortened and edited parts that make me look good. Also, as I have done before in such interviews, most famously in the brutal Mark Mack interview, I once again share a few of my <Secret Thoughts> to try to make the reading slightly less boring.

The Ratio of People to Cake is Too Big

O’CONNOR: Hello and welcome to the latest edition of the E-Discovery Zone. This is Tom O’Connor along with Browning Marean of DLA Piper and a very special guest today, Ralph Losey of Akerman Senterfitt. Many of you probably read Ralph’s blog or have seen him speaking so we’re very, very pleased to have him today. Welcome, Ralph.

[Photo: Ralph as Milton from the great movie Office Space, who never seems to get his piece of cake. Of course, he who laughs last, laughs best.]

LOSEY: My pleasure to be here. You two are my favorite guys, so this should be a lot of fun. <Secret Thought: I was promised a piece of cake, and “the last time I didn’t receive a piece. And I was told…”>

O’CONNOR: Great. Well, I’d like to start talking right away about your blog. As we were recounting offline before we started here, I just came out of a hearing that involved someone who didn’t seem to have a great grasp of obligations under the federal Rules, specifically regarding litigation holds in preserving data. And Ralph, you just had a post that you wrote earlier this week specifically about the ethical obligation to know about e-discovery. So I’d really like to have you talk about that a little bit because it’s just so fresh in my mind.

LOSEY: Some people think maybe I’m exaggerating when I say that the problem of competence is reaching a near ethical crisis level. But what you were describing earlier Tom, which we can’t really talk about because it’s a pending matter, just confirms it. Those of us who are in the field dealing with these issues every day know all too well that there’s just a huge lack of information and training by many of the attorneys that are specializing in trial work and dispute resolution. They are still pretending like they’re living in a paper world and they’re not getting any training in law school on this except for a very few schools, maybe 5%. I’m proud to be part of that 5% that is teaching it in law school. <Go Gators!> But in most schools they’re not getting the training. If they’re learning anything about e-discovery, it’s from their own law firms.  Most law firms are not the size of Browning’s and mine and they don’t really have the resources or training to teach it. <Blind leading the blind.> So, it’s a matter of lawyers learning on the job or maybe by catching a one-hour CLE.

The bottom line is, the training is insufficient. For this reason attorneys are not doing a competent job and not fulfilling that very important dictate – an ethical dictate – to perform their job with reasonable competence. They are also failing in their duty to be diligent because they really don’t know what to do to be diligent. I think it’s become, at this point, a serious problem. …

[Photo: Browning’s head on the body of Jay Leno – bad deal for Jay, but at least his chin is smaller.]

MAREAN: You know, Ralph, I hearken back to Legal Tech and Judge Facciola’s extraordinary keynote on the third day, and was struck by the passion that he demonstrated in that keynote, again, talking about attorney competence. I was thinking somewhat about, you know, how do we effect change in the legal profession? And sometimes it’s a carrot and sometimes it’s a stick. What I’m wondering is, whether or not malpractice insurers are going to perhaps use a stick of premium and really start to do underwriting due diligence on a law firm’s ability to do electronic discovery. It seems to me that there is going to be the possibility of malpractice suits arising where outside counsel or inside counsel – but again from our standpoint, outside counsel – are held liable. I don’t know whether that is something that’s going to change the profession or not. …

No Shortcuts To Competence

O’CONNOR: How do we solve this? I mean, I’m always astonished. It seems like there is just a plethora of webinars and conferences and articles. It astonishes me that people don’t know some of the basics about e-discovery because it seems like there’s educational opportunities everywhere. So how do we solve this problem?

LOSEY: Well I’ve been thinking about this a lot and talking about it. I know I haven’t been talking about it for as long as you two. <Who has? I mean you two guys are really old!> But I’ve been writing about it quite a bit lately and the answer is education, but a different kind of education. We’ve got to do things differently than we’ve been doing it because it’s not working. <The flood of technology and information is moving far faster than the current lame CLEs being offered.> There’s the ever-increasing volume of information and ever-increasing complexity of the systems and information, so that, you know, a year ago we weren’t worried about Twitter – that wasn’t part of the scene. Now it’s taken off. Two years ago you really weren’t worried so much about social networking. Now that’s really exploding, such that every housewife pretty much has it and every employee has it. The systems just keep getting more and more complicated – mergers and acquisitions. Your average company now is just a patchwork of IT systems that are hard for the specialist to understand, much less a general attorney to come in and understand. So, I think we’re losing the battle.

We’ve got to start thinking out of the box <Oh brother, did I really just say that?> and come up with different solutions to what we have been offering so far, the CLE for an hour, or even the day-long CLE. I think Georgetown is an example of taking the lead to go into the one week intensive program, which I had the opportunity to participate in. I think it was just February of this year when it was eight hours a day, every day, and then the fifth day I had the opportunity to be a tester. So I spent all day long – me and Ann Kershaw and Sherry Harris – we divided up into three groups, and we tested these folks to see how much they learned. They did pretty good, really.

[Photo: Conan O’Brien with Tom O’Connor’s face.]

O’CONNOR: Not to, you know, cast any aspersions on what they were doing, but at 50 people at a time, we’re going to need 10 of those a year, right?

LOSEY: Well, yeah. <There goes your invitation to Georgetown.>

O’CONNOR: The law schools seem to be the answer.

LOSEY: It wasn’t cheap and it was limited. It was deliberately capped at I think 40 or 50, as are the classes that Bill Hamilton and I teach at the University of Florida. We capped ‘em at 40 and we turned away students, and it filled up within an hour because the students get it. They see an opportunity here to use their skills, and in today’s marketplace, any edge you can get to help you to get a job, or get ahead if you have a job, is something they’re looking for.

[Photo: Milton Waddams in his basement office in Office Space: “Mr. Lumbergh told me to talk to payroll and then payroll told me to talk to Mr. Lumbergh and I still haven’t received my paycheck and he took my stapler and he never brought it back and then they moved my desk to storage room B and there was garbage on it…”]

Maybe a fringe benefit of this recession/depression we’re suffering through is that people are now going to be more motivated to take the time to really dig deep and start learning this. Frankly, some students don’t have a job, some attorneys are out of work. <We are all turning into Milton Waddams, the character in the Office Space movie, fearing another downsize move to the office in the basement.> They’re going to have the time to do it, time that they might not have had in a better economy.

Testing Competence

O’CONNOR: Browning, you mentioned Judge Facciola, I know during that – I attended that same presentation, and he made a not-so-thinly-veiled suggestion that perhaps we need some sort of testing requirement. He seemed to, as I recall, say that he didn’t feel the law schools were really picking up the slack the way they should.

MAREAN: Well, you know, I wonder – Ralph and I had the privilege of attending the Second International Litigation Support Conference, I think, in Washington a week or so ago, and one of the things that struck me there is that law firms’ litigation support groups, to the extent they exist, are in fact the source of the most practical knowledge for dealing with e-discovery issues. They get it, and can be of tremendous help in guiding the attorneys, if the attorneys will turn to them in a timely fashion, to deal with such things that lawyers aren’t very well equipped to deal with – such as form or forms of production. …

O’CONNOR: And so given all that – and I guess this comes back to the point I raised earlier – we seem to have a number of resources out there. Why do we still have, as Ralph said, this critical mass of folks who are ignorant? And as I recall Judge Facciola saying, it is not because they’re not intelligent, it’s because they’re – I believe the word he used was – obstinate. They’re simply not availing themselves of these resources.

LOSEY: A lot of it has to do with who the law attracts, what kind of person is screened in by the LSATs, the admissions. We’re not attracting people that are oriented to computers. Math and science majors typically don’t go to law school. They go to med school or they go to engineering school. That’s part of it. Law schools need to change their admissions and, number one, they need to start teaching it. I mean, University of Florida, Georgetown – these are rare exceptions. Even Georgetown only teaches it once a year. University of Florida, at least, we’re teaching it every semester now.

MAREAN: Ralph, tell us a little bit – I know you and Bill Hamilton are involved down in Florida – tell us a little bit about that curriculum, what kind of a curriculum have you put together and the like.

LOSEY: … There is competency testing in law school. That’s the beauty of it. The final exam I gave them was pretty darn hard. As a matter of fact, it was only slightly simpler than the exam I dreamed up for the Georgetown experts who were, you know, some of them 10, 20, 30 year lawyers. It was basically the same test, a little less complicated, and they had just three hours to write the answer out. We tested the full EDRM model, one through nine. They actually started on two, identification, preservation, collection – those first three, and then our last question was on what we’ve learned from this fact scenario. How would you recommend that the IT and Information Management Systems be redesigned? These were challenging questions, that I am sure 95% of the litigation attorneys in America wouldn’t know how to answer correctly. I can tell you that all of the student answers were good. Some of the answers were fantastic! <The Book Award for best student this semester was awarded to two students: Jason Pill and Johann Van Lierop. Congratulations!> …

These are all smart people. They respond to training. But this is intensive training – I estimate it would take 200-250 hours over our 4½ month semester of study and work to get to this point. Two hundred and fifty hours, okay, in a four month period. This is not happening in the CLE programs. We’re not getting that kind of commitment and intensity, and so we’re getting superficial learning. And to be honest – because, you know, I’m not connected with any vendor so I can be a little controversial – most of the CLEs I see that are vendor-sponsored, they’re “scare you into hiring us” type CLEs. <The “pay to play” type CLEs are even worse. No bona fide subject matter expert ever pays to teach. The ones you see at these events are mostly just salesmen trying to hustle in-house counsel. They know enough to be dangerous and make a boring speech.> Lawyers are getting sick of that. Lawyers tell me, “I’m tired of these e-discovery CLEs. I don’t learn anything practical. I just learn that I don’t know what the hell I’m doing and I should be scared.” Of course, what they would like is a magic pill to easily learn all of the practical stuff. That’s the first problem. There is no shortcut. It takes time and effort and practice and more practice.

MAREAN: Well your comment reminds me of Malcolm Gladwell’s Outliers book which I’ve picked up and I’ve only read some reviews of, but where he talks about what does it take for some people to be successful. <He’s talking about my article.> I think he was using Joe Flom at the Skadden Arps firm that, you know, how many hours does it take to become an expert? And I do agree, I think that we absolutely need to be spending a lot more time. But Ralph, to your earlier point, I think this is a wonderful opportunity for somebody coming into a firm to really spend the time and become the go-to person in this area. Talk about making yourself valuable to the firm even at a young age, to me it’s one of the most obvious routes open today.

LOSEY: It really is. It’s a great opportunity. I’m finding that the young people get it much quicker. They already know all the basics that you and I know because we’ve been doing it for years, but that a lot of people our age don’t. So, it’s a quicker learn for them. It still takes 200-250 hours to get the basics. Malcolm Gladwell cited the scientific studies that weren’t about getting the basics, they were about attaining a level of mastery where you really could teach this stuff. They found it takes 10,000 hours. That’s five years, maybe 10 years, depending on how much time you devote to it. So it takes a lot of time. How many masters of discovery are there really that can teach this stuff? And so, that’s the problem. We’ve got to – everybody’s got to raise their game up. Those of us that know something need to be sure that we’re doing legitimate education and we’re really helping the rest of The Bar, our brothers and sisters that are struggling with this, to really understand it. That’s the solution. It’s not, well, you need to understand enough to know you’ll never learn how to do this crap so you better hire us, which unfortunately is a lot of what goes on. We don’t do that, but we’ve all seen it done. …

Judges Are Smart

O’CONNOR: That does make an interesting point, though, Browning, which is, if we don’t think the attorneys are being educated, are the judges being educated?

MAREAN: Well I pick up on Ralph’s point and Ralph, I think that the Bray & Gillespie case was out of the federal court in Orlando, and I was struck by the thoughtfulness of the magistrate judge in that opinion which, by the way, really does loop back to issues of competence and sort of getting with the program of discovery. I was really not familiar with that magistrate judge before, but I thought she put out a very thoughtful opinion.

LOSEY: Well yes, I know Judge Karla Spaulding pretty well. I have been practicing here my whole career and Judge Spaulding has been here a long time. She is not a computer hobbyist like me, she’s not a techie, but she’s a smart person – all the federal judges are smart. And she is very diligent. She just dug in there and worked very hard, had two evidentiary hearings to get to the bottom of things when she saw the smokescreens and the lawyers saying different things. She really worked hard. And it shows that if you’re diligent and put in an enormous – I have no idea how many hours that she and her team of law clerks put into it, but I’m sure it was very substantial. Not many judges will take the time to do that.

We can’t expect to find hero judges like that willing to do it all the time. But it does show that people of above average intelligence, which all of our federal judges are, can sort through it and figure it out. They can hear expert testimony on both sides and figure it out. But the truth is, most judges don’t have the time necessary to dig into it like that, or maybe they just don’t have the inclination to do it, in which case I think the solution is a special master. I really think that’s part of the answer. If the parties are in a difficult situation and the magistrate may not be willing or able to take the time to do that, or it may take them a year to do that, then the parties ought to consider agreeing upon a special master that has particular training and expertise in the area of e-discovery and come up with a quicker, possibly more just ruling for them. …

Crystal Ball Gazing Five Years Into The Future

[Photo: Escher’s famous etching of a man gazing into a crystal ball, ruined by putting Losey’s face into it.]

MAREAN: Well Ralph, let’s assume that we’re now five years hence, it’s May 21st, 2014, what do you see will have changed in the next five years?

LOSEY: I think I’ve lost some weight and am in better condition.

MAREAN: A consummation devoutly to be wished.

LOSEY: Yeah but I’m an optimist. No, I think what’s going to happen is, we’re going to see some big players come in. Somebody’s going to step up to the plate and we’re going to get some real intensive training. We’re going to get competency certification, and I doubt it will be the local Bars because they just take too long. Most state Bars do have certification programs in different areas, but it’s going to take, I think, longer than five years. … I don’t think we’ll be there yet with certification from the state Bars. What do you think, Marean?

MAREAN: I think they’re going to view it as too narrow and not really pick up on Judge Scheindlin’s comment that it’s not just “e-discovery,” it’s now just “discovery.”

LOSEY: I think Judge Scheindlin’s remark has been fairly criticized. I don’t know, maybe it was you Tom who pointed out that it is not “discovery” because there are still depositions, there are interrogatories, you know, there’s some stuff. But it is, I think, document discovery or, better said, discovery of writings, which has always been critical to the outcome of most civil cases. What were the parties writing? E-discovery is really discovery of writings because there are so few paper writings – original paper writings – nowadays that you might as well discount them as of marginal importance. The discovery of writings today is what e-discovery is all about. Any lawyer who has a case where what the people wrote is important needs to know e-discovery.

They need to know on a couple of levels: there’s a base competency level, that they need to learn to handle the small cases; and, then there’s more advanced for the bigger and more complicated cases. I think we’re going to have to see training, better training and certification on these two levels. I don’t think it will come from the state Bars, and I also don’t think the law schools will move that quick, although we’re going to see some leaders and schools like Georgetown that are figuring out that this is very good. We’re going to get more schools, but will Harvard be offering e-discovery in five years? Maybe that will be the time they first offer it, I guess, Harvard and Yale, and the other top 25. It will probably take them that long before they hold their noses and deal with something so practical and narrow, but that’s the attitude you’re getting from academia. So the solution is not from the Bar, it’s not from academia, I don’t think, with all due respect to Kroll, that their two-day certification program is really what we’re talking about here either, so I’m not sure a vendor is going to do it, but I think somebody needs to. …

So that’s the training part of it. The other thing we’re going to see in five years, though, to change the subject a little bit, is we’re going to see improved software. I don’t think we’ll have the magic button yet, but I’ve been talking to vendors a lot, I know you guys talk to vendors a lot too, and I said, you know, I want to see the random button on your software henceforth. It’s not there yet. But in five years every program is going to have a random button where you do random sampling. Random sampling is going to be commonplace instead of “Wow, that’s such an exotic idea,” which, believe it or not, is the reaction today of most vendors. In five years it’s going to be common. Also, in five years it’s going to be the exception, rather than the rule, to negotiate key words in the blind and then run them. I think in five years we can hope the Bar will get away from that and we will have testing and sampling as part of everybody’s normal way of search. …
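The “random button” I keep asking vendors for is no exotic technology; simple random sampling of a review set is a few lines of code in any language. Here is a minimal sketch in Python of what such a feature does under the hood (the function name, seed, and document IDs are purely illustrative, not from any real product):

```python
import random

def draw_sample(doc_ids, sample_size, seed=42):
    """Draw a simple random sample of document IDs for quality-control review.

    A fixed seed makes the draw reproducible, so the same sample can be
    re-generated later to verify how it was selected.
    """
    rng = random.Random(seed)
    return rng.sample(list(doc_ids), k=min(sample_size, len(doc_ids)))

# Sample 50 documents out of a hypothetical 10,000-document review set.
review_set = [f"DOC-{i:05d}" for i in range(10_000)]
sample = draw_sample(review_set, 50)
assert len(sample) == 50
```

The point of the fixed seed is transparency: rerunning the draw with the same inputs yields the same sample, which is exactly the kind of testable, verifiable search practice the Bar needs to move toward.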

O’CONNOR: Right. Well we’re starting to run out of time here. Any last thoughts or topics that either of you would like to cover?

LOSEY: There’s one thing we haven’t mentioned and it goes to Browning’s excellent five-year question, and that is, I’m hoping that in five years the Sedona Cooperation Proclamation is going to be not only widely known, but followed. Again, it’s going to come out of money concerns, cost driven; the clients are going to stop paying lawyers just to jerk around with hide the ball. They’re only going to pay them to argue over the merits of the case and argue over the legal implication of the facts, and not argue over whether they should get the facts or not. I think we’re going to see, out of necessity, lawyers being more cooperative in the area of discovery, including e-discovery. So this may be a hope more than a prediction but, you know, I’ll go out on a limb. It’s a prediction. It’s not just a hope because we cannot afford business as usual in discovery.

MAREAN: Well absolutely. I’m into that, and I think getting attorneys to read Rule 26(g) is key; most state courts have similar rules about what it means when we sign our name to a pleading and what that certification carries with it. I think putting teeth into 26(g) – the tool is there. The interesting thing will be whether or not the judiciary decides to give it the emphasis that I think it needs.

LOSEY: You’re right. We’re full circle back to the ethics, which is so important. It’s competence and also cooperation – just following the rules. Federal Rule of Civil Procedure number one: the just, speedy, and inexpensive determination of every action. We’ve got to get back to that, otherwise we’re going to have people fleeing the public justice system into the private system of arbitration. Judge Facciola is very concerned about that. I am too. I like public adjudication. I would rather not see everything go into private arbitration, although there’s a place for that, and that means we’ve got to have discovery of all kinds be affordable.

O’CONNOR: Right. Well as always, Ralph, it’s great speaking with you and hearing your thoughts. Once again, I want to recommend to everybody that they take a look at Ralph’s blog, the e-Discovery Team …

LOSEY: I hate to interrupt somebody endorsing me <Boy am I stupid or what?>, but while you’re at it, check out the HASH article because I know that’s something Tom and Browning – the three of us are really big advocates of, using the HASH algorithm instead of Bates numbers. We can’t go through an interview together, guys, without at least saying the HASH word once.

O’CONNOR: Stop using your Bates stamper. Browning, do you still have the old Bates stamp machine in your desk?

MAREAN: I do indeed, and people will come in and say, “What’s that?”
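Since we could not get through the interview without saying the HASH word, here is the idea in a few lines of Python. A hash algorithm turns a document’s content into a unique fingerprint, so the identifier authenticates the document as well as labels it; a sequential Bates stamp does neither. This is just a sketch of the concept (the function name and sample messages are illustrative, not from any real production):

```python
import hashlib

def content_identifier(data: bytes, algorithm: str = "sha1") -> str:
    """Return a hex digest of a document's content.

    Unlike a Bates number, which is just a sequential stamp, the digest
    is derived from the content itself: any alteration of the document,
    however small, produces a completely different identifier.
    """
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

# Identical content always yields the identical identifier...
email_v1 = b"From: sales\nTo: legal\nSubject: Q3 numbers"
assert content_identifier(email_v1) == content_identifier(email_v1)

# ...while a one-byte edit yields a different one, exposing any tampering.
email_v2 = b"From: sales\nTo: legal\nSubject: Q4 numbers"
assert content_identifier(email_v1) != content_identifier(email_v2)
```

That self-authenticating property is the whole argument for hash values over the old Bates stamper.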

