Robophobia: Great New Law Review Article – Part 2

May 26, 2022
Professor Andrew Woods

This article is Part Two of my review of Robophobia by Professor Andrew Woods. See here for Part 1.

I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Footnotes omitted.

Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.

Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective.  In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .

In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost. 

Woods, Id. at pgs. 55-56

Bias Against AI in Electronic Discovery

Electronic discovery is a good example of our regular preference for worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues.

  • Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
  • Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
  • Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”) Judge Andrew Peck.
  • Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
  • Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
  • David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
  • Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
  • Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015). Judge Andrew Peck.
  • See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
  • Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
  • Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
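Effectiveness claims like those in the Grossman & Cormack study are stated in terms of recall and precision, the standard information-retrieval measures. A minimal sketch of what those measures are, using hypothetical review counts (not figures from any of the cited studies):

```python
def recall_precision(true_positives, false_negatives, false_positives):
    """Standard information-retrieval effectiveness measures.

    recall    = fraction of all relevant documents actually found
    precision = fraction of retrieved documents that are relevant
    """
    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    return recall, precision

# Hypothetical collection containing 1,000 relevant documents:
# a keyword-only review finds 200 of them plus 300 false hits,
# while a TAR review finds 750 of them plus 250 false hits.
keyword = recall_precision(200, 800, 300)  # (0.2, 0.4)
tar = recall_precision(750, 250, 250)      # (0.75, 0.75)
```

The debate the cited authorities address is, in large part, about which method scores better on these two numbers at what cost.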

Robophobia Article Is A First

Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front and I believe it. The Article catalogs different ways that we now misjudge poor algorithms. The evidence of our robophobia is overwhelming, but, before Professor Woods's work, it had all been in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.

His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination to, in some situations, over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”

Part II of the Article – Types of Robophobia

Professor Woods identifies five different types of robophobia.

  • Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
  • Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
  • Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
  • Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy looking human?
  • Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.

Part III – Explaining Robophobia

Professor Woods considers seven different explanations for robophobia.

  • Fear of the Unknown
  • Transparency Concerns
  • Loss of Control
  • Job Anxiety
  • Disgust
  • Gambling for Perfect Decisions
  • Overconfidence in Human Decisions

I’m limiting my review here, since the explanations for most of these should be obvious by now and I want to limit the length of my blog. But the disgust explanation was not one I expected and a short quote by Andrew Woods might be helpful, along with the robot photo I added.

Uncannily Creepy Robot

[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.

For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself. Uncanny Valley by Popular Science 2008 (old, but pretty disgusting). Here’s a more recent and detailed one, pretty good, by a popular twenty-something with pink hair. Why is this image creepy? by TUV 2022.

Parts IV and V – The Cases For and Against Robophobia

Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before, why robophobia is bad for us. This is probably the heart of the article and I suggest you read this part for sure.

Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:

Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.

In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.

We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.

Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]

The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods's policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology”, not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.

Cautionary Tale from Brooklyn: Search Terms ‘Designed To Fail’

October 20, 2019

Every lawyer who thinks e-discovery is not important, that you can just delegate it to a vendor, should read Abbott Laboratories, et al. v. Adelphia Supply USA, et al., No. 15 CV 5826 (CBA) (LB) (E.D.N.Y. May 2, 2019). This opinion in a trademark case in Brooklyn District Court emphasizes, once again, that e-discovery can be outcome-determinative. If you mess it up, you can doom your case. If a lawyer wants to litigate today, they either have to spend the substantial time it takes to learn the many intricacies of e-discovery, or associate with a specialist who does. The Abbott Labs case shows how easily a lawsuit can be won or lost on e-discovery alone. Here the numbers did not add up, key custodians were omitted and guessed keywords were used, keywords so bad that opposing counsel called them designed to fail. The defendants reacted by firing their lawyers and blaming everything on them, but the court did not buy it. Instead, discovery fraud was found and judgment was entered for the plaintiff.

Magistrate Judge Lois Bloom begins the Opinion by noting that the plaintiff’s motion for case ending sanctions “… presents a cautionary tale about how not to conduct discovery in federal court.” The issues started when defendant made its first electronic document production. The Electronically Stored Information was all produced in paper, as Judge Bloom explained “in hard copy, scanning them all together, and producing them as a single, 1941-page PDF file.” Opinion pg. 3. This is not what the plaintiff Abbott Labs wanted. After Abbott sought relief from the court the defendants on March 24, 2017 were ordered to “produce an electronic copy of the 2014 emails (1,941 pages)” including metadata. Defendant then “electronically produced 4,074 pages of responsive documents on April 5, 2017.” Note how the page count went from 1,941 to 4,074. There was no explanation of this page count discrepancy, the first of many, but the evidence helped Abbott justify a new product counterfeiting action (Abbott II) where the court ordered a seizure of defendant’s email server. That’s where the fun started. As Judge Bloom put it:

Once plaintiffs had seized H&H’s email server, plaintiffs had the proverbial smoking gun and raised its concerns anew that defendants had failed to comply with the Court’s Order to produce responsive documents in the instant action (hereinafter “Abbott I”). On July 12, 2017, the Court ordered the H&H defendants to “re-run the document search outlined in the Court’s January 17 and January 21 Orders,” “produce the documents from the re-run search to Abbott,” and to produce “an affidavit of someone with personal knowledge” regarding alleged technical errors that affected the production.³ Pursuant to the Court’s July 12, 2017 Order to re-run the search, The H&H defendants produced 3,569 responsive documents.

Opinion pg. 4 (citations to record omitted).

Too Late For Vendor Help and a Search Strategy Designed to Fail

After the seizure order in Abbott II, and after Abbott Labs again raised issues regarding defendants’ original production, Judge Bloom ordered the defendants to re-run the original search. Defendants then retained the services of an outside vendor, Transperfect, to re-run the original search for them. In supposed compliance with that order, the defendants, aka H&H, then produced 3,569 documents. Id. at 8. Defendants also filed an affidavit by Joseph Pochron, Director in the Forensic Technology and Consulting Division at Transperfect (“Pochron Decl.”) to try to help their case. It did not work. According to Judge Bloom the Pochron Decl. states:

… that H&H utilized an email archiving system called Barracuda and that there are two types of Barracuda accounts, Administrator and Auditor. Pochron Decl. ¶ 13. Pochron’s declaration states that the H&H employee who ran the original search, Andrew Sweet, H&H’s general manager, used the Auditor account to run the original search (“Sweet search”). Id. at ¶ 19. When Mr. Pochron replicated the Sweet search using the Auditor account, he obtained 1,540 responsive emails. Id. at ¶ 22. When Mr. Pochron replicated the Sweet search using the Administrator account, he obtained 1,737 responsive emails. Id. Thus, Mr. Pochron attests that 197 messages were not viewable to Mr. Sweet when the original production was made. Id. Plaintiffs state that they have excluded those 197 messages, deemed technical errors, from their instant motion for sanctions. Plaintiffs’ Memorandum of Law at 9; Waters Decl. ¶ 8. However, even when those 197 messages are excluded, defendants’ numbers do not add up. In fact, H&H has repeatedly given plaintiffs and the Court different numbers that do not add up.

Moreover, plaintiffs argue that the H&H defendants purposely used search terms designed to fail, such as “International” and “FreeStyle,” whereas H&H’s internal systems used item numbers and other abbreviations such as “INT” and “INTE” for International and “FRL” and “FSL” for FreeStyle. Plaintiff’s Memorandum of Law at 10–11. Plaintiffs posit that defendants purposely designed and ran the “extremely limited search” which they knew would fail to capture responsive documents …

Opinion pgs. 8-9 (emphasis by bold added). “Search terms designed to fail.” This is the first time I have ever seen such a phrase in a judicial opinion. Is purposefully stupid keyword search yet another bad faith litigation tactic by unscrupulous attorneys and litigants? Or is this just another example of dangerous incompetence? Judge Bloom was not buying the ‘big oops’ theory, especially considering the ever-changing numbers of relevant documents found. It looked to her, and me too, that this search strategy was intentionally designed to fail, that it was all a shell-game.

This is the wake-up call for all litigators, especially those who do not specialize in e-discovery. Your search strategy had better make sense. Search terms must be designed (and tested) to succeed, not fail! This is not just incompetence.
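The failure mode here is easy to demonstrate. A minimal sketch, using hypothetical documents (not records from the case), of why a literal search for the spelled-out terms misses records that use the abbreviations the opinion describes ("INT" and "INTE" for International, "FRL" and "FSL" for FreeStyle):

```python
documents = [
    "PO 4471: 500 cs FreeStyle test strips, International",  # spelled out
    "PO 4472: 500 cs FSL strips, INT",                       # abbreviated
    "PO 4473: restock FRL INTE warehouse B",                 # abbreviated
]

def keyword_hits(docs, terms):
    """Return the docs containing any of the literal search terms
    (case-insensitive substring match, as a naive keyword search would)."""
    return [d for d in docs if any(t.lower() in d.lower() for t in terms)]

# The spelled-out terms alone catch only 1 of the 3 responsive documents.
narrow = keyword_hits(documents, ["FreeStyle", "International"])

# Adding the abbreviations the client actually uses catches all 3.
tested = keyword_hits(documents, ["FreeStyle", "FSL", "FRL",
                                  "International", "INT"])
```

The fix is not clever code; it is asking the client how its records actually refer to the products, and then testing the terms against the data before certifying the production.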

The Thin Line Between Gross Negligence and Bad Faith

The e-discovery searches you run are important. The “mistakes” made here led to a default judgment. That is the way it is in federal court today. If you think otherwise, that e-discovery is not that important, that you can just hire a vendor and throw stupid keywords at it, then your head is dangerously stuck in the sand. Look around. There are many cases like Abbott Laboratories, et al. v. Adelphia Supply USA.

I say “mistakes” made here in quotes because it was obvious to Judge Bloom that these were not mistakes at all, this was fraud on the court.

E-Discovery is about evidence. About truth. You cannot play games. Either take it seriously and do it right, do it ethically, do it competently; or go home and get out. Retire already. Discovery gamesmanship and lawyer bumbling are no longer tolerated in federal court. The legal profession has no room for dinosaurs like that.

Abbott Labs responded the way they should, the way you should always expect in a situation like this:

Plaintiffs move for case ending sanctions under Federal Rule of Civil Procedure 37 and invoke the Court’s inherent power to hold defendants in default for perpetrating a fraud upon the Court. Plaintiffs move to strike the H&H defendants’ pleadings, to enter a default judgment against them, and for an order directing defendants to pay plaintiffs’ attorney’s fees and costs, for investigating and litigating defendants’ discovery fraud.


Rule 37(e) was revised in 2015 to make clear that gross negligence alone does not justify a case-ending sanction, that you must prove bad faith. This change should not provide the incompetent with much comfort. As this case shows, the difference between mistake and intent can be a very thin line. Do your numbers add up? Can you explain what you did and why you did it? Did you use good search terms? Did you search all of the key custodians? Or did you just take the ESI the client handed to you and say thank you very much? Did you look with a blind eye? Even if bad faith under Rule 37 is not proven, the court may still find the whole process stinks of fraud and use the court’s inherent powers to sanction misconduct.

As Judge Bloom went on to explain:

Under Rule 37, plaintiffs’ request for sanctions would be limited to my January 17, 2017 and January 27, 2017 Orders which directed defendants to produce documents as set forth therein. While sanctions under Rule 37 would be proper under these circumstances, defendants’ misconduct herein is more egregious and goes well beyond defendants’ failure to comply with the Court’s January 2017 discovery orders. . . .  Rather than viewing the H&H defendants’ failure to comply with the Court’s January 2017 Orders in isolation, plaintiffs’ motion is more properly considered in the context of the Court’s broader inherent power, because such power “extends to a full range of litigation abuses,” most importantly, to fraud upon the court.

Opinion pg. 5.

Judge Bloom went on to explain further the “fraud on the court” and defendant’s e-discovery conduct.

A fraud upon the court occurs where it is established by clear and convincing evidence “that a party has set in motion some unconscionable scheme calculated to interfere with the judicial system’s ability impartially to adjudicate a matter by . . . unfairly hampering the presentation of the opposing party’s claim or defense.” New York Credit & Fin. Mgmt. Grp. v. Parson Ctr. Pharmacy, Inc., 432 Fed. Appx. 25 (2d Cir. 2011) (summary order) (quoting Scholastic, Inc. v. Stouffer, 221 F. Supp. 2d 425, 439 (S.D.N.Y. 2002))

Opinion pgs. 5-6 (subsequent string cites omitted).

Kill All The Lawyers

The defendants here tried to defend by firing and blaming their lawyers. That kind of Shakespearean sentiment is what you should expect when you represent people like that. They will turn on you. They will use you for their nefarious ends, then lose you. Kill you if they could.

Judge Bloom, who was herself a lawyer before becoming a judge, explained the blame-game defendants tried to pull in her court.

Regarding plaintiffs’ assertion that defendants designed and used search terms to fail, defendants proffer that their former counsel, Mr. Yert, formulated and directed the use of the search terms. Id. at 15. The H&H defendants state that “any problems with the search terms was the result of H&H’s good faith reliance on counsel who . . . decided to use parameters that were less robust than those later used[.]” Id. at 18. The H&H defendants further state that the Sweet search results were limited because of Mr. Yert’s incompetence. Id.

Opinion pg. 9.

Specifically defendants alleged:

… the original search parameters were determined by Mr. Yert and that he “relied on Mr. Yert’s expertise as counsel to direct the parameters and methods for a proper search that would fulfill the Court’s Order.” Sweet Decl. ¶ 3–4.  As will be discussed below, the crux of defendants’ arguments throughout their opposition to the instant motion seeks to lay blame on Mr. Yert for their actions; however, defendants cannot absolve themselves of liability here by shifting blame to their former counsel.

Opinion pg. 11.

Here is how Judge Bloom responded to this “blame the lawyers” defense:

Defendants’ attempt to lay blame on former counsel regarding the design and use of search terms is equally unavailing. It is undisputed that numerous responsive documents were not produced by the H&H defendants that should have been produced. Defendants’ prior counsel conceded as much. See generally plaintiffs’ Ex. B, Tr. Of July 11, 2017 telephone conference.

Mr. Yert was asked at his deposition about the terms that H&H used to identify their products and he testified as follows:

Q. Tell me about the general discussions you had with the client in terms of what informed you what search terms you should be using.

A. Those were the terms consistently used by H&H to identify the particular product.

Q. So the client told you that FreeStyle and International are the terms they consistently used to refer to International FreeStyle test strips; is that correct?

A. That’s what I recall.

Q. Did the client tell you that they used the abbreviation FSL to refer to FreeStyle?

A. I don’t recall.

Q. If they had told you that, you would have included that as a search term, correct?

A. I don’t recall if it was or was not included as a search term, sir.

Opinion pgs. 10-11.

The next time you are asked to dream up keywords for searches to find your client’s relevant evidence, remember this case, remember this deposition. Do not simply use keywords that the client suggests, as the attorneys did here. Do not simply use keywords. As I have written here many, many times before, there is a lot more to electronic evidence search and review than keywords. This is the Twenty-First Century. You should be using AI, specifically active machine learning, aka Predictive Coding.
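Predictive coding in the TAR sense is, at bottom, a relevance-feedback loop: reviewers code a small seed set, a model ranks the unreviewed documents, and the top-ranked ones go back to reviewers, round after round. A dependency-free sketch of one round of that loop, with hypothetical documents and a toy word-overlap scorer standing in for the trained classifier a real TAR tool would use:

```python
from collections import Counter

def score(doc, relevant_docs):
    """Toy relevance score: word overlap with documents already coded relevant."""
    vocab = Counter(w for d in relevant_docs for w in d.lower().split())
    return sum(vocab[w] for w in set(doc.lower().split()))

def tar_round(unreviewed, relevant_seed, batch=2):
    """One round of the loop: rank unreviewed docs against the seed set
    and return the top batch for human review."""
    ranked = sorted(unreviewed, key=lambda d: score(d, relevant_seed),
                    reverse=True)
    return ranked[:batch]

corpus = [
    "invoice for freestyle test strips international shipment",
    "lunch menu for the office party",
    "purchase order freestyle strips int warehouse",
    "quarterly parking garage report",
]
seed = ["freestyle international test strips order"]
next_batch = tar_round(corpus, seed)
# The two strip-related documents outrank the lunch and parking documents.
```

The point of the loop is the one this case makes the hard way: the machine ranks by what is actually in the documents, so responsive records surface even when they never contain the exact words a lawyer guessed.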

You need an expert to help you and you need them at the start of a case, not after sanctions motions.

Judge Lois Bloom went on to explain that, even if defendant’s story of innocent reliance on its lawyers was true:

It has long been held that a client-principal is “bound by the acts of his lawyer agent.” Id. (quoting Link v. Wabash RR. Co., 370 U.S. 626, 634 (1962)). As the Second Circuit stated, “even innocent clients may not benefit from the fraud of their attorney.” Id. . . .

However, notwithstanding defendants’ assertion that the search terms “FreeStyle” and “International” were used in lieu of more comprehensive search terms at the behest of Mr. Yert, it is undisputed that Mr. Sweet, H&H’s general manager, knew that H&H used abbreviations for these terms. Mr. Sweet admitted this at his deposition. See Sweet Dep. 81:2-81:24, Mar. 13, 2018. . . . The Court need not speculate as to why defendants did not use these search terms to comply with defendants’ obligation to produce pursuant to the Court’s Order. Mr. Sweet, by his own admission, states that “on several occasions he contacted Mr. Yert with specific questions about whether to include certain emails in production.” Sweet Decl. ¶ 7. It is inconceivable that H&H’s General Manager, who worked closely with Mr. Yert to respond to the Court’s Order, never mentioned that spelling out the terms used, “International” and “FreeStyle”, would not capture the documents in H&H’s email system. Mr. Sweet knew that H&H was required to produce documents regarding International FreeStyle test strips, regardless of whether H&H’s documents spelled out or abbreviated the terms. Had plaintiffs not seized H&H’s email server in the counterfeiting action, plaintiffs would have never known that defendants failed to produce a trove of responsive documents. H&H would have gotten away with it.

Opinion pgs. 12-13.

Defendants also failed to produce any documents from three custodians: Holland Trading, Howard Goldman, and Lori Goldman. Again, they tried to blame that omission on their attorney, who they claim directed the search. Oh yeah, for sure. To me he looks like a mere stooge, a tool of unscrupulous litigants. Judge Bloom did not accept that defense either, holding:

While defendants’ effort to shift blame to Mr. Yert is unconvincing at best, even if defendants’ effort could be credited, counsel’s actions, even if they were found to be negligent, would not shield the H&H defendants from responsibility for their bad faith conduct.

Opinion pgs. 19-20. Then Judge Bloom went on to cite the record at length, including the depositions and affidavits of the attorneys involved, to expose this blame game as a sham. The order then concludes on this point holding:

There is no credible explanation for why the Holland Trading, Howard Goldman, and Lori Goldman documents were not produced except that the documents were willfully withheld. Defendants’ explanation that there were no documents withheld, then that any documents that weren’t produced were due to technical glitches, then that the documents didn’t appear in Mr. Sweet’s original search, then that if documents were intentionally removed, they were removed per Mr. Yert’s instructions cannot all be true. The H&H defendants have always had one more excuse up their sleeve in this “series of episodes of nonfeasance,” which amounts to “deliberate tactical intransigence.” Cine, 602 F.2d at 1067. In light of the H&H defendants’ ever-changing explanations as to the withheld documents, Mr. Sweet’s inconsistent testimony, and assertions of former counsel, the Court finds that the H&H defendants have calculatedly attempted to manipulate the judicial process. See Penthouse, 663 F.2d 376–390 (affirming entry of default where plaintiffs disobeyed an “order to produce in full all of [their] financial statements,” engaged in “prolonged and vexatious obstruction of discovery with respect to closely related and highly relevant records,” and gave “false testimony and representations that [financial records] did not exist.”).

Opinion pgs. 22-23.

The plaintiff, Abbott Labs, went on to argue that “the withheld documents freed David Gulas to commit perjury at his deposition. The Court agrees.” Id. at 24. The Truth has a way of finding itself out, especially with competent counsel on the other side and a good judge.

With this evidence the Court concluded the only adequate sanction was a default judgment in plaintiff’s favor. Message to spoliating defendants, game over, you lose.

Based on the full record of the case, there is clear and convincing evidence that defendants have perpetrated a fraud upon the court. Defendants’ initial conduct of formulating search terms designed to fail in deliberate disregard of the lawful orders of the Court allowed H&H to purposely withhold responsive documents, including the Holland Trading, Howard Goldman, and Lori Goldman documents. Defendants proffered inconsistent positions with three successive counsel as to why the documents were withheld. Mr. Sweet’s testimony is clearly inconsistent if not perjured from his deposition to his declaration in opposition to the instant motion. Mr. Goldman’s deposition testimony is evasive and self-serving at best. Finally, Mr. Gulas’ deposition testimony is clearly perjured. Had plaintiffs never seized H&H’s server pursuant to the Court’s Order in the counterfeiting case, H&H would have gotten away with their fraud upon this Court. H&H only complied with the Court’s orders and their discovery obligations when their backs were against the wall. Their email server had been seized. There was no longer an escape from responsibility for their bad faith conduct. This is, again, similar to Cerruti, where the “defendants did not withdraw the [false] documents on their own. Rather, they waited until the falsity of the documents had been detected.” Cerruti, 169 F.R.D. at 583. But for being caught in a web of irrefutable evidence, H&H would have profited from their misconduct. . . .

The Court finds that the H&H defendants have committed a fraud upon the court, and that the harshest sanction is warranted. Therefore, plaintiffs’ motion for sanctions should be granted and a default judgment should be entered against H&H Wholesale Services, Inc., Howard Goldman, and Lori Goldman.


Attorneys of record sign responses under Rule 26(g) to requests for production, not the client. That is because the rules require them to control the discovery efforts of their clients. That means the attorney’s neck is on the line. Rule 26(g) does not allow you to just take a client’s word for it. Verify. Supervise. The numbers should add up. The search terms, if used, should be designed and tested to succeed, not fail. This is your response, not the client’s. You determine the search method, in consultation with the client for sure, but not by “just following orders.” You must see everything, not nothing. If you see no email from key custodians, dig deeper and ask why. Do this at the beginning of the case. Get vendor help before you start discovery, not after you fail. Apparently the original defense attorneys here did just what they were asked; they went along with the client. Look where it got them. Fired and deposed. Default judgment entered. Cautionary tale indeed.


