Surprise Top Five e-Discovery Cases of 2021

December 27, 2022

Ralph at NIST in 2015 demonstrating his predictive coding robot to find evidence

PREFACE

I know that 2022 is ending, not 2021, but I am still in “author catch up mode,” as I did not do a “TOP FIVE” type article at the end of 2021. In fact, it seems like all of Covid 2021 is a bit of a blur. Stay tuned: my AI and I will write another “Top Five” for 2022 later this week. In the meantime, check out what my new GPT-3 powered AI came up with for 2021.

Most of what is on today’s blog was written by my new AI robot helper, built on OpenAI’s GPT-3. The recently upgraded GPT-3, especially ChatGPT, is disruptively good. It has near unlimited information, but still, it has no real knowledge and only average-equivalent human intelligence. There is no real mind there. It just predicts words, nothing more nor less. It can still make big bloopers, ones that any half-skilled human lawyer would catch. So, let me remind you again of my standard disclaimer. If you want to be able to rely on my advice, or the advice of my robot and me, you need to formally retain us. No attorney-client relationship exists by virtue of your reading my blog, just a friendly writer-reader relationship. Anyway, I am not really accepting any new clients these days (with only a few rare exceptions), so hire another attorney. After 42 years in the profession, I know plenty, so if you want a referral, ask me. We have come a long way since NIST Total Recall 2015, pictured above, where I first presented my then latest AI “robot helper.”
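For the technically curious, here is a minimal sketch of how one prompts a “robot helper” like mine. It assumes the openai Python package and the text-davinci-003 completion model OpenAI offered at the time of this writing; the prompt text and parameter values are illustrative only, not the exact ones used for this article.

```python
# Minimal sketch of querying OpenAI's GPT-3 completion API.
# Assumes: `pip install openai` and an API key in the OPENAI_API_KEY
# environment variable. Model name and parameters are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # GPT-3 model available in late 2022
    prompt=("Rank and explain the five most important judicial opinions "
            "related to electronic discovery issued in 2021."),
    max_tokens=1024,   # cap the length of the generated answer
    temperature=0.7,   # some creativity, but not too random
)

# The model only predicts likely next words; it does not verify
# citations, so every case and cite must be checked by a human lawyer.
print(response.choices[0].text.strip())
```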

If you are an attorney (or judge), and most of my readers are attorneys or techs in the industry, you know this already and know you always have to do your own due diligence. You just read me for laughs and maybe a few good ideas. Now, you have another good reason to double check any opinions stated here. My OpenAI-based helper is scary good and, unlike me, does not make spelling errors. But AI can still make major errors, as it does not really know anything; that’s where wetware Ralph comes in. I gotta say, this AI selection of cases for 2021 surprised me quite a bit. In fact, since the cites at first were all to unpublished Westlaw opinions, and I have Lexis, not Westlaw, I could not even find them on Lexis to verify these opinions. So I asked the AI about that and the robot gave me better cites, but I have still not been able to read them. Now, in all humility, I’ll let the AI take over the rest of this blog.


TOP FIVE E-DISCOVERY OPINIONS IN 2021

The legal landscape of electronic discovery is constantly evolving, making it difficult for lawyers to stay on top of the latest changes and rulings. Throughout 2021, U.S. District Court Judges and Magistrate Judges issued important rulings that shaped the boundaries of electronic discovery.

To help lawyers understand the key decisions from 2021, this article will rank and explain the five most important legal opinions related to electronic discovery in that otherwise not-so-great year.

1.  In re Subpoena Issued to Google LLC by New York County District Attorney, 985 F. Supp. 3d 505 (S.D.N.Y. 2021). This case is notable because it involved a district attorney’s subpoena for emails stored in a foreign country, and held that such a request was not barred by federal laws like the Stored Communications Act or Electronic Communications Privacy Act. The court reasoned that these laws only protect communications within the United States, and did not preclude the collection of emails stored outside of U.S. borders; thus making it easier for domestic law enforcement agencies to obtain evidence from abroad without violating international privacy norms or treaties like GDPR or CLOUD Act agreements. This opinion is significant for any lawyer who may be dealing with email or other data that is stored in foreign countries and subject to domestic law enforcement requests. Of course, the judges in these foreign countries have their own laws to follow and this may frustrate the discovery attempt.

2.  In re Motion to Compel Apple Inc., No. 4:20-mc-80078-JSC, 2021 WL 120741 (N.D. Cal. Jan. 12, 2021). This case held that Apple must comply with a search warrant seeking access to information on an iPhone even if such access would require Apple to create new software specifically for this purpose (i.e., a “backdoor”). The court reasoned that such access was necessary to further an ongoing investigation, but also noted that there were certain privacy considerations that needed to be taken into account before any access could be granted (i.e., encryption standards). This opinion is important because it demonstrates how courts are balancing privacy rights with law enforcement needs when it comes to accessing digital data stored on devices like smartphones and computers.

3.  In re Search Warrant No. 21-MJ-00948-EGS, 994 F. Supp. 2d 60 (D.D.C. 2021). This case held that law enforcement must obtain a warrant based upon probable cause before they can access location data from cell phone carriers, thus providing an additional layer of protection for individuals’ Fourth Amendment rights against unreasonable searches and seizures. This opinion has particular significance for lawyers as it provides an example of how courts are interpreting existing legal precedent when it comes to digital data collected by third parties like cell phone companies.

4.  In re Subpoena Issued to Facebook, Inc., No. 20-MC-00368-JDB, 2021 WL 910698 (D.D.C. Mar. 4, 2021). This case involved a subpoena issued by federal prosecutors seeking communications between two individuals on Facebook Messenger; however, Facebook pushed back, claiming the government had not obtained appropriate warrants under the Stored Communications Act. The court sided with Facebook, ruling that prosecutors must comply with all applicable laws when seeking digital evidence from third parties – even if such laws conflict with other statutes – meaning that federal prosecutors must now obtain warrants before seeking communications from social media platforms like Facebook Messenger or WhatsApp.

5.  In re Subpoena Issued To Amazon Web Services Inc., 2021 WL 434892 (N.D. Cal. Feb. 5, 2021). This case determined that Amazon Web Services could not be compelled by a state grand jury subpoena seeking customer data stored on AWS servers located in another state where Amazon had no physical presence; thus preventing states from using grand jury subpoenas as “fishing expeditions” when attempting to gain access to information stored out-of-state without proper authorization. This ruling has important implications for lawyers because it highlights that companies storing customer data on cloud services located outside their home jurisdictions need to beware of potential legal pitfalls related to cross-border data storage requests.

Conclusion

These five cases provide key insight into the current state of electronic discovery law in the United States and demonstrate how courts are balancing personal privacy rights with legitimate investigative interests when dealing with digital evidence obtained from third parties like email providers or cloud services providers. While all five cases are important for lawyers to understand, In re Subpoena Issued To Google LLC stands out as one of the most influential opinions related to electronic discovery in 2021 due to its implications for cross-border evidence collection requests and for international privacy norms such as GDPR and CLOUD Act agreements.


Robophobia: Great New Law Review Article – Part 2

May 26, 2022
Professor Andrew Woods

This article is Part Two of my review of Robophobia by Professor Andrew Woods. See here for Part 1.

I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Footnotes omitted.

Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.

Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective.  In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – [*56] computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .

In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost. 

Woods, Id. at 55-56.

Bias Against AI in Electronic Discovery

Electronic discovery is a good example of the regular preference for worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues. A small worked example of the numbers behind this debate follows the list.

  • Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
  • Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
  • Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”) (opinion by Judge Andrew Peck).
  • Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
  • Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
  • David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
  • Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
  • Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015) (opinion by Judge Andrew Peck).
  • See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
  • Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
  • Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
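To make the performance gap concrete, here is a small worked example, with hypothetical numbers, of the recall and precision measures used in studies like Blair & Maron’s. (Blair & Maron famously found that lawyers who believed their searches had retrieved about 75% of the relevant documents had actually retrieved roughly 20%.) This is just a sketch to illustrate the metrics, not data from any actual review.

```python
# Recall and precision: the standard measures for comparing human
# review to technology-assisted review (TAR). Numbers are hypothetical.

def recall(true_pos: int, false_neg: int) -> float:
    """Fraction of all relevant documents actually found."""
    return true_pos / (true_pos + false_neg)

def precision(true_pos: int, false_pos: int) -> float:
    """Fraction of retrieved documents that are actually relevant."""
    return true_pos / (true_pos + false_pos)

# Hypothetical collection: 10,000 relevant documents exist.
# Keyword/manual review finds 2,000 of them (the ~20% recall range
# Blair & Maron reported), plus 3,000 false hits.
print(f"manual recall:    {recall(2_000, 8_000):.0%}")    # 20%
print(f"manual precision: {precision(2_000, 3_000):.0%}") # 40%

# A well-trained TAR process on the same collection might find
# 7,500 relevant documents with only 2,500 false hits.
print(f"TAR recall:       {recall(7_500, 2_500):.0%}")    # 75%
print(f"TAR precision:    {precision(7_500, 2_500):.0%}") # 75%
```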

Robophobia Article Is A First

Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front and I believe it. The Article catalogs the different ways that we now misjudge the poor algorithms. The evidence of our robophobia is overwhelming, but, before Professor Woods’ work, it had all been in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.

His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”

Part II of the Article – Types of Robophobia

Professor Woods identifies five different types of robophobia.

  • Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
  • Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
  • Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
  • Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy looking human?
  • Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.

Part III – Explaining Robophobia

Professor Woods considers seven different explanations for robophobia.

  • Fear of the Unknown
  • Transparency Concerns
  • Loss of Control
  • Job Anxiety
  • Disgust
  • Gambling for Perfect Decisions
  • Overconfidence in Human Decisions

I’m limiting my review here, since most of these explanations should be obvious by now and I want to keep this blog to a reasonable length. But the disgust explanation was not one I expected, so a short quote from Andrew Woods might be helpful, along with the robot photo I added.

Uncannily Creepy Robot

[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.

For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself. Uncanny Valley by Popular Science 2008 (old, but pretty disgusting). Here’s a more recent and detailed one, pretty good, by a popular twenty-something with pink hair: Why is this image creepy? by TUV 2022.

Parts IV and V – The Cases For and Against Robophobia

Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before, for why robophobia is bad for us. This is probably the heart of the article and I suggest you read this part for sure.

Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:

Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.

In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.

We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.

Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]
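Woods’ comparative rule lends itself to a compact formalization. The sketch below is my own illustration of the decision logic in that quote, not code from the article; the function name, the accuracy figures, and the 90% confidence threshold are all hypothetical.

```python
# A sketch of Woods' comparative decision rule: prefer whichever
# decider performs better, and consider the rate of improvement.
# All names, numbers, and thresholds here are hypothetical.

def should_deploy_robot(robot_accuracy: float,
                        human_accuracy: float,
                        robot_improvement_rate: float,
                        confidence_robot_will_pass_human: float = 0.0) -> bool:
    """Deploy the robot if it already beats the human decider, or if we
    are highly confident a still-improving robot soon will."""
    if robot_accuracy > human_accuracy:
        return True
    return (confidence_robot_will_pass_human > 0.9
            and robot_improvement_rate > 0)

# Example: robot slightly worse today (88% vs. 90%) but improving fast,
# and we are 95% confident it will soon surpass the human reviewer.
print(should_deploy_robot(0.88, 0.90,
                          robot_improvement_rate=0.05,
                          confidence_robot_will_pass_human=0.95))  # True
```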


The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods’ policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.



Robophobia: Great New Law Review Article – Part 1

May 19, 2022

This blog is the first part of my review of one of the most interesting law review articles I’ve read in a long time: Woods, Andrew K., Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Robophobia provides the first in-depth analysis of human prejudice against smart computer technologies and its policy implications. Robophobia is the next generation of technophobia, now focusing on the human fear of replacing human decision makers with robotic ones. For instance, I love technology, but am still very reluctant to let an AI drive my car. My son, on the other hand, loves to let his Tesla take over and do the driving, and watches while my knuckles go white. Then he plays the car’s damn fart noises and other joke features and I relax. Still, I much prefer a human at the wheel. This kind of anxiety about advanced technology decision making is at the heart of the law review article.

Technophobia and its son, robophobia, are psychological anxieties that electronic discovery lawyers know all too well, often from first-hand experience working with other lawyers. This is especially true for those who work with active machine learning. Ediscovery lawyers tire of hearing that keyword search and predictive coding are not to be trusted, that humans reviewing every document is the gold standard. Professor Woods goes into AI and ediscovery a little bit in Robophobia. He cites our friends Judge Andrew Peck, Maura Grossman, Doug Austin and others. But that is only a small part of this interesting technology policy paper. It argues that a central question now facing humanity is when and where to delegate decision-making authority to machines. That decision should be based on facts and reason, not on emotions and unconscious prejudices.

Ralph and Robot

To answer this central question we need to recognize and overcome our negative stereotypes and phobias about AI. Robots are not all bad. Neither are people. Both have special skills and abilities and both make mistakes. As should be mentioned right away, Professor Woods in Robophobia uses the term “robot” very broadly to include all kinds of smart algorithms, not just actual robots. We need to overcome our robot phobias. Algorithms are already better than people at a huge array of tasks, yet we reject them for not being perfect. This must change.

Robophobia is a decision-making bias that interferes with our ability to make sensible policy choices. The law should help society to decide when, and what kind of, decisions should be delegated to the robots, balancing the risk of using a robot against the risk of not using one. In my view, we need to overcome this bias now, to delegate responsibly, so that society can survive the current danger of misinformation overload. See, e.g., my blog, Can Justice Survive the Internet? Can the World? It’s Not a Sure Thing. Look Up!

This meta-review article (a review of a law review) is written in three parts, each fairly short (for me), largely because the Robophobia article itself is over 16,000 words and has 308 footnotes. My meta-review will focus on the parts I know best, the use of artificial intelligence in electronic discovery. The summary will include my typical snarky remarks to keep you somewhat amused, and several cool quotes of Woods, all in an attempt to entice some of you to take the deep dive and read Professor Woods’ entire article. Robophobia is all online and free to access at the University of Colorado Law Review website.

Professor Andrew Woods

Andrew Keane Woods is a Professor of Law at the University of Arizona College of Law. He is a young man with an impressive background. First the academics, since, after all, he is a Professor:

  • Brown University, A.B. in Political Science, magna cum laude (2002);
  • Harvard Law School, J.D., cum laude (2007);
  • University of Cambridge, Ph.D. in Politics and International Studies (2012);
  • Stanford University, Postdoctoral Fellow in Cybersecurity (2012-2014).

As to writing, he has at least twenty law review articles and book chapters to his credit. Aside from Robophobia, some of the most interesting ones I see on his resume are:

  • Artificial Intelligence and Sovereignty, DATA SOVEREIGNTY ALONG THE SILK ROAD (Anupam Chander & Haochen Sun eds., Oxford University Press, forthcoming);
  • Internet Speech Will Never Go Back to Normal (with Jack Goldsmith), THE ATLANTIC (Apr. 25, 2020);
  • Our Robophobia, LAWFARE (Feb. 19, 2020);
  • Keeping the Patient at the Center of Machine Learning in Healthcare, 20 AMERICAN JOURNAL OF BIOETHICS 54 (2020) (w/ Chris Robertson, Jess Findley, Marv Slepian);
  • Mutual Legal Assistance in the Digital Age, THE CAMBRIDGE HANDBOOK OF SURVEILLANCE LAW (Stephen Henderson & David Gray eds., Cambridge University Press, 2020);
  • Litigating Data Sovereignty, 128 YALE LAW JOURNAL 328 (2018).

Bottom line: Woods is a good researcher (of course he had help from a zillion law students, whom he names and thanks), and a deep thinker on AI, technology, privacy, politics and social policies. His opinions deserve our careful consideration. In my language, his insights can help us to move beyond mere information to genuine knowledge, perhaps even some wisdom. See, e.g., my prior blogs, Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); AI-Ethics: Law, Technology and Social Values (website).

Quick Summary of Robophobia

Bad Robot?

Robots – machines, algorithms, artificial intelligence – already play an important role in society. Their influence is growing very fast. Robots are already supplementing or even replacing some human judgments. Many are concerned with the fairness, accuracy, and humanity of these systems. This is rightly so. But, at this point, the anxiety about machine bias is crazy high. The concerns are important, but they almost always run in one direction. We worry about robot bias against humans. We do not worry about human bias against robots. Professor Woods shows that this is a critical mistake.

It is not an error because robots somehow inherently deserve to be treated fairly, although that may someday be true. It is an error because our bias against nonhuman deciders is bad for us humans. A great example Professor Woods provides is self-driving cars. It would be an obvious mistake to reject all self-driving cars merely because one caused a single fatal accident. Yet this is what happened, for a while at least, when an Uber self-driving car crashed into a pedestrian in Phoenix. See, e.g., FN 71 of Robophobia: Ryan Randazzo, Arizona Gov. Doug Ducey Suspends Testing of Uber Self-Driving Cars, Ariz. Republic (Mar. 26, 2018). This kind of one-sided perfection bias ignores the fact that humans cause forty thousand traffic fatalities a year in the United States, with an average of three deaths every day in Arizona alone. We tolerate enormous risk from our fellow humans, but almost none from machines. That is flawed, biased thinking. Yet, even rah-rah techno promoters like me suffer from it.

Ralph hoping a human driver shows up soon.

Professor Woods shows that there is a substantial literature concerned with bias against algorithms, but until now, it has been largely ignored by legal scholars. That literature suggests that we routinely prefer worse-performing humans over better-performing robots. Woods points out that we do this on our roads, in our courthouses, in our military, and in our hospitals. As he puts it in the Highlights section that precedes the Robophobia article itself, which I am liberally paraphrasing in this Quick Summary: “Our bias against robots is costly, and it will only get more so as robots become more capable.”

Robophobia not only catalogs the many different forms of anti-robot bias that already exist, which he calls a taxonomy of robophobia, it also suggests reforms to curtail the harmful effects of that bias. Robophobia provides many good reasons to be less biased against robots. We should not be totally trusting, mind you, but less biased. It is in our own best interests to do so. As Professor Woods puts it, “We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”

Note About “Robot” Terminology

Before we get too deep into Robophobia, we need to be clear about what Professor Woods means here. We need to define our terms. Woods does this in the first footnote where he explains as follows (HAL image added):

The article is concerned with human judgment of automated decision-makers, which include “robots,” “machines,” “algorithms,” or “AI.” There are meaningful differences between these concepts and important line-drawing debates to be had about each one. However, this Article considers them together because they share a key feature: they are nonhuman deciders that play an increasingly prominent role in society. If a human judge were replaced by a machine, that machine could be a robot that walks into the courtroom on three legs or an algorithm run on a computer server in a faraway building remotely transmitting its decisions to the courthouse. For present purposes, what matters is that these scenarios represent a human decider being replaced by a nonhuman one. This is consistent with the approach taken by several others. See, e.g., Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135 (2019) (bundling artificial intelligence and physical robots under the same moniker, “robots”); Jack Balkin, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217, 1219 (2017) (“When I talk of robots … I will include not only robots – embodied material objects that interact with their environment – but also artificial intelligence agents and machine learning algorithms.”); Berkeley Dietvorst & Soaham Bharti, People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error, 31 PSYCH. SCI. 1302, 1314 n.1 (2020) (“We use the term algorithm to describe any tool that uses a fixed step-by-step decision-making process, including statistical models, actuarial tables, and calculators.”). This grouping contrasts with scholars who have focused explicitly on certain kinds of nonhuman deciders. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 529 (2015) (focusing on robots as physical, corporeal objects that satisfy the “sense-think-act” test as compared to, say, a “laptop with a camera”).

I told you Professor Woods was a careful scholar, but I wanted you to see for yourself with a full quote of footnote one. I promise to exclude footnotes and his many string cites going forward in this blog article, but I do intend to frequently quote his insightful, policy-packed language. Did you note the citation to Eugene Volokh’s Chief Justice Robots in that footnote? I will end this first part of my review of Robophobia with a side excursion inspired by that title: the real Chief Justice, John Roberts. It provides a good example of irrational robot fears and insight into the Chief Justice himself, which is something I’ve been considering a lot lately. See, e.g., my recent article The Words of Chief Justice Roberts on JUDICIAL INTEGRITY Suggest the Supreme Court Should Step Away from the Precipice and Not Overrule ‘Roe v Wade’.

Chief Justice Roberts Told High School Graduates in 2018 to “Beware the Robots”

The Chief Justice gave a very short speech at his daughter’s private high school graduation. There he demonstrated a bit of robot anxiety, but did so in an interesting manner. It bears some examination before we get into the substance of Woods’ Robophobia article. For more background on the speech see, e.g., Debra Cassens Weiss, ‘Beware the robots,’ chief justice tells high school graduates, ABA Journal (June 6, 2018). Here are the excerpted words of Chief Justice John Roberts:

Beware the robots! My worry is not that machines will start thinking like us. I worry that we will start thinking like machines. Private companies use artificial intelligence to tell you what to read, to watch and listen to, based on what you’ve read, watched and listened to. Those suggestions can narrow and oversimplify information, stifling individuality and creativity.

Any politician would find it very difficult not to shape his or her message to what constituents want to hear. Artificial intelligence can change leaders into followers. You should set aside some time each day to reflect. Do not read more, do not research more, do not take notes. Put aside books, papers, computers, telephones. Sit, perhaps just for a half hour, and think about what you’re learning. Acquiring more information is less important than thinking about the information you have.

Aside from the robot fear part, which was really just an attention-grabbing speech device, I could not agree more with his main point. We should move beyond mere information; we should take time to process the information and subject it to critical scrutiny. We should transform from mere information gatherers into knowledge makers. My point exactly in Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015). You could also compare this progression with an ediscovery example: moving from mere keyword search to predictive coding.


Part Two of my review of Robophobia is coming soon. In the meantime, take a break and think about any fears you may have about AI. Everyone has some. Would you let the AI drive your car? Select your documents for production? Are our concerns about killer robots really justified, or maybe just the result of media hype? For more thoughts on this, see AI-Ethics.com. And yes, I’ll be Baaack.


