Abraham Lincoln, America’s First Tech-Lawyer

February 19, 2023
Close up of Lincoln's face on April 10, 1865

Abraham Lincoln was born on February 12, 1809. He was probably our greatest President. Putting aside the tears honest Abe would likely shed over the political scene today, it is good to remember Lincoln as an exemplar of a U.S. lawyer. All lawyers would benefit from emulating aspects of his nineteenth-century legal practice and his twenty-first-century thoughts on technology. He was honest, diligent, ethical, and a deep thinker. He did not need to be lectured on Cooperation and Rule 1. He also did not need to be told to embrace technology, not hide from it. In fact, he was a prominent tech-lawyer of his day, well known for his speaking abilities on the subject.

He was also a man with a sense of humor who knew how to enjoy himself. I think he would have approved of the video below. I made it using GPT technologies to express one of my life mottoes, inspired by him. He is a personal hero. Did you know he had a high-pitched voice? Here I try to imitate what he might have sounded like; there are no recordings of his speech, just written accounts.

Lincoln in his lawyer phase

 

Near the end of his legal career, Abe was busy pushing technology and his vision of the future. Sound familiar, dear readers? It should. Many of you are like that. I know I am.

Lincoln Was a Technophile

Lincoln was as obsessed with the latest inventions and advances in technology as any techno-geek e-discovery lawyer alive today. The latest things in Lincoln’s day were mechanical devices of all kinds, typically steam-powered, and the early electromagnetic devices, then primarily the telegraph. Indeed, the first electronic transmission from a flying machine, a balloon, was a telegraph sent from inventor Thaddeus Lowe to President Lincoln on June 16, 1861. Unlike his generals, Lincoln quickly realized the military potential of flying machines and created an Aeronautics Corps for the Army, appointing Professor Lowe as its chief. See Bruce, Robert V., Abraham Lincoln and the Tools of War. Below is a copy of a handwritten note by Lincoln introducing Lowe to General Scott.

Lincoln's handwritten introduction of Professor Lowe

At the height of his legal career, Lincoln’s biggest clients were the Googles of his day, namely the railroad companies with their incredible new locomotives. These newly rich, super-technology corporations dreamed of uniting the new world with a cross-country grid of high-speed transportation. Little noticed today is one of Lincoln’s proudest achievements as President, the enactment of legislation that funded these dreams, the Pacific Railway Act of 1862. The transcontinental railroad did unite the new world, much as the Internet and airlines today are uniting the whole world. A lawyer as obsessed with telegraphs and connectivity as Lincoln would surely have been an early adopter of the Internet and an enthusiast of electronic discovery. See: Abraham Lincoln: A Technology Leader of His Time (U.S. News & World Report, 2/11/09). No doubt he would be using ChatGPT to help with his mundane paperwork (but not his speeches).

Abraham Lincoln loved technology and loved to think and talk about the big picture of technology, of how it is used to advance the dreams of Man. In fact, Lincoln gave several public lectures on technology that had nothing to do with law or politics. The first such lecture known today was delivered on April 6, 1858, before the Young Men’s Association in Bloomington, Illinois, and was entitled “Lecture on Discoveries and Inventions.” In this lecture, he traced the progress of mankind through its inventions, starting with Adam and Eve and the invention of the fig leaf for clothing. I imagine that if he were giving this speech today (and I’m willing to try to replicate it should I be so invited) he would end with AI and blockchain.

In Lincoln’s next and last lecture series, first delivered on February 11, 1859, and known as the “Second Lecture on Discoveries and Inventions,” Lincoln used fewer biblical references and concentrated instead on communication. For history buffs, see the complete copy of Lincoln’s Second Lecture, which, in my opinion, is much better than the first. Here are a few excerpts from this little-known lecture:

The great difference between Young America and Old Fogy, is the result of Discoveries, Inventions, and Improvements. These, in turn, are the result of observation, reflection and experiment.

Writing – the art of communicating thoughts to the mind, through the eye – is the great invention of the world. Great in the astonishing range of analysis and combination which necessarily underlies the most crude and general conception of it, great, very great in enabling us to converse with the dead, the absent, and the unborn, at all distances of time and of space; and great, not only in its direct benefits, but greatest help, to all other inventions.

I have already intimated my opinion that in the world’s history, certain inventions and discoveries occurred, of peculiar value, on account of their great efficiency in facilitating all other inventions and discoveries. Of these were the arts of writing and of printing – the discovery of America, and the introduction of Patent-laws.

Can there be any doubt that the lawyer who wrote these words would instantly “get” the significance of the total transformation of writing, “the great invention of the world,” from tangible paper form, to intangible, digital form?  Can there be any doubt that a lawyer like this would understand the importance of the Internet, the invention that unites the world in a web of inter-connective writing, where each person may be a printer and instantly disseminate their ideas “at all distances of time and of space?”

Lincoln standing by his generals in the field; close up

Abraham Lincoln did not just have a passing interest in new technologies. He was obsessed with them, like most good e-discovery lawyers are today. In the worst days of the Civil War, the one thing that could still bring Lincoln joy was his talks with the one true scientist then residing in Washington, D.C., the first director of the Smithsonian Institution, Dr. Joseph Henry, a specialist in light and electricity. Despite the fact that Henry’s political views were anti-emancipation and virtually pro-secession, Lincoln would sneak over to the Smithsonian every chance he could get to talk to Dr. Henry. Lincoln told the journalist Charles Carleton Coffin:

My visits to the Smithsonian, to Dr. Henry, and his able lieutenant, Professor Baird, are the chief recreations of my life…These men are missionaries to excite scientific research and promote scientific knowledge. The country has no more faithful servants, though it may have to wait another century to appreciate the value of their labors.

Bruce, Lincoln and the Tools of War, p. 219.

Lincoln was no mere poser about technology and inventions. He walked his talk and railed against the Old Fogies who opposed technology. Lincoln was known to be willing to meet with every crackpot inventor who came to Washington during the war claiming to have a new invention that could save the Union. Lincoln would talk to most of them and quickly separate the wheat from the chaff. As mentioned, he recognized the potential importance of aircraft to the military and forced the army to fund Professor Lowe’s wild-eyed dreams of aerial reconnaissance. He also recognized the genius of another inventor, Dr. Richard Gatling, and insisted, over much opposition, that the army adopt his new invention. Gatling’s improved version of the machine gun began to be used by the army in 1864, and before that, the Gatling guns that Lincoln funded are credited with defending the New York Times from an invasion by “anti-draft, anti-negro mobs” that roamed New York City in mid-July 1863. Bruce, Lincoln and the Tools of War, p. 142.

As final proof that Lincoln was one of the preeminent technology lawyers of his day, and surely would be again if he were alive today, I offer the little-known fact that Abraham Lincoln is the only President in United States history to have been issued a patent. He patented an invention for “Buoying Vessels Over Shoals,” U.S. Patent No. 6,469, issued on May 22, 1849. I could only find the patent on the USPTO website, where it is not celebrated and is hard to read. So as my small contribution to Lincoln memorabilia in the bicentennial year of 2009, I offer below a complete copy of Abraham Lincoln’s three-page patent. You should be able to click on the images with your browser to enlarge and download.

Lincoln Patent Pg. 1
Lincoln Patent Pg. 2
Lincoln Patent Pg. 3 (Drawings)


The invention consisted of a set of bellows attached to the hull of a ship just below the water line. After reaching a shallow place, the bellows were to be filled with air, buoying the vessel so that it floated higher and cleared the river shoals. The patent application was accompanied by a wooden model depicting the invention. Lincoln whittled the model with his own hands. It is on display at the Smithsonian and is shown below.

Lincoln Hand-Carved Wooden Model of Patent

Lincoln Filing Invention at Patent Office (fictionalized depiction)

Conclusion

On Presidents’ Day 2023 it is worth recalling the long, prestigious pedigree of Law and Technology in America. Lincoln is a symbol of freedom and emancipation. He is also a symbol of Law and Technology. If Abe were alive today, I have no doubt he would be, among other things, a leader of Law and Technology.

Stand tall, friends. We walk in long shadows and, like Lincoln, we shall overcome the hardships we face. As Abe himself was fond of saying: down with the Old Fogies; it is young America’s destiny to embrace change and lead the world into the future. Let us lead with the honesty and integrity of Abraham Lincoln. Nothing less is acceptable.


Robophobia: Great New Law Review Article – Part 2

May 26, 2022
Professor Andrew Woods

This article is Part Two of my review of Robophobia by Professor Andrew Woods. See here for Part 1.

I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Footnotes omitted.

Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.

Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective. In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .

In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost. 

Woods, id. at 55-56.

Bias Against AI in Electronic Discovery

Electronic discovery is a good example of the regular preference of worse-performing humans to a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted review method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues.

  • Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
  • Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
  • Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”) Judge Andrew Peck.
  • Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
  • Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
  • David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
  • Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
  • Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015). Judge Andrew Peck.
  • See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
  • Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
  • Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
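For readers curious about how studies like Grossman & Cormack’s quantify “more effective,” the comparison usually comes down to two standard information-retrieval measures: recall (the share of truly relevant documents the review found) and precision (the share of retrieved documents that are actually relevant). Here is a minimal sketch of that arithmetic; every number below is invented for illustration and is not taken from any study cited above:

```python
# Hypothetical comparison of a manual review team and a TAR
# (technology-assisted review) process on the same document set.

def recall_precision(found_relevant, total_relevant, total_retrieved):
    """Return (recall, precision) for one review process."""
    recall = found_relevant / total_relevant        # share of relevant docs found
    precision = found_relevant / total_retrieved    # share of retrieved docs that are relevant
    return recall, precision

# Suppose 10,000 of 1,000,000 documents are truly relevant (assumption).
TOTAL_RELEVANT = 10_000

# Manual review (hypothetical): retrieves 20,000 docs, 6,000 of them relevant.
human = recall_precision(6_000, TOTAL_RELEVANT, 20_000)

# TAR (hypothetical): retrieves 12,000 docs, 8,500 of them relevant.
tar = recall_precision(8_500, TOTAL_RELEVANT, 12_000)

print(f"manual review: recall={human[0]:.0%}, precision={human[1]:.0%}")
print(f"TAR review:    recall={tar[0]:.0%}, precision={tar[1]:.0%}")
```

On made-up numbers like these, the TAR process finds more of the relevant documents while making its reviewers read far fewer of the irrelevant ones, which is the shape of the result the real studies report.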

Robophobia Article Is A First

Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front, and I believe it. The Article catalogs the different ways that we now misjudge poor algorithms. The evidence of our robophobia is overwhelming, but before Professor Woods’ work it had all been scattered in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.

His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”

Part II of the Article – Types of Robophobia

Professor Woods identifies five different types of robophobia.

  • Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
  • Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
  • Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
  • Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy-looking human?
  • Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.

Part III – Explaining Robophobia

Professor Woods considers seven different explanations for robophobia.

  • Fear of the Unknown
  • Transparency Concerns
  • Loss of Control
  • Job Anxiety
  • Disgust
  • Gambling for Perfect Decisions
  • Overconfidence in Human Decisions

I’m limiting my review here, since the explanations for most of these should be obvious by now and I want to keep this post to a reasonable length. But the disgust explanation was not one I expected, so a short quote from Andrew Woods might be helpful, along with the robot photo I added.

Uncannily Creepy Robot

[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.

For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself: Uncanny Valley by Popular Science, 2008 (old, but pretty disgusting); and a more recent, more detailed, pretty good one by a popular twenty-something with pink hair, Why is this image creepy? by TUV, 2022.

Parts IV and V – The Cases For and Against Robophobia

Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before: why robophobia is bad for us. This is probably the heart of the article, and I suggest you read this part for sure.

Here is a good quote at the end of Part IV that puts the pro-robot versus anti-robot positions into perspective:

Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.

In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.

We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.

Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]


The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods’ policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.



Robophobia: Great New Law Review Article – Part 1

May 19, 2022

This blog is the first part of my review of one of the most interesting law review articles I’ve read in a long time: Woods, Andrew K., Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Robophobia provides the first in-depth analysis of human prejudice against smart computer technologies and its policy implications. Robophobia is the next generation of technophobia, now focused on the human fear of replacing human decision-makers with robotic ones. For instance, I love technology, but am still very reluctant to let an AI drive my car. My son, on the other hand, loves to let his Tesla take over and do the driving while my knuckles go white. Then he plays the car’s damn fart noises and other joke features and I relax. Still, I much prefer a human at the wheel. This kind of anxiety about advanced technology decision-making is at the heart of the law review article.

Technophobia and its son, robophobia, are psychological anxieties that electronic discovery lawyers know all too well, often from first-hand experience working with other lawyers. This is especially true for those who work with active machine learning. Ediscovery lawyers tire of hearing that keyword search and predictive coding are not to be trusted, and that humans reviewing every document is the gold standard. Professor Woods goes into AI and ediscovery a little bit in Robophobia. He cites our friends Judge Andrew Peck, Maura Grossman, Doug Austin and others. But that is only a small part of this interesting technology policy paper. It argues that a central question now facing humanity is when and where to delegate decision-making authority to machines. This question should be answered based on facts and reason, not on emotions and unconscious prejudices.

Ralph and Robot

To answer this central question we need to recognize and overcome our negative stereotypes and phobias about AI. Robots are not all bad. Neither are people. Both have special skills and abilities and both make mistakes. As should be mentioned right away, Professor Woods in Robophobia uses the term “robot” very broadly to include all kinds of smart algorithms, not just actual robots. We need to overcome our robot phobias. Algorithms are already better than people at a huge array of tasks, yet we reject them for not being perfect. This must change.

Robophobia is a decision-making bias that interferes with our ability to make sensible policy choices. The law should help society decide when and what kinds of decisions should be delegated to robots, balancing the risk of using a robot against the risk of not using one. In my view, we need to overcome this bias now, and delegate responsibly, so that society can survive the current danger of misinformation overload. See, e.g., my blog, Can Justice Survive the Internet? Can the World? It’s Not a Sure Thing. Look Up!

This meta-review article (a review of a law review article) is written in three parts, each fairly short (for me), largely because the Robophobia article itself is over 16,000 words and has 308 footnotes. My meta-review will focus on the part I know best, the use of artificial intelligence in electronic discovery. The summary will include my typical snarky remarks to keep you somewhat amused, and several cool quotes of Woods, all in an attempt to entice some of you to take the deep dive and read Professor Woods’ entire article. Robophobia is all online and free to access at the University of Colorado Law Review website.

Professor Andrew Woods


Andrew Keane Woods is a Professor of Law at the University of Arizona College of Law. He is a young man with an impressive background. First the academics, since, after all, he is a professor:

  • Brown University, A.B. in Political Science, magna cum laude (2002);
  • Harvard Law School, J.D., cum laude (2007);
  • University of Cambridge, Ph.D. in Politics and International Studies (2012);
  • Stanford University, Postdoctoral Fellow in Cybersecurity (2012–2014).

As to writing, he has at least twenty law review articles and book chapters to his credit. Aside from Robophobia, some of the most interesting ones I see on his resume are:

  • Artificial Intelligence and Sovereignty, DATA SOVEREIGNTY ALONG THE SILK ROAD (Anupam Chander & Haochen Sun eds., Oxford University Press, forthcoming);
  • Internet Speech Will Never Go Back to Normal (with Jack Goldsmith), THE ATLANTIC (Apr. 25, 2020);
  • Our Robophobia, LAWFARE (Feb. 19, 2020);
  • Keeping the Patient at the Center of Machine Learning in Healthcare, 20 AMERICAN JOURNAL OF BIOETHICS 54 (2020) (w/ Chris Robertson, Jess Findley, Marv Slepian);
  • Mutual Legal Assistance in the Digital Age, THE CAMBRIDGE HANDBOOK OF SURVEILLANCE LAW (Stephen Henderson & David Gray eds., Cambridge University Press, 2020);
  • Litigating Data Sovereignty, 128 YALE LAW JOURNAL 328 (2018).

Bottom line, Woods is a good researcher (of course he had help from a zillion law students, whom he names and thanks) and a deep thinker on AI, technology, privacy, politics and social policies. His opinions deserve our careful consideration. In my language, his insights can help us move beyond mere information to genuine knowledge, perhaps even some wisdom. See, e.g., my prior blogs, Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); AI-Ethics: Law, Technology and Social Values (website).

Quick Summary of Robophobia

Bad Robot?

Robots – machines, algorithms, artificial intelligence – already play an important role in society. Their influence is growing very fast. Robots are already supplementing or even replacing some human judgments. Many are concerned with the fairness, accuracy, and humanity of these systems. This is rightly so. But, at this point, the anxiety about machine bias is crazy high. The concerns are important, but they almost always run in one direction. We worry about robot bias against humans. We do not worry about human bias against robots. Professor Woods shows that this is a critical mistake.

It is not an error because robots somehow inherently deserve to be treated fairly, although that may someday be true. It is an error because our bias against nonhuman deciders is bad for us humans. A great example Professor Woods provides is self-driving cars. It would be an obvious mistake to reject all self-driving cars merely because one causes a single fatal accident. Yet this is what happened, for a while at least, when an Uber self-driving car crashed into a pedestrian in Phoenix. See, e.g., fn. 71 of Robophobia: Ryan Randazzo, Arizona Gov. Doug Ducey Suspends Testing of Uber Self-Driving Cars, Ariz. Republic (Mar. 26, 2018). This kind of one-sided perfection bias ignores the fact that humans cause some forty thousand traffic fatalities a year in the United States, an average of three deaths every day in Arizona alone. We tolerate enormous risk from our fellow humans, but almost none from machines. That is flawed, biased thinking. Yet even rah-rah techno-promoters like me suffer from it.
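Woods’ comparative-risk point is, at bottom, simple arithmetic. Here is a hedged back-of-the-envelope sketch: the forty-thousand-deaths figure comes from the paragraph above, while the total-mileage number and the robot fleet’s record are my own rough assumptions for illustration, not data from the article:

```python
# Back-of-the-envelope comparative risk, in the spirit of Woods's argument.
# The ~40,000 deaths/year figure is quoted in the text above; the mileage
# total and the robot fleet's record are illustrative assumptions only.

US_VEHICLE_MILES_PER_YEAR = 3.2e12   # rough U.S. annual vehicle-miles (assumption)
HUMAN_DEATHS_PER_YEAR = 40_000       # figure from the text

# Fatalities per mile for human drivers, fleet-wide.
human_rate = HUMAN_DEATHS_PER_YEAR / US_VEHICLE_MILES_PER_YEAR

# A hypothetical robot fleet: one fatality over 100 million miles driven.
robot_rate = 1 / 100e6

print(f"human drivers: {human_rate * 1e8:.2f} deaths per 100M miles")
print(f"robot drivers: {robot_rate * 1e8:.2f} deaths per 100M miles")
# The rational policy question is which rate is lower (and improving faster),
# not whether the robot rate is zero.
```

The point of the sketch is the comparison itself: a single robot fatality tells us almost nothing until it is divided by miles driven and set beside the human baseline.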

Ralph hoping a human driver shows up soon.

Professor Woods shows that while there is a substantial literature concerned with algorithmic bias against humans, the opposite bias, ours against algorithms, has until now been ignored by scholars. Yet we routinely prefer worse-performing humans over better-performing robots. Woods points out that we do this on our roads, in our courthouses, in our military, and in our hospitals. As he puts it in the Highlights section that precedes the Robophobia article itself, which I am liberally paraphrasing in this Quick Summary: “Our bias against robots is costly, and it will only get more so as robots become more capable.”

Robophobia not only catalogs the many different forms of anti-robot bias that already exist, which he calls a taxonomy of robophobia; it also suggests reforms to curtail the harmful effects of that bias. Robophobia provides many good reasons to be less biased against robots. We should not be totally trusting, mind you, but less biased. It is in our own best interests. As Professor Woods puts it, “We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”

Note About “Robot” Terminology

Before we get too deep into Robophobia, we need to be clear about what Professor Woods means by the term, and to define our terms. Woods does this in his first footnote, which I quote in full (HAL image added):

The article is concerned with human judgment of automated decision-makers, which include “robots,” “machines,” “algorithms,” or “AI.” There are meaningful differences between these concepts and important line-drawing debates to be had about each one. However, this Article considers them together because they share a key feature: they are nonhuman deciders that play an increasingly prominent role in society. If a human judge were replaced by a machine, that machine could be a robot that walks into the courtroom on three legs or an algorithm run on a computer server in a faraway building remotely transmitting its decisions to the courthouse. For present purposes, what matters is that these scenarios represent a human decider being replaced by a nonhuman one. This is consistent with the approach taken by several others. See, e.g., Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135 (2019) (bundling artificial intelligence and physical robots under the same moniker, “robots”); Jack Balkin, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217, 1219 (2017) (“When I talk of robots … I will include not only robots – embodied material objects that interact with their environment – but also artificial intelligence agents and machine learning algorithms.”); Berkeley Dietvorst & Soaham Bharti, People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error, 31 PSYCH. SCI. 1302, 1314 n.1 (2020) (“We use the term algorithm to describe any tool that uses a fixed step-by-step decision-making process, including statistical models, actuarial tables, and calculators.”). This grouping contrasts with scholars who have focused explicitly on certain kinds of nonhuman deciders. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 529 (2015) (focusing on robots as physical, corporeal objects that satisfy the “sense-think-act” test as compared to, say, a “laptop with a camera”).

I told you Professor Woods was a careful scholar, but I wanted you to see for yourself, so I quoted footnote one in full. I promise to exclude footnotes and his many string cites going forward in this blog article, but I do intend to quote his insightful, policy-packed language frequently. Did you note his citation to Chief Justice Roberts in his explanation of “robophobia”? I will end this first part of my review of Robophobia with a side excursion into the Chief Justice Roberts cite. It provides a good example of irrational robot fears and insight into the Chief Justice himself, which is something I’ve been considering a lot lately. See, e.g., my recent article The Words of Chief Justice Roberts on JUDICIAL INTEGRITY Suggest the Supreme Court Should Step Away from the Precipice and Not Overrule ‘Roe v Wade’.

Chief Justice Roberts Told High School Graduates in 2018 to “Beware the Robots”

The Chief Justice gave a very short speech at his daughter’s private high school graduation. There he demonstrated a bit of robot anxiety, but did so in an interesting manner. It bears some examination before we get into the substance of Woods’ Robophobia article. For more background on the speech, see, e.g., Debra Cassens Weiss, ‘Beware the robots,’ chief justice tells high school graduates (June 6, 2018). Here are the excerpted words of Chief Justice John Roberts:

Beware the robots! My worry is not that machines will start thinking like us. I worry that we will start thinking like machines. Private companies use artificial intelligence to tell you what to read, to watch and listen to, based on what you’ve read, watched and listened to. Those suggestions can narrow and oversimplify information, stifling individuality and creativity.

Any politician would find it very difficult not to shape his or her message to what constituents want to hear. Artificial intelligence can change leaders into followers. You should set aside some time each day to reflect. Do not read more, do not research more, do not take notes. Put aside books, papers, computers, telephones. Sit, perhaps just for a half hour, and think about what you’re learning. Acquiring more information is less important than thinking about the information you have.

Aside from the robot-fear part, which was really just an attention-grabbing device, I could not agree more with his main point. We should move beyond mere information; we should take time to process it and subject it to critical scrutiny. We should transform from mere information gatherers into knowledge makers. My point exactly in Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015). You could also compare this progression with an e-discovery example: moving from mere keyword search to predictive coding.
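For readers who like to see that contrast in concrete terms, here is a minimal, purely illustrative Python sketch of the difference between keyword search (a fixed filter) and a predictive-coding-style ranking (learning from reviewer-coded seed documents). All document texts, seed examples, and function names are invented for illustration; real predictive coding uses trained machine-learning classifiers, not this toy word-overlap score.

```python
from collections import Counter

def keyword_search(docs, keywords):
    """Information gathering: return ids of docs containing any keyword verbatim."""
    hits = []
    for doc_id, text in docs.items():
        if set(text.lower().split()) & set(keywords):
            hits.append(doc_id)
    return sorted(hits)

def train_centroid(seed_docs):
    """'Learn' from reviewer-marked relevant seeds by pooling their word counts."""
    centroid = Counter()
    for text in seed_docs:
        centroid.update(text.lower().split())
    return centroid

def score(centroid, text):
    """Score a document by its word overlap with the relevant-seed centroid."""
    words = Counter(text.lower().split())
    return sum(min(words[w], centroid[w]) for w in words)

def predictive_rank(docs, seed_docs):
    """Knowledge making: rank every doc by learned similarity, most relevant first."""
    centroid = train_centroid(seed_docs)
    return sorted(docs, key=lambda d: score(centroid, docs[d]), reverse=True)
```

The design point: keyword search only returns documents that happen to contain the exact terms a lawyer guessed in advance, while the ranking approach generalizes from examples a human reviewer has already judged, which is the progression from raw information toward knowledge.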


Part Two of my review of Robophobia is coming soon. In the meantime, take a break and think about any fears you may have about AI. Everyone has some. Would you let the AI drive your car? Select your documents for production? Are our concerns about killer robots really justified, or maybe just the result of media hype? For more thoughts on this, see AI-Ethics.com. And yes, I’ll be Baaack.


Purple Rain of Sanctions Falls on the Record Company in the “Prince Case” for their Intentional Destruction of Text Messages

March 10, 2019

Honey, I know, I know
I know times are changing
It’s time we all reach out
For something new, that means you too.
Purple Rain
PRINCE Rogers Nelson
1958-2016

I know, I know, it used to be good enough just to save the relevant emails and ESI on company computers. Not anymore. Times are changing. Important business is now conducted by text and other mobile messages. It’s time we all reach out and save something new: save the texts, save the phones. That directive applies to everyone, and that means you too. Prince record company executives recently found that out the hard way in District Court in Minneapolis. Paisley Park Enters. v. Boxill, No. 0:17-cv-01212 (D. Minn. Mar. 5, 2019) (copy here: Prince_Discovery_Order).

United States Magistrate Judge Tony N. Leung sanctioned the record company defendant and its two top executives in a suit over the posthumous release of Prince’s “Deliverance” album. They were sanctioned because the plaintiff, the Prince Estate via Paisley Park, proved that the defendant executives intentionally destroyed text messages about the album. The executives denied bad intent and claimed they did what they thought the law required: save the emails and office computer data. Defendants claimed they provided discovery from other sources of ESI, including their work computers, and cooperated with a forensic data firm to ensure Plaintiffs obtained everything they sought; they further argued that Plaintiffs never asked to inspect their cell phones during this process. They claimed they did not know they also had to preserve their text messages.

One is reminded of the first verse to Purple Rain:
I never meant to cause you any sorrow
I never meant to cause you any pain
I only wanted one time to see you laughing
I only wanted to see you
Laughing in the purple rain

Judge Tony Leung was not laughing, purple rain or not. He did not believe defendants’ good-faith argument. He was no more impressed by their “times are changing,” “we didn’t know” argument than Prince was in Purple Rain. In today’s world, preservation of email is not enough. If text messages are how people do business, which was the case in Paisley Park, then those messages must also be preserved. As Judge Leung put it:

In the contemporary world of communications, even leaving out the potential and reality of finding the modern-day litigation equivalent of a “smoking gun” in text messages, e-mails, and possibly other social media, the Court is baffled as to how Defendants can reasonably claim to believe that their text messages would be immune from discovery.

Perhaps what really got to the judge was that these record executives not only deleted the texts, they wiped the phones and then threw them away. This was all before suit was filed, but they knew full well at the time that the Estate was going to sue them for copyright violations. As Judge Leung explained (emphasis added): “An e-discovery lawyer for Plaintiffs’ law firm indicates that had Staley and Wilson not wiped and discarded their phones, it might have been possible to recover the deleted messages. (ECF No. 387, p. 2).” (Note: this is the first time I can recall the expression “e-discovery lawyer” being used in an opinion.)

Text Message Spoliation Law

Judge Leung provides a good summary of the law.

The Federal Rules of Civil Procedure require that parties take reasonable steps to preserve ESI that is relevant to litigation. Fed. R. Civ. P. 37(e). The Court may sanction a party for failure to do so, provided that the lost ESI cannot be restored or replaced through additional discovery. Id. Rule 37(e) makes two types of sanctions available to the Court. Under Rule 37(e)(1), if the adverse party has suffered prejudice from the spoliation of evidence, the Court may order whatever sanctions are necessary to cure the prejudice. But under Rule 37(e)(2), if the Court finds that the party “acted with the intent to deprive another party of the information’s use in the litigation,” the Court may order more severe sanctions, including a presumption that the lost information was unfavorable to the party or an instruction to the jury that it “may or must presume the information was unfavorable to the party.” The Court may also sanction a party for failing to obey a discovery order. Fed. R. Civ. P. 37(b). Sanctions available under Rule 37(b) include an order directing that certain designated facts be taken as established for purposes of the action, payment of reasonable expenses, and civil contempt of court.

Pgs. 6-7

There is no doubt that Staley and Wilson are the types of persons likely to have relevant information, given their status as principals of RMA and owners of Deliverance. Nor can there be any reasonable dispute as to the fact that their text messages were likely to contain information relevant to this litigation. In fact, Boxill and other third parties produced text messages that they sent to or received from Staley and Wilson. Neither party disputes that those text messages were relevant to this litigation. Thus, the RMA Defendants were required to take reasonable steps to preserve Staley and Wilson’s text messages.

The RMA Defendants did not do so. First, Staley and Wilson did not suspend the auto-erase function on their phones. Nor did they put in place a litigation hold to ensure that they preserved text messages. The principles of the “standard reasonableness framework” require a party to “suspend its routine document retention/destruction policy and put in place a ‘litigation hold’ to ensure the preservation of relevant documents.” Steves and Sons, Inc. v. JELD-WEN, Inc., 327 F.R.D. 96, 108 (E.D. Va. 2018) (citation and internal quotation marks omitted). It takes, at most, only a few minutes to disengage the auto-delete function on a cell phone. It is apparent, based on Staley’s affidavit, that he and Wilson could have taken advantage of relatively simple options to ensure that their text messages were backed up to cloud storage. (ECF No. 395, pp. 7-9). These processes would have cost the RMA Defendants little, particularly in comparison to the importance of the issues at stake and the amount in controversy here. Failure to follow the simple steps detailed above alone is sufficient to show that Defendants acted unreasonably.

Pgs. 8-9

But that is not all the RMA Defendants did and did not do. Most troubling of all, they wiped and destroyed their phones after Deliverance and RMA had been sued, and, in the second instance for Wilson, after the Court ordered the parties to preserve all relevant electronic information, after the parties had entered into an agreement regarding the preservation and production of ESI, and after Plaintiffs had sent Defendants a letter alerting them to the fact they needed to produce their text messages. As Plaintiffs note, had Staley and Wilson not destroyed their phones, it is possible that Plaintiffs might have been able to recover the missing text messages by use of the “cloud” function or through consultation with a software expert. But the content will never be known because of Staley and Wilson’s intentional acts. The RMA Defendants’ failure to even consider whether Staley and Wilson’s phones might have discoverable information before destroying them was completely unreasonable. This is even more egregious because litigation had already commenced.

Pg. 9

It is obvious, based on text messages that other parties produced in this litigation, that Staley and Wilson used their personal cell phones to conduct the business of RMA and Deliverance. It is not Plaintiffs’ responsibility to question why RMA Defendants did not produce any text messages; in fact, it would be reasonable for Plaintiffs to assume that Defendants’ failure to do so was on account of the fact that no such text messages existed. This is because the RMA Defendants are the only ones who would know the extent that they used their personal cell phones for RMA and Deliverance business at the time they knew or should have reasonably known that litigation was not just possible, but likely, or after Plaintiffs filed suit or served their discovery requests.

Furthermore, the RMA Defendants do not get to select what evidence they want to produce, or from what sources. They must produce all responsive documents or seek relief from the court. See Fed. R. Civ. P. 26(c) (outlining process for obtaining protective order).

Pg. 12

Having concluded that the RMA Defendants did not take reasonable steps to preserve and in fact intended to destroy relevant ESI, the Court must next consider whether the lost ESI can be restored or replaced from any other source. Fed. R. Civ. P. 37(e).

Pg. 13

While it is true that Plaintiffs have obtained text messages that Boxill and other parties sent to or received from Staley and Wilson, that does not mean that all responsive text messages have been recovered or that a complete record of those conversations is available. In particular, because Wilson and Staley wiped and destroyed their phones, Plaintiffs are unable to recover text messages that the two individuals sent only to each other. Nor can they recover text messages that Staley and Wilson sent to third parties to whom Plaintiff did not send Rule 45 subpoenas (likely because they were not aware that Wilson or Staley communicated with those persons). The RMA Defendants do not dispute that text messages sent between Staley and Wilson are no longer recoverable.  . . .

At most, Plaintiffs now can obtain only “scattershot texts and [e-mails],” rather than “a complete record of defendants’ written communications from defendants themselves.” First Fin. Sec., Inc. v. Lee, No. 14 cv-1843, 2016 WL 881003 *5 (D. Minn. Mar. 8, 2016). The Court therefore finds that the missing text messages cannot be replaced or restored by other sources.

Pgs. 13-14

There is no doubt that Plaintiffs are prejudiced by the loss of the text messages. Prejudice exists when spoliation prohibits a party from presenting evidence that is relevant to its underlying case. Victor Stanley, 269 F.R.D. at 532. As set forth above, in the Court’s discussion regarding their ability to replace or restore the missing information, Plaintiffs are left with an incomplete record of the communications that Defendants had with both each other and third parties. Neither the Court nor Plaintiffs can know what ESI has been lost or how significant that ESI was to this litigation. The RMA Defendants’ claim that no prejudice has occurred is “wholly unconvincing,” given that “it is impossible to determine precisely what the destroyed documents contained or how severely the unavailability of these documents might have prejudiced [Plaintiffs’] ability to prove the claims set forth in [their] Complaint.” Telectron, Inc. v. Overhead Door Corp., 116 F.R.D. 107, 110 (S.D. Fl. 1987); see also Multifeeder Tech., Inc. v. British Confectionary Co. Ltd, No. 09-cv-1090, 2012 WL 4128385 *23 (D. Minn. Apr. 26, 2012) (finding prejudice because Court will never know what ESI was destroyed and because it was undisputed that destroying parties had access to relevant information), report and recommendation adopted in part and rejected in part by 2012 WL 4135848 (D. Minn. Sept. 18, 2012). Plaintiffs are now forced to go to already existing discovery and attempt to piece together what information might have been contained in those messages, thereby increasing their costs and expenses. Sanctions are therefore appropriate under Rule 37(e)(1).

Sanctions are also appropriate under Rule 37(e)(2) because the Court finds that the RMA Defendants acted with the intent to deprive Plaintiffs of the evidence. “Intent rarely is proved by direct evidence, and a district court has substantial leeway to determine intent through consideration of circumstantial evidence, witness credibility, motives of the witnesses in a particular case, and other factors.” Morris v. Union Pacific R.R., 373 F.3d 896, 901 (8th Cir. 2004). There need not be a “smoking gun” to prove intent. Auer v. City of Minot, 896 F.3d 854, 858 (8th Cir. 2018). But there must be evidence of “a serious and specific sort of culpability” regarding the loss of the relevant ESI. Id.

Pgs. 15-16

The Court can draw only one conclusion from this set of circumstances: that they acted with the intent to deprive Plaintiffs from using this information. Rule 37(e)(2) sanctions are particularly appropriate as to Wilson, RMA, and Deliverance for this reason as well.

Pg. 17

The Court believes that Plaintiffs’ request for an order presuming the evidence destroyed was unfavorable to the RMA Defendants and/or for an adverse inference instruction may well be justified. But given the fact that discovery is still on-going, the record is not yet closed, and the case is still some time from trial, the Court believes it more appropriate to defer consideration of those sanctions to a later date, closer to trial. See Monarch Fire Protection Dist. v. Freedom Consulting & Auditing Servs., Inc., 644 F.3d 633, 639 (8th Cir. 2011) (holding that it is not an abuse of discretion to defer sanction considerations until trial). At that point, the trial judge will have the benefit of the entire record and supplemental briefing from the parties regarding the parameters of any such instruction or presumption.

The Court will, however, order the RMA Defendants to pay monetary sanctions pursuant to Rules 37(b), and 37(e) and the Court’s pretrial scheduling orders.

Pgs. 18-19

The Court will therefore order, pursuant to Rules 37(b)(2)(C), 37(e)(1), and 37(e)(2) and the Court’s pretrial scheduling orders, the RMA Defendants to pay reasonable expenses, including attorney’s fees and costs, that Plaintiffs incurred as a result of the RMA Defendants’ misconduct. The Court will order Plaintiffs to file a submission with the Court detailing such expenses and allow the RMA Defendants the opportunity to respond to that submission. In addition, pursuant to Rule 37(e)(2) and the Court’s pretrial scheduling order, the Court will also order the RMA Defendants to pay into the Court a fine of $10,000. This amount is due within 90 days of the date of this Order.

Pg. 20

Let’s close with the original music of Prince.

 

 

