Examining a Leaked Criminal Warrant for Apple iCloud Data in a High Profile Case – Part One

June 21, 2022

Inadvertently Disclosed Warrant Application Against Apple in a Criminal Investigation Against Retired Marine General Reveals Latest DOJ Search Procedures, the Dangers of Pacer and Too Much Court Record Transparency, and Much More – Part One

According to a June 7, 2022, New York Times report:

John R. Allen

Federal prosecutors have obtained records indicating that John R. Allen, the retired four-star Marine general who commanded all American troops in Afghanistan and now heads a venerable Washington think tank, secretly lobbied for the government of Qatar, lied to investigators about his role and tried to withhold evidence sought by a federal subpoena, according to court documents.

The court records are the latest evidence of a broad investigation by the Justice Department and F.B.I. into the influence that wealthy Arab nations like Qatar, the United Arab Emirates and Saudi Arabia wield in Washington.

The records about General Allen were filed in April in Federal District Court in Central California in an application for a warrant to search General Allen’s electronic communications.

NYT 6/07/22, Mark Mazzetti and David D. Kirkpatrick

I do not know what records federal prosecutors have obtained, and I have no opinion regarding the guilt or innocence of the accused, except to note that all persons must be presumed innocent until proven guilty in a court of law, and that I personally hope John Allen is not guilty. There has been no judgment by a judge or jury, but significant evidence has been presented against the retired General.

Application For Warrant To Seize Digital Evidence From Apple

The evidence was presented to the Central District Court in California as part of the government’s seventy-seven-page application for a warrant. The Court Clerk, for some reason, did not keep this Application under seal; it was not kept secret. Somehow the Associated Press International found it in clear and open view on Pacer, downloaded it and quickly published it. This disclosure is, we understand, alleged to have been a mistake. That is suspicious in view of the current political climate and the fact that retired General Allen was, until the release of this warrant, the President of the Brookings Institution, which is generally considered a liberal organization. More on that later.

That application for warrant, formally entitled APPLICATION FOR A WARRANT BY TELEPHONE OR OTHER RELIABLE ELECTRONIC MEANS (“Application”), will be discussed here as part of the discussion of the ESI discovery procedures, especially the search procedure laid out in the Application. The facts of the case and questions surrounding the disclosure of this secret Application are secondary, but will not be ignored. Both are disturbing, especially the content of the FBI agent’s sworn affidavit that supported the Application. This bite of the Apple may very well be sour. I hope not.

Basic Introduction to Criminal eDiscovery by Tom O’Connor

The Application reveals the procedures now used by the DOJ to obtain a person’s private communications from a provider, in this case Apple, as part of a criminal investigation. For excellent background on ediscovery in criminal cases, see the five-part article by Tom O’Connor published by Cloud Nine: part one, part two, part three, part four and part five. Also see: David Horrigan’s good article, At the Border and Beyond: e-Discovery Aspects of Criminal Matters and Investigations (Relativity Blog, 2/16/17).

Tom O’Connor

By the way, I seem to recall from socializing with Tom O’Connor at a CLE someplace, many years ago, that Tom, a huge Rolling Stones fan, got his start in e-discovery as a technical advisor in criminal matters. He told me about one case in particular where he helped the public defenders in Seattle on ediscovery issues in a notorious motorcycle gang case. Tom may look like Santa now, but he is tough and has been around eDiscovery longer than most. About the only lawyer I know with as much criminal e-discovery experience, including from the other side of the bench, is another good friend, retired judge Ron Hedges. See, e.g., Ron’s recent criminal law articles, Despite Rulings, 4th Amendment Battles over GeoFence Warrants are Far from Over, Legaltech news, May 2022; and Hot Topics for ESI in Criminal Matters, Criminal Justice 43 (ABA Crim. Section: Fall 2016).

In part three of Tom O’Connor’s above-mentioned article, he explains the basics of applications, a/k/a affidavits, like this one for an ESI criminal warrant.

The government will usually get its ESI by consent or warrant. Typically, when the federal government seeks data in criminal cases (and most states have a similar procedure), it requests a search and seizure warrant by filing an application or affidavit sworn before a judge. The application, as provided in Rule 41, identifies the location of the property to be searched and seized, and includes facts that support probable cause (a reasonable belief that a crime has been committed and evidence of such may be at the site) as to why the government needs (and should get) the property. . . .

F.R. Crim. P. Rule 41 then establishes a two-step process when ESI is involved. The first step is the seizure and then a subsequent review of the ESI which must be consistent with the warrant. There is no time frame established for this review since it may take a substantial amount of time, especially with encrypted drives. . . .

Regarding third parties, the court may issue a subpoena under F. R. Crim. P. Rule 17 for a third party to produce records at trial or at another time and place.  This is typically a bank or cell phone carrier but can be any non-party thought to be in possession of relevant information. The court may then allow the defense to inspect all or part of the ESI.

Tom O’Connor

FBI’s Affidavit Is Detailed

I am relieved to see that the government’s warrant Application, and I presume all other similar warrant applications of the DOJ, whether of famous public figures or not, was supported by very detailed allegations of fact. The Application was made by an FBI agent. I did not, however, see the agent’s signature at page 77, and so wonder about that. How can it be a sworn affidavit without a signature? (Answer, it can’t!) This, along with the accidental disclosure itself, appears to be another sloppy error by the DOJ. Also, the final paragraph of the FBI factual affidavit, numbered 154, is blank. Another mere scrivener’s error? I assume that was all corrected later. But who knows? The court file was resealed after the API leak. Aside from the clerical errors, the actual contents of the FBI’s statement, sworn to or not, were compelling. I have to assume the Warrant was granted and John Allen’s iCloud account was collected and turned over by Apple to the FBI.

Accused Is a Marine Hero President of the Brookings Institution

John Allen is a retired Marine, a Four Star General, who, among other things, served as the commander of the NATO International Security Assistance Force and U.S. Forces – Afghanistan. He retired in 2013 and joined the prestigious Brookings Institution. In 2017 he was named the President of Brookings and served in that position until he was put on leave on June 8, 2022, when this Application was published and allegations were revealed concerning illegal lobbying efforts that he supposedly made on behalf of the Persian Gulf nation of Qatar. By the way, Qatar was a major donor to Brookings.

I will not go into the facts of the FBI agent’s sworn statement in detail in this blog article, but will not attempt to cover them up either. Some mention of them is necessary to discuss the ESI discovery points, which are the focus of this article. But I will not do so with glee. Just the contrary. The fact that these allegations are made against John Allen, one of the most decorated Marines in history, saddens me very much. It has nothing to do with the Brookings Institution, which does not seem all that liberal to me.

My dear brother-in-law was a Marine, always a Marine, and my father was a Naval Officer in two wars. My Dad’s ships, where he was the communications officer, used to shuttle Marines around in the Pacific. They were both exemplary models of integrity, honesty and courage, for which I am very grateful. May they rest in peace. These allegations against John Allen, if true, and we must presume innocence, are contrary to the Marine and Navy codes of honor and our American way of life. Personally, I hope retired General Allen is innocent and did not, as alleged, lie for financial gain.

Property To Be Searched

The Application in paragraph three identifies the property to be searched under the warrant as three Apple iCloud accounts:

a. . . . identified as iCloud associated with DSID / Apple Account Number 1338547227 and / or email address rickscafedxb@yahoo.com that is within the possession, custody, or control of Apple, Inc., a company that accepts service of legal process at One Apple Park Way, M / S 169-5CLP, Cupertino, California 95014, regardless of where such information is stored, held, or maintained. . . .

b. iCloud Account associated with DSID/Apple Account number 120757353 and/or email address imaad.zuberi@mindspring.com. . . .

c. iCloud Account associated with DSID/Apple Account Number 270847771 and/or email address j.rutherford.allen@gmail.com. . . .

Application, paragraph 3
Richard Olson

Paragraph 14 of the Application identifies the account holder for rickscafedxb@yahoo.com:

Richard Gustave Olson, Jr. (“Olson”), the user of SUBJECT ACCOUNT 2, served as U.S. Ambassador to the United Arab Emirates (“UAE”) from October 2008 through May 2011 and U.S. Ambassador to Pakistan from October 2012 through November 2015. From November 2015 through November 2016, Olson held the position of U.S. Special Representative for Afghanistan and Pakistan.

I understand that Olson was a career U.S. diplomat and his wife is also a U.S. diplomat.

Paragraph 12 of the Application identifies the account holder for imaad.zuberi@mindspring.com:

Imaad Shah Zuberi (“Zuberi”), the user of SUBJECT ACCOUNT 1, is an American businessman who operated a business entity named Avenue Ventures. Zuberi’s business largely consisted of receiving funds from foreign clients, using those funds to make political campaign contributions, parlaying those contributions into political influence, and using that influence to change U.S. government policy for his foreign clients.

Disgusting to me, but not yet illegal. The Application goes on to describe some of the fraud that Zuberi has been convicted of, but note, Zuberi’s conviction has been appealed. Here is a copy of the indictment of Zuberi, if you want to learn more about this charming guy, who donated to Republicans and Democrats alike.

Imaad Shah Zuberi

13. On November 22, 2019, in United States v. Zuberi, CR 19-642-VAP, Zuberi pleaded guilty to a FARA offense in violation of 22 U.S.C. §§ 612, 618(a)(2), Federal Election Campaign Act offenses in violation of 52 U.S.C. §§ 30116, 30118, 30121, 30122, 30109(d)(1), and tax evasion in violation of 26 U.S.C. 7201. All the aforementioned charges were unrelated to the instant investigation. On June 30, 2020, in United States v. Zuberi, CR 20-155-VAP, Zuberi also pleaded guilty to obstruction of justice (witness tampering) in violation of 18 U.S.C. §1512(c) related to an investigation conducted out of the Southern District of New York. On February 23, 2021, in a consolidated sentencing, the court determined that Zuberi’s obstruction extended beyond the single incident charged and that it included his deletion of four email accounts under the domain avenueventures.com exclusively under his control as well as paying several witnesses millions of dollars to silence them in connection with the government’s investigation. The court sentenced Zuberi to 12 years’ imprisonment.

More about Diplomat Olson. The Application alleges in paragraph 17:

17. On January 14, 2022, Olson entered into a plea agreement with the government that requires him to enter pleas of guilty to Making a False Writing in violation of 18 U.S.C. §1018 (related to his filing a false financial disclosure form in 2016) and Aiding and Advising a Foreign Government with Intent to Influence Decisions of United States Officers in violation of 18 U.S.C. §§ 207(f)(1)(B), 216(a)(1) (relating to his work in support of Qatar).

Paragraph 92 of the Application explains why they are searching the iCloud for these emails, in short, because they had all been deleted by the parties, once they learned of the DOJ investigation. That’s not just bad faith spoliation, that’s destruction of evidence in a criminal investigation, which is itself a crime. You know the common criminal law wisdom, the cover-up is worse than the crime.

92. In his interview subject to a limited use immunity agreement, Olson informed the government that in the spring of 2019, after he became aware of the government’s investigation of Zuberi, Zuberi asked him to delete emails pertaining to Allen from his rickscafedxb@yahoo.com account to protect Allen from government investigators. Olson admitted that he indeed deleted emails in response to Zuberi’s request. Search warrants for Olson’s rickscafedxb@yahoo.com account and Allen’s j.rutherford.allen@gmail.com account reveal that the emails Allen and Olson did not produce no longer exist on the providers’ servers.

The DOJ is going after the Apple iCloud accounts in the hope they will find a backup of deleted emails, or other incriminating evidence, to try to prove their case against the remaining defendant, General Allen. All kinds of ESI can be stored in an iCloud account. Of course, this discovery may fail. Apple may respond to the warrant after service by saying that these iCloud accounts no longer exist, and there is no backup at this late date. We do not know who will be yelling at the cloud, but someone is likely to be angry. Isn’t that usually the way it is in ediscovery?

Related Lobbying Fraud Cases

The Washington Post recently reported that Imaad Zuberi had also been working with billionaire Thomas J. Barrack, Jr., Trump’s longtime friend and presidential inaugural committee chairman. Ex-U.S. diplomat pleads guilty in Qatar lobbying plot, names general (Allen) (Washington Post, 6/3/22). Barrack, who sold Trump The Plaza hotel in NYC, has also been criminally charged with obstructing justice and acting as an unregistered agent for the UAE. To quote the Washington Post article:

Tom Barrack

Barrack, one of Trump’s closest associates on his road to the White House, pleaded not guilty last summer and was freed on $250 million bond pending trial on charges of conspiring to secretly lobby for the UAE, which invested significantly in his investment firm, Colony Capital.

In October 2020, Elliott Broidy, a Trump fundraiser and former Republican National Committee deputy finance chairman who also received a $200 million security contract with the UAE, pleaded guilty to acting as an unregistered foreign agent and accepting millions of dollars to secretly lobby the Trump administration for Malaysian and Chinese interests.

The same cited Washington Post article reports that Olson pleaded guilty on June 3, 2022, to charges in connection with a secret lobbying campaign on behalf of Qatar to influence the Trump White House and Congress in 2017. Ex-U.S. diplomat pleads guilty in Qatar lobbying plot, names general (Allen).

Rick’s Cafe

Finally, to conclude Part One of this long blog on a cute movie note, consider the name of the personal email account of the once illustrious U.S. Ambassador, Richard (“Rick”) Gustave Olson, Jr.: rickscafedxb@yahoo.com. The former ambassador’s personal email name is a reference to Casablanca, the WWII cloak and dagger film starring Humphrey Bogart and Ingrid Bergman. The restaurant nightclub in the movie was called Rick’s Cafe. The lead character, Rick Blaine, played by Bogart, owned the cafe and was a cynical neutral, serving both sides in Morocco for profit, until the end of the movie, when he chose against the Nazis. The best scenes were in Rick’s Cafe, where many Nazis would party. See: ‘Play It Again, Issam’: The Real ‘Rick’s Cafe’ Story. The abbreviation of the Dubai airport, in the United Arab Emirates where Rick Olson was the U.S. Ambassador, is DXB. Add it all up and you get the Ambassador’s personal email handle, ricks cafe dxb. Yahoo!

Play it again Sam (referring to the jazz song, As Time Goes By, played below by Dooley Wilson). I wonder how many times Rick Olson will play that movie in prison? Maybe none, because it’s possible that he may, somewhat like some banks, be too big to jail. Time will tell. Apparently his sentencing may depend on how helpful he is in the government’s case against General Allen.

Play It Again, Sam (Dooley Wilson)


To be continued. . . The next parts of this blog will go over: Section I of the Application, Search Procedures; Section II, Information to be Disclosed by Provider; Section III, Information to be Seized by the Government; and Section IV, Provider Procedures.

The question of the supposedly inadvertent disclosure of the Application, from sealed court file to the API, will also be considered. All speculation of course. It could have been on purpose, to embarrass or prejudice John Allen, but it could also very well have been an accident. All too easy to happen when humans are involved.

I have some personal experience with a case that was accidentally unsealed. I’ll talk about that case from over ten years ago in Part III. It was an exception to today’s rule that, where confidential ESI is concerned, once the genie is out of the bottle, it can’t be put back in. In the early days of Pacer it could. But these days, with ever more complex e-filing rules and Pacer services everywhere, once a mistake is made, and confidential information goes online, the genie is gone.

Feel free to leave any comments below.

Robophobia: Great New Law Review Article – Part 3

June 14, 2022
Professor Andrew Woods

This article is the conclusion to my three-part review of Robophobia by Professor Andrew Woods. Robophobia, 93 U. Colo. L. Rev. 51 (Winter, 2022). See here for Part 1 and Part 2 of my review. This may seem like a long review, but remember Professor Woods’s article has 24,614 words, not that I’m counting, including 308 footnotes. It is a groundbreaking work and deserves our attention.

Part 2 ended with a review of Part V of Andrew’s article. There he laid out his key argument, The Case Against Robophobia. Now my review moves on to Part VI of the article, his proposed solutions, and the Conclusion, his and mine.

Part VI. Fighting Robophobia

So what do we do? Here is Andrew Woods’s overview of the solution.

The costs of robophobia are considerable and they are likely to increase as machines become more capable. The greater the difference between human and robot performance, the greater the costs of preferring a human. Unfortunately, the problem of robophobia is itself a barrier to reform. It has been shown in several settings that people do not want government rules mandating robot use.[272] And policymakers in democratic political systems must navigate around—and resist the urge to pander to—people’s robophobic intuitions. So, what can be done?

Robophobia is a decision-making bias⎯a judgment error.[273] Fortunately, we have well-known tools for addressing judgment errors. These include framing effects, exposure, education and training, and, finally, more radical measures like designing situations so that biased decision-makers—human or machine—are kept out of the loop entirely.

If you don’t know what all of these “well-known” tools are, don’t feel too bad, I don’t either. Hey, we’re only human. And you’d rather read my all-too-human review of this article than one written by a computer, which is all too common these days. You would, right? Well, ok, I do know what exposure, education and training mean. I’ve certainly sat through more than my fair share of anti-prejudice training, often very boring training at that. Now we add robots to the list of sensitivity training. Still, what the hell are “framing effects”?

More on that in a second, but also what does the professor mean by keeping biased decision makers out of the loop? As a longtime promoter of “Hybrid” Multimodal Predictive Coding, where hybrid means humans and AI working together, my defenses are up when there is talk of making it a default to exclude humans entirely. Yes, I suppose my prejudices are now showing. But when it comes to my experience in working with robots in evidence search, I like to delegate, but have the final say. That’s what I mean by balanced. More on this later.

Professor Woods tries to explain “framing effects” so that even a non-PhD lawyer can understand it. It turns out that framing effects have to do with design, like how you frame a question often impacts the answers. Objection, leading! It includes such things as what lawyers commonly call “putting lipstick on a pig.” Make the robots act more human-like (but not too much, so that you don’t enter the “uncanny valley”). For example, have them pause for a second or two, which is like an hour at their speeds, before they make a recommendation. Maybe even say “hmm” and stroke their chin. How about showing some good robot paintings and poems. Non-creepy anthropomorphic design strategies appear to work, as do strategies of making “use of a robot” the default action, instead of an elective alternative. This requires you to make an affirmative effort to opt out and use a human, instead of a robot. We are so lazy, we might just go with the process, especially if we are repeatedly told how it will save us money and provide better results; i.e. – Predictive coding, TAR, is better than sliced bread!

Now to the threatening idea, to me at least, and possibly you, to just keep humans out of the loop entirely. Robot knows best. Big Brother as a robot. Does this thing even have a plug?

From the famous Apple Commercial in 1984

Let’s see what Professor Woods has to say about this just let the machines do it idea. I agree with much of what he says here, especially about automated driving, but still want to know how to turn the robots off and stop the surveillance.

[T]here is considerable evidence in a number of scenarios that keeping humans in the loop eliminates the advantages of having an automated system in the first place and, in some instances, actually makes things worse. . . . The National Highway Traffic Safety Administration recognizes six levels of automotive autonomy, ranging from 0 (no automation) to 3 (conditional automation) to 5 (full automation).[302] Some people believe that a fully autonomous system is safer than a human driver, but that a semi-autonomous system—where a human driver works with the autonomous system—is actually less safe than a system that is purely human driven.[303] That is, autonomy can increase safety, but the increase in safety is not linear; introducing some forms of autonomy can introduce new risks.[304] . . .

If algorithms can, at times, make better decisions than humans, but human use of those algorithms eliminates those gains, what should be done? One answer is to ban semi-autonomous systems altogether; human-robot interaction effects are no longer a problem if humans and robots are not allowed to interact. Another possibility would be to ban humans from some decision-making processes; a purely robotic system would not have the same negative human-robot interaction effects. This might mean fewer automated systems but would only leave those with full autonomy.

If humans misjudge algorithms—by both over- and underrelying on them—can they safely coexist? Take again the example of self-driving cars. If robot-driven cars are safer than human-driven cars but human-driven cars become less safe around robot cars, what should be done? Robots can simultaneously make the problem of road safety better and worse. They might shift the distribution of road harms from one set of drivers to another. Or it might be that having some number of robot drivers in a sea of human drivers is actually less safe for all drivers than a system with no robot drivers. The problem is the interaction effect. In response, we might aim to improve robots to work better with humans or improve humans to work better with robots. Alternatively, we might simply decide there are places where human-robot combinations are too risky and instead opt for purely human or purely machine decision-making.


So the solution to this problem is a work in progress. What did you expect from the first article to recognize robophobia? These solutions will take time for us to work out. I predict this will be the first of many articles on this topic, an iterative process, like most things robotic, including law’s favorite, predictive coding. Something I have been teaching and talking about since 2012, when I first began working with these robots and made this snazzy video.

Ralph Losey’s Predictive Discovery Ver. 2.0

Even though it worries me, I understand what Andrew means about the problems of humans in the loop. I have seen far too many lawyers, usually dilettantes with predictive coding, screw everything up royally by not knowing what they are doing and by making mistakes. That is one reason that I developed and taught my method of semi-automated document review for years.

Ralph at NIST’s Trec in 2015

I have tried to dumb it down as much as possible to reduce human error and make it as accessible as possible, but it is still fairly complex. The latest version is called Predictive Coding 4.0, a Hybrid Multimodal IST method, where Hybrid means both man and machine, Multimodal means all types of search algorithms are used, and IST stands for Intelligently Spaced Training. IST means you keep training until first pass relevance review is completed, a type of Continuous Active Learning, which Grossman, Cormack, and then others, called CAL. I have had a chance to test out and prove this type of robot many times, including at NIST, with Maura and Gordon. After much hands-on experience, I have overcome many of my robophobias, but not all. See: WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0.
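For readers who like to see the gears turn, the CAL-style loop described above can be sketched in a few lines of Python. To be clear, this is a hypothetical toy, not Mr. EDR or any real TAR engine: the “classifier” is just a crude keyword scorer standing in for a trained statistical model, and the oracle function stands in for the human lawyer’s relevance calls.

```python
# Toy sketch of a Continuous Active Learning (CAL) style review loop.
# Hypothetical names throughout; a real TAR tool trains a statistical
# classifier, and a human lawyer plays the role of the oracle function.

def train(labeled):
    # Toy "model": the set of words seen in documents coded relevant so far.
    vocab = set()
    for doc, is_relevant in labeled:
        if is_relevant:
            vocab.update(doc.split())
    return vocab

def score(model, doc):
    # Rank a document by its overlap with the relevant vocabulary.
    words = doc.split()
    return sum(w in model for w in words) / max(len(words), 1)

def cal_review(corpus, oracle, seed_ids, batch_size=2, rounds=2):
    labeled = [(corpus[i], oracle(i)) for i in seed_ids]  # human-coded seed set
    reviewed = set(seed_ids)
    for _ in range(rounds):
        model = train(labeled)
        # Rank all unreviewed documents; the human reviews the top batch,
        # and those new relevance calls feed the next round of training.
        ranked = sorted((i for i in range(len(corpus)) if i not in reviewed),
                        key=lambda i: score(model, corpus[i]), reverse=True)
        for i in ranked[:batch_size]:
            labeled.append((corpus[i], oracle(i)))
            reviewed.add(i)
    return {i for i in reviewed if oracle(i)}  # relevant documents found
```

The point of the loop is the feedback: each round the machine re-ranks the unreviewed pile, the human codes the top of the ranking, and training continues until first pass review is done, which is the essence of CAL.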

I recognize the dangers of keeping humans in the loop that Professor Woods points out. That is one reason my teaching has always emphasized quality controls. That’s Step Seven in my semi-automated process of document review, where ZEN stands for Zero Error Numerics (explained in ZEN website here). Note this quality control step uses both robots (algorithms) and humans (lawyers).
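The sampling side of that quality control step can also be sketched simply. This is only an illustrative toy of my own, not the actual ZEN protocol: it runs an elusion-style test, drawing a random sample from the machine’s discard pile for human review and projecting the observed miss rate onto the whole pile.

```python
import random

# Toy sketch of an elusion-style sampling test for quality control.
# A random sample is drawn from the documents the first-pass review marked
# not relevant, a human checks the sample, and the miss rate is projected
# onto the entire discard pile. Names here are illustrative only.

def elusion_estimate(discard_pile, human_check, sample_size, seed=42):
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))
    missed = sum(1 for doc in sample if human_check(doc))
    rate = missed / len(sample)
    # Projected count of relevant documents still hiding in the discard pile.
    return rate, round(rate * len(discard_pile))
```

Note the hybrid character of even this small step: the algorithm picks the sample and does the arithmetic, but a human lawyer makes the relevance call on each sampled document.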

Moreover, as I have discussed in my articles, with human lawyers as the subject matter experts in the machine training (step four), the old “garbage in, garbage out” problem remains. It is even worse if the human is unethical and intentionally games the machine training by calling black white, i.e., intentionally calling a relevant document irrelevant. That’s a danger of any hybrid process of machine training and that’s one reason we have quality controls.

But eventually, the machines will get smart enough to see through intentionally bad training, separate the treasure from the trash, and find all of the highly relevant ESI we need. Our law robots can already do this, to a certain extent. They have been correcting my unintentional errors on relevance calls for years. That is part of the iterative process of active machine learning (steps four, five and six in my hybrid process). Such corrections are expected, and once your ego gets over it, you grow to like it. After all, it’s nice having a helper that can read at the speed of light, well, the speed of electrons anyway, and has perfect recall.

Still, as of today at least, expert lawyers know more about the substantive law than robots do. When the day comes that a computer is the best overall SME for the job, and it surely will, maybe humans can be taken out of the loop. Perhaps they could just serve as an appeal if certain circumstances are met, much like trying to appeal an arbitration award (my new area of study and practice).

Andrew Woods

My conclusion is that you should read the whole article by Professor Andrew Woods and share it with your friends and colleagues. It deserves the attention. Robophobia, 93 U. Colo. L. Rev. 51  (Winter, 2022). I end with a quote of the last paragraph of Andrew’s article. For still more, look for webinars in the future where Andrew and I grapple with these questions. (We met for the first time after I began publishing this series.) I for one would like to know more about his debiasing and design strategies. Now for Andrew’s last word.

In this Article, I explored relatively standard approaches to what is essentially a judgment error. If our policymaking is biased, the first step is to remove the bias from existing rules and policies. The second step might be to inoculate society against the bias—through education and other debiasing strategies. A third and even stronger step might be to design situations so that the bias is not allowed to operate. For example, if people tend to choose poorer performing human doctors over better performing robot alternatives, a strong regulatory response would be to simply eliminate the choice. Should humans simply be banned from some kinds of jobs? Should robots be required? These are serious questions. If they sound absurd, it is because our conversation about the appropriate role for machines in society is inflected with a fear of and bias against machines.

Robophobia: Great New Law Review Article – Part 2

May 26, 2022
Professor Andrew Woods

This article is Part Two of my review of Robophobia by Professor Andrew Woods. See here for Part 1.

I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51  (Winter, 2022). Footnotes omitted.

Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.

Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective. In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .

In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost. 

Woods, Id. at pgs. 55-56

Bias Against AI in Electronic Discovery

Electronic discovery is a good example of the regular preference for worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues.

  • Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
  • Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
  • Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (Judge Andrew Peck) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”).
  • Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
  • Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021),
  • David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
  • Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up any sort of honest analysis of cost or quality”).
  • Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015) (Judge Andrew Peck).
  • See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
  • Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
  • Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
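The comparative claims in this literature rest on standard information-retrieval metrics: recall (how much of the relevant material a review actually found) and precision (how much of what was found is actually relevant). The sketch below is purely illustrative, with made-up document IDs, and the numbers are hypothetical, loosely echoing the Blair & Maron finding that keyword-based retrieval can find far fewer relevant documents than reviewers believe:

```python
# Illustrative only: the recall/precision arithmetic used to compare
# human or keyword review with technology-assisted review (TAR).

def recall(retrieved, relevant):
    """Fraction of the truly relevant documents that the review found."""
    if not relevant:
        return 0.0
    return len(retrieved & relevant) / len(relevant)

def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are actually relevant."""
    if not retrieved:
        return 0.0
    return len(retrieved & relevant) / len(retrieved)

# Hypothetical collection: 100 truly relevant documents.
relevant_docs = set(range(100))
keyword_hits = set(range(20)) | {200, 201}  # keyword search finds 20 of them
tar_hits = set(range(85)) | {300}           # a trained classifier finds 85

print(f"keyword recall: {recall(keyword_hits, relevant_docs):.2f}")  # 0.20
print(f"TAR recall:     {recall(tar_hits, relevant_docs):.2f}")      # 0.85
```

The point of the exercise is that a review that "feels" thorough can still have low recall; only measurement against a sample of known relevant documents reveals it.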

Robophobia Article Is A First

Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front and I believe it. The Article catalogs the different ways that we now misjudge poor algorithms. The evidence of our robophobia is overwhelming, but, before Professor Woods' work, it had all been in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.

His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”

Part II of the Article – Types of Robophobia

Professor Woods identifies five different types of robophobia.

  • Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
  • Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
  • Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
  • Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy-looking human?
  • Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.

Part III – Explaining Robophobia

Professor Woods considers seven different explanations for robophobia.

  • Fear of the Unknown
  • Transparency Concerns
  • Loss of Control
  • Job Anxiety
  • Disgust
  • Gambling for Perfect Decisions
  • Overconfidence in Human Decisions

I’m limiting my review here, since the explanations for most of these should be obvious by now and I want to limit the length of my blog. But the disgust explanation was not one I expected and a short quote by Andrew Woods might be helpful, along with the robot photo I added.

Uncannily Creepy Robot

[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.

For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself. Uncanny Valley by Popular Science 2008 (old, but pretty disgusting). Here’s a more recent and detailed one, pretty good, by a popular twenty-something with pink hair. Why is this image creepy? by TUV 2022.

Parts IV and V – The Cases For and Against Robophobia

Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before, for why robophobia is bad for us. This is probably the heart of the article and I suggest you read this part for sure.

Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:

Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.

In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.

We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.

Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]

The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods' policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” a word not previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.

Robophobia: Great New Law Review Article – Part 1

May 19, 2022

This blog is the first part of my review of one of the most interesting law review articles I’ve read in a long time: Woods, Andrew K., Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Robophobia provides the first in-depth analysis of human prejudice against smart computer technologies and its policy implications. Robophobia is the next generation of technophobia, now focusing on the human fear of replacing human decision makers with robotic ones. For instance, I love technology, but am still very reluctant to let an AI drive my car. My son, on the other hand, loves to let his Tesla take over and do the driving while my knuckles go white. Then he plays the car’s damn fart noises and other joke features and I relax. Still, I much prefer a human at the wheel. This kind of anxiety about advanced technology decision making is at the heart of the law review article.

Technophobia and its son, robophobia, are psychological anxieties that electronic discovery lawyers know all too well, often from first-hand experience working with other lawyers. This is especially true for those who work with active machine learning. Ediscovery lawyers tire of hearing that keyword search and predictive coding are not to be trusted, and that humans reviewing every document is the gold standard. Professor Woods goes into AI and ediscovery a little bit in Robophobia. He cites our friends Judge Andrew Peck, Maura Grossman, Doug Austin and others. But that is only a small part of this interesting technology policy paper. It argues that a central question now facing humanity is when and where to delegate decision-making authority to machines. That decision should be based on facts and reason, not on emotions and unconscious prejudices.

Ralph and Robot

To answer this central question we need to recognize and overcome our negative stereotypes and phobias about AI. Robots are not all bad. Neither are people. Both have special skills and abilities and both make mistakes. As should be mentioned right away, Professor Woods in Robophobia uses the term “robot” very broadly to include all kinds of smart algorithms, not just actual robots. We need to overcome our robot phobias. Algorithms are already better than people at a huge array of tasks, yet we reject them for not being perfect. This must change.

Robophobia is a decision-making bias. It interferes with our ability to make sensible policy choices. The law should help society decide when, and what kinds of, decisions should be delegated to robots, balancing the risk of using a robot against the risk of not using one. In my view, we need to overcome this bias now, and delegate responsibly, so that society can survive the current danger of misinformation overload. See, e.g., my blog, Can Justice Survive the Internet? Can the World? It’s Not a Sure Thing. Look Up!

This meta-review article (a review of a law review) is written in three parts, each fairly short (for me), largely because the Robophobia article itself is over 16,000 words and has 308 footnotes. My meta-review will focus on the part I know best, the use of artificial intelligence in electronic discovery. The summary will include my typical snarky remarks to keep you somewhat amused, and several cool quotes of Woods, all in an attempt to entice some of you to take the deep dive and read Professor Woods’ entire article. Robophobia is all online and free to access at the University of Colorado Law Review website.

Professor Andrew Woods


Andrew Keane Woods is a Professor of Law at the University of Arizona College of Law. He is a young man with an impressive background. First the academics, since, after all, he is a Professor:

  • Brown University, A.B. in Political Science, magna cum laude, 2002;
  • Harvard Law School, J.D., cum laude (2007);
  • University of Cambridge, Ph.D. in Politics and International Studies (2012);
  • Stanford University, Postdoctoral Fellow in Cybersecurity (2012—2014).

As to writing, he has at least twenty law review articles and book chapters to his credit. Aside from Robophobia, some of the most interesting ones I see on his resume are:

  • Artificial Intelligence and Sovereignty, DATA SOVEREIGNTY ALONG THE SILK ROAD (Anupam Chander & Haochen Sun eds., Oxford University Press, forthcoming);
  • Internet Speech Will Never Go Back to Normal (with Jack Goldsmith), THE ATLANTIC (Apr. 25, 2020);
  • Our Robophobia, LAWFARE (Feb. 19, 2020);
  • Keeping the Patient at the Center of Machine Learning in Healthcare, 20 AMERICAN JOURNAL OF BIOETHICS 54 (2020) (w/ Chris Robertson, Jess Findley, Marv Slepian);
  • Mutual Legal Assistance in the Digital Age, THE CAMBRIDGE HANDBOOK OF SURVEILLANCE LAW (Stephen Henderson & David Gray eds., Cambridge University Press, 2020);
  • Litigating Data Sovereignty, 128 YALE LAW JOURNAL 328 (2018).

Bottom line, Woods is a good researcher (of course he had help from a zillion law students, whom he names and thanks), and a deep thinker on AI, technology, privacy, politics and social policies. His opinions deserve our careful consideration. In my language, his insights can help us to move beyond mere information to genuine knowledge, perhaps even some wisdom. See, e.g., my prior blogs, Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015); AI-Ethics: Law, Technology and Social Values (website).

Quick Summary of Robophobia

Bad Robot?

Robots – machines, algorithms, artificial intelligence – already play an important role in society. Their influence is growing very fast. Robots are already supplementing or even replacing some human judgments. Many are concerned with the fairness, accuracy, and humanity of these systems. This is rightly so. But, at this point, the anxiety about machine bias is crazy high. The concerns are important, but they almost always run in one direction. We worry about robot bias against humans. We do not worry about human bias against robots. Professor Woods shows that this is a critical mistake.

It is not an error because robots somehow inherently deserve to be treated fairly, although that may someday be true. It is an error because our bias against nonhuman deciders is bad for us humans. A great example Professor Woods provides is self-driving cars. It would be an obvious mistake to reject all self-driving cars merely because one causes a single fatal accident. Yet this is what happened, for a while at least, when an Uber self-driving car crashed into a pedestrian in Phoenix. See, e.g., FN 71 of Robophobia: Ryan Randazzo, Arizona Gov. Doug Ducey Suspends Testing of Uber Self-Driving Cars, Ariz. Republic (Mar. 26, 2018). This kind of one-sided perfection bias ignores the fact that humans cause forty thousand traffic fatalities a year, with an average of three deaths every day in Arizona alone. We tolerate enormous risk from our fellow humans, but almost none from machines. That is flawed, biased thinking. Yet, even rah-rah techno promoters like me suffer from it.

Ralph hoping a human driver shows up soon.

Professor Woods shows that while there is a substantial literature concerned with algorithmic bias against humans, the opposite bias, human bias against algorithms, has until now been ignored by scholars. Yet we routinely prefer worse-performing humans over better-performing robots. Woods points out that we do this on our roads, in our courthouses, in our military, and in our hospitals. As he puts it in his Highlights section, which precedes the Robophobia article itself, and which I am liberally paraphrasing in this Quick Summary: “Our bias against robots is costly, and it will only get more so as robots become more capable.”

Robophobia not only catalogs the many different forms of anti-robot bias that already exist, which he calls a taxonomy of robophobia, it also suggests reforms to curtail the harmful effects of that bias. Robophobia provides many good reasons to be less biased against robots. We should not be totally trusting, mind you, but less biased. It is in our own best interests to do so. As Professor Woods puts it, “We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers.”

A Note About “Robot” Terminology

Before we get too deep into Robophobia, we need to be clear about what Professor Woods means here. We need to define our terms. Woods does this in the first footnote where he explains as follows (HAL image added):

The article is concerned with human judgment of automated decision-makers, which include “robots,” “machines,” “algorithms,” or “AI.” There are meaningful differences between these concepts and important line-drawing debates to be had about each one. However, this Article considers them together because they share a key feature: they are nonhuman deciders that play an increasingly prominent role in society. If a human judge were replaced by a machine, that machine could be a robot that walks into the courtroom on three legs or an algorithm run on a computer server in a faraway building remotely transmitting its decisions to the courthouse. For present purposes, what matters is that these scenarios represent a human decider being replaced by a nonhuman one. This is consistent with the approach taken by several others. See, e.g., Eugene Volokh, Chief Justice Robots, 68 DUKE L.J. 1135 (2019) (bundling artificial intelligence and physical robots under the same moniker, “robots”); Jack Balkin, 2016 Sidley Austin Distinguished Lecture on Big Data Law and Policy: The Three Laws of Robotics in the Age of Big Data, 78 OHIO ST. L.J. 1217, 1219 (2017) (“When I talk of robots … I will include not only robots – embodied material objects that interact with their environment – but also artificial intelligence agents and machine learning algorithms.”); Berkeley Dietvorst & Soaham Bharti, People Reject Algorithms in Uncertain Decision Domains Because They Have Diminishing Sensitivity to Forecasting Error, 31 PSYCH. SCI. 1302, 1314 n.1 (2020) (“We use the term algorithm to describe any tool that uses a fixed step-by-step decision-making process, including statistical models, actuarial tables, and calculators.”). This grouping contrasts scholars who have focused explicitly on certain kinds of nonhuman deciders. See, e.g., Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 CALIF. L. REV. 513, 529 (2015) (focusing on robots as physical, corporeal objects that satisfy the “sense-think-act” test as compared to, say, a “laptop with a camera”).

I told you Professor Woods was a careful scholar, but I wanted you to see for yourself with a full quote of footnote one. I promise to exclude footnotes and his many string cites going forward in this blog article, but I do intend to frequently quote his insightful, policy-packed language. Did you note the citation to Chief Justice Robots in his explanation of “robots”? I will end this first part of my review of Robophobia with a side excursion into the real Chief Justice, John Roberts. It provides a good example of irrational robot fears and insight into the Chief Justice himself, which is something I’ve been considering a lot lately. See, e.g., my recent article The Words of Chief Justice Roberts on JUDICIAL INTEGRITY Suggest the Supreme Court Should Step Away from the Precipice and Not Overrule ‘Roe v Wade’.

Chief Justice Roberts Told High School Graduates in 2018 to “Beware the Robots”

The Chief Justice gave a very short speech at his daughter’s private high school graduation. There he demonstrated a bit of robot anxiety, but did so in an interesting manner. It bears some examination before we get into the substance of Woods’ Robophobia article. For more background on the speech see, e.g., Debra Cassens Weiss, ‘Beware the robots,’ chief justice tells high school graduates (June 6, 2018). Here are the excerpted words of Chief Justice John Roberts:

Beware the robots! My worry is not that machines will start thinking like us. I worry that we will start thinking like machines. Private companies use artificial intelligence to tell you what to read, to watch and listen to, based on what you’ve read, watched and listened to. Those suggestions can narrow and oversimplify information, stifling individuality and creativity.

Any politician would find it very difficult not to shape his or her message to what constituents want to hear. Artificial intelligence can change leaders into followers. You should set aside some time each day to reflect. Do not read more, do not research more, do not take notes. Put aside books, papers, computers, telephones. Sit, perhaps just for a half hour, and think about what you’re learning. Acquiring more information is less important than thinking about the information you have.

Aside from the robot fear part, which was really just an attention-grabbing speech device, I could not agree more with his main point. We should move beyond mere information; we should take time to process the information and subject it to critical scrutiny. We should transform from mere information gatherers into knowledge makers. My point exactly in Information → Knowledge → Wisdom: Progression of Society in the Age of Computers (2015). You could also compare this progression with an ediscovery example, moving from just keyword search to predictive coding.

Part Two of my review of Robophobia is coming soon. In the meantime, take a break and think about any fears you may have about AI. Everyone has some. Would you let the AI drive your car? Select your documents for production? Are our concerns about killer robots really justified, or maybe just the result of media hype? For more thoughts on this, see AI-Ethics.com. And yes, I’ll be Baaack.
