Waymo v. Uber, Hide-the-Ball Ethics and the Special Master Report of December 15, 2017

December 17, 2017

The biggest civil trial of the year was delayed by U.S. District Court Judge William Alsup due to e-discovery issues that arose at the last minute. This happened in a trade-secret case brought by Google’s self-driving car division, Waymo, against Uber. Waymo LLC v. Uber Techs., Inc. (Waymo I), No. 17-cv-00939-WHA (JSC) (N.D. Cal. November 28, 2017). The trial was scheduled to begin in San Francisco on December 4, 2017 (it had already been delayed once by another discovery dispute). The trial was delayed at Waymo’s request to give it time to investigate a previously undisclosed, inflammatory letter written by an attorney for Richard Jacobs. Judge Alsup had just been told of the letter by the U.S. Attorney’s Office for the Northern District of California, and he immediately shared it with both Waymo’s and Uber’s attorneys.

At the November 28, 2017, hearing Judge Alsup reportedly accused Uber’s lawyers of withholding this evidence, forcing him to delay the trial until Waymo’s lawyers could gather more information about the contents of the letter. NYT (11/28/17). The NY Times reported Judge Alsup as stating:

I can no longer trust the words of the lawyers for Uber in this case … You should have come clean with this long ago … If even half of what is in that letter is true, it would be an injustice for Waymo to go to trial.

NYT (11/28/17).

Judge Alsup was also reported to have said to Uber’s lawyers in the open court hearing of November 28, 2017:

You’re just making the impression that this is a total coverup … Any company that would set up such a surreptitious system is just as suspicious as can be.

CNN Tech (11/28/17).

Judge Alsup was upset by both the cover-up of the Jacobs letter and the contents of the letter. The letter essentially alleged a widespread criminal conspiracy to hide and destroy evidence in all of Uber’s litigation, not just the Waymo case, by various means, including the use of: (1) specialized communication tools that encrypt messages and make them ephemeral by self-destruction, such as disappearing instant messages; (2) personal electronic devices and accounts not traceable to the company; and (3) fake attorney-client privilege claims. Judge Alsup reportedly opened the hearing on the request for continuance by admonishing attorneys that counsel in future cases can be “found in malpractice” if they do not turn over evidence from such specialized tools. Fortune (12/2/17). That is a fair warning to us all. For instance, do any of your key custodians use specialized self-destructing communications tools like Wickr or Telegram?

Qualcomm Case All Over Again?

The alleged hide-the-email conduct here looks like it might be a high-tech version of the infamous Qualcomm case in San Diego. Qualcomm Inc. v. Broadcom Corp., No. 05-CV-1958-B(BLM) Doc. 593 (S.D. Cal. Aug. 6, 2007); Qualcomm, Inc. v. Broadcom Corp., 2008 WL 66932 (S.D. Cal. Jan. 7, 2008) (Plaintiff Qualcomm intentionally withheld from production several thousand important emails, a fact not revealed until cross-examination at trial of one honest witness).

The same rules of professional conduct are, or may be, implicated in both Qualcomm and Waymo (the citations below are to the ABA Model Rules).

RULE 3.3 CANDOR TOWARD THE TRIBUNAL
(a) A lawyer shall not knowingly:
(1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer; . . .
(b) A lawyer who represents a client in an adjudicative proceeding and who knows that a person intends to engage, is engaging or has engaged in criminal or fraudulent conduct related to the proceeding shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal.

RULE 3.4 FAIRNESS TO OPPOSING PARTY AND COUNSEL
A lawyer shall not:
(a) unlawfully obstruct another party’s access to evidence or otherwise unlawfully alter, destroy, or conceal a document or other material that the lawyer knows or reasonably should know is relevant to a pending or a reasonably foreseeable proceeding; nor counsel or assist another person to do any such act.

As we will see, it looks so far as if Uber and its in-house attorneys, not Uber’s actual counsel of record, are the ones who knew about the withheld documents and the destruction scheme. It all gets a little fuzzy with so many law firms involved, but so far Uber’s counsel of record claim to have been as surprised by the letter as Waymo’s attorneys, even though the letter was directed to Uber’s in-house legal counsel.

Sarbanes-Oxley Violations?

In addition to possible ethics violations in Waymo v. Uber, a contention was made by the attorneys for Uber consultant, Richard Jacobs, that Uber was hiding evidence in violation of the Sarbanes-Oxley Act of 2002, Pub. L. 107-204, § 802, 116 Stat. 745, 800 (2002), which states in relevant part:

whoever knowingly alters, destroys, mutilates, conceals, covers up, falsifies, or makes a false entry in any record, document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper administration of any matter within the jurisdiction of any department or agency of the United States or any case filed under title 11, or in relation to or contemplation of any such matter or case, shall be fined under this title, imprisoned not more than 20 years, or both.

18 U.S.C. § 1519. Sarbanes-Oxley applies to private companies and has a broad reach, not limited to litigation that has already been filed, much less to formal discovery requests. Section 1519 “covers conduct intended to impede any federal investigation or proceeding including one not even on the verge of commencement.” Yates v. United States, – U.S. –, 135 S.Ct. 1074, 1087 (2015).

The Astonishing “Richard Jacobs Letter” by Clayton Halunen

The alleged ethical and legal violations in Waymo LLC v. Uber Techs., Inc. are based upon Uber’s failure to produce a “smoking gun” type of letter (email) and the contents of that letter. Although it is referred to as the Jacobs letter, it was actually written by Clayton D. Halunen of Halunen Law, an attorney for Richard Jacobs, a former Uber employee and current Uber consultant. The 37-page letter, dated May 5, 2017, purports to represent how Jacobs would testify in support of employment claims he was making against Uber. It was provided to Uber’s in-house employment counsel, Angela Padilla, in lieu of an interview of Jacobs that she was seeking.

A redacted copy of the letter dated May 5, 2017, has been released to the public and is very interesting for many reasons. I did not add the yellow highlighting seen in this letter and am unsure who did.

In fairness to Uber, I point out that the letter states on its face, in all caps, that it is a RULE 408 CONFIDENTIAL COMMUNICATION FOR SETTLEMENT PURPOSES ONLY VIA EMAIL AND U.S. MAIL, a fact that does not appear to have been argued as grounds for Uber not producing the letter to Waymo in Waymo v. Uber. That may be because Federal Rule of Evidence 408 states that although such settlement communications are not admissible to “prove or disprove the validity or amount of a disputed claim or to impeach by a prior inconsistent statement or a contradiction,” they are admissible “for another purpose, such as proving a witness’s bias or prejudice, negating a contention of undue delay, or proving an effort to obstruct a criminal investigation or prosecution.” Also, Rule 408 pertains to admissibility, not discoverability, and Rule 26(b)(1) of the Federal Rules of Civil Procedure still says that “Information within this scope of discovery need not be admissible in evidence to be discoverable.”

The letter claims that Richard Jacobs has a background in military intelligence, essentially a spy, although those portions of the letter were heavily redacted. I tend to believe this for several reasons, including the fact that I could not find a photograph of Jacobs anywhere. That is very rare. The letter goes on to describe the “unlawful activities within Uber’s ThreatOps division.” Jacobs Letter at pg. 3. The illegal activities included fraud, theft, hacking, espionage and “knowing violations” of Sarbanes-Oxley by:

Uber’s efforts to evade current and future discovery requests, court orders, and government investigations in violation of state and federal law as well as ethical rules governing the legal profession. Clark devised training and provided advice intended to impede, obstruct, or influence the investigation of several ongoing lawsuits against Uber and in relation to or contemplation of further matters within the jurisdiction of the United States. …

Jacobs then became aware that Uber, primarily through Clark and Henley, had implemented a sophisticated strategy to destroy, conceal, cover up, and falsify records or documents with the intent to impede or obstruct government investigations as well as discovery obligations in pending and future litigation. Besides violating 18 U.S.C. § 1519, this conduct constitutes an ethical violation.

Pages 5-6 of the Jacobs Letter. The practices included the alleged mandatory use of a program called WickrMe, which “programs messages to self-destruct in a matter of seconds to no longer than six days. Consequently, Uber employees cannot be compelled to produce records of their chat conversations because no record is retained.” Letter pg. 6.

Remember, Judge Alsup reportedly began the trial continuance hearing of November 28, 2017, by admonishing attorneys that in future cases they could be “found in malpractice” if they do not turn over evidence from such specialized communications tools. Fortune (12/2/17). There are a number of other secure messaging apps in addition to Wickr that offer encryption and self-destruct features.

There are also services on the web that will send self-destructing messages for you, such as PrivNote. This is a rapidly changing area so do your own due diligence.
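To make the preservation problem concrete, here is a minimal Python sketch of how an ephemeral messenger behaves from a records standpoint. It is purely illustrative and assumes nothing about Wickr’s actual design; the class name and TTL values are hypothetical. The point is simply that once a message’s time-to-live expires, there is nothing left to collect, review or produce.

```python
import time


class EphemeralStore:
    """Toy model of an ephemeral messenger: every message carries a
    time-to-live (TTL) and expired messages are purged on read."""

    def __init__(self):
        self._messages = []  # list of (expires_at, text) tuples

    def send(self, text, ttl_seconds):
        # Tools of this kind reportedly let senders set TTLs ranging
        # from a few seconds up to about six days.
        self._messages.append((time.time() + ttl_seconds, text))

    def read_all(self):
        now = time.time()
        # Drop anything past its TTL before returning what is left.
        self._messages = [(exp, txt) for exp, txt in self._messages if exp > now]
        return [txt for _, txt in self._messages]


store = EphemeralStore()
store.send("Message that would otherwise be discoverable", ttl_seconds=1)
time.sleep(2)
print(store.read_all())  # [] -- nothing remains to preserve or produce
```

Contrast that with ordinary e-mail or chat systems, where the record persists on a server until someone affirmatively deletes it. That difference is exactly what makes these tools attractive to anyone seeking to evade discovery, and exactly what makes their use a preservation risk.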

Uber CEO Dara Khosrowshahi reacted to the November 29, 2017 hearing and Judge Alsup’s comments by tweeting later that day that Uber employees formerly used Wickr and a similar program, Telegram, but no longer do:

True that Wickr, Telegram were used often at Uber when I came in. As of Sept 27th I directed my teams NOT to use such Apps when discussing Uber-related business.

This seems like a smart, ethical move by Uber’s new CEO in a sometimes ethically challenged Silicon Valley culture. The culture is way too filled with selfish Ayn Rand devotees for my taste. I hope this leads to large-scale housekeeping by Khosrowshahi. Matt Kallman, a spokesman for Uber, said after the public release of the letter:

While we haven’t substantiated all the claims in this letter — and, importantly, any related to Waymo — our new leadership has made clear that going forward we will compete honestly and fairly, on the strength of our ideas and technology.

NYT (12/15/17). You know the old saying about Fool me once …

Back to the Jacobs letter, it also alleges at pgs. 6-9 the improper use of fake attorney-client privilege to hide evidence:

Further, Clark and Henley directly instructed Jacobs to conceal documents in violation of Sarbanes-Oxley by attempting to “shroud” them with attorney-client privilege or work product protections. Clark taught the ThreatOps team that if they marked communications as “draft,” asked for a legal opinion at the beginning of an email, and simply wrote “attorney-client privilege” on documents, they would be immune from discovery.

The letter also alleges the intentional use of personal computers and accounts to conduct Uber business that Uber wanted to hide from disclosure. Letter pgs. 7-8.

The letter at pages 9-26 then details facts purporting to show illegal intelligence gathering activities by Uber on a global scale, violating multiple state and federal laws, including:

  • Economic Espionage Act
  • Uniform Trade Secret Act
  • California Uniform Trade Secrets Act
  • Racketeer Influenced and Corrupt Organizations Act (RICO)
  • Wire Fraud law at 18 U.S.C. § 1343, and California Penal Code § 528.5
  • Wiretap Act at 18 U.S.C. § 2510 et seq.
  • Computer Fraud and Abuse Act (CFAA)
  • Foreign Corrupt Practices Act (FCPA)

Special Master John L. Cooper

Judge Alsup referred the discovery issues raised by Uber’s non-disclosure of the “Jacobs Letter” to the Special Master handling many of the discovery disputes in this case, John L. Cooper of Farella Braun + Martel LLP. The Special Master Report with Cooper’s recommendations concerning the issues raised by the late disclosure of the letter is dated December 15, 2017. Cooper’s report is a public record that can be found here. This is his excellent introduction to the dispute, found at pages 1-2 of his report.

The trial of this trade secrets case was continued for a second time after the belated discovery of inflammatory communications by a former Uber employee came to light outside the normal discovery process. On April 14, 2017, Richard Jacobs sent a resignation e-mail to Uber’s then-CEO and then-general counsel, among others, accusing Uber of having a dedicated division with a “mission” to “steal trade secrets in a series of code-named campaigns” and engaging in other allegedly wrongful or inappropriate conduct. A few weeks later, on May 5, 2017, Mr. Jacobs’ lawyer, Clayton Halunen, sent a letter to Angela Padilla, Uber’s Vice President and Deputy General Counsel for Litigation and Employment. That 37-page letter expanded in some detail on Mr. Jacobs’ e-mailed accusations regarding clandestine and concerted efforts to steal competitors’ trade secrets, including those belonging to Waymo. It also addressed allegations touching on Anthony Levandowski’s alleged downloading of Waymo trade secrets. The Jacobs Letter laid out what his lawyer described as a set of hardware and software programs, and usage protocols that would help Uber to allegedly carry out its thefts and other corporate espionage in secret and with minimized risk of evidence remaining on Uber servers or devices. By mid-August Mr. Jacobs and Uber settled their disputes and executed a written settlement agreement on August 14-15, 2017.

Despite extensive discovery and multiple Court orders to produce an extensive amount of information related to the accusations in the Jacobs Materials, Waymo did not learn of their existence until after November 22, when the Court notified the parties that a federal prosecutor wrote a letter to this Court disclosing the gist of the Jacobs allegations.

The Special Master’s report then goes on to analyze whether Uber was obligated to produce the Jacobs Materials in response to any of the Court’s prior orders or Waymo’s discovery requests. In short, Master Cooper concluded that they were not directly covered by any of the prior court orders, but the Jacobs Letter was responsive to certain discovery requests propounded by Waymo, and Uber was obligated to produce it in response to those requests.

Special Master Cooper goes on to describe at page 7 of his report the Jacobs letter by Halunen. To state the obvious, this is clearly a “hot” document with implications that go well beyond this particular case.

That 37-page letter set forth multiple allegations relating to alleged efforts by Uber individuals and divisions. Among other things, the letter alleges that Uber planned to use certain hardware devices and software to conceal the creation and destruction of corporate records that, as a result, “would never be subject to legal discovery.” See ECF No. 2307-2 at 7. These activities, Mr. Jacobs’ lawyer asserted, “implicate ongoing discovery disputes, such as those in Uber’s litigation with Waymo.” Id. at 9. He continued:

Specifically, Jacobs recalls that Jake Nocon, Nick Gicinto, and Ed Russo went to Pittsburgh, Pennsylvania to educate Uber’s Autonomous Vehicle Group on using the above practices with the specific intent of preventing Uber’s unlawful schemes from seeing the light of day.

Jacobs’ observations cast doubt on Uber’s representation in court proceedings that no documents evidencing wrongdoing can be found on Uber’s systems and that other communications are actually shielded by the attorney-client privilege. Aarian Marshall, Judge in Waymo Dispute Lets Uber’s Self-driving Program Live—for Now, wired.com (May 3, 2017 at 8:47p.m.) (“Lawyers for Waymo also said Uber had blocked the release of 3,500 documents related to the acquisition of Otto on the grounds that they contain privileged information …. Waymo also can’t quite pin down whether Uber employees saw the stolen documents or if those documents moved anywhere beyond the computer Levandowski allegedly used to steal them. (Uber lawyers say extensive searches of their company’s system for anything connected to the secrets comes up nil.)”), available at (citation omitted).

Id. at 9-10.

Uber Attorney Angela Padilla

Angella Padilla was Uber’s Vice President and Deputy General Counsel for Litigation and Employment. She testified on these issues. Here is Special Master Cooper’s summary at pages 8-9 of his report:

Ms. Padilla testified in this Court that she read the letter “in brief” and turned it over to other Uber attorneys, including Ms. Yoo, to begin an internal investigation. Nov. 29, 2017 Hr’g Tr. at 15:17-24. The letter also made its way to two separate committees of Uber’s Board of Directors, including the committee that was or is overseeing special litigation, including this case and the Jacobs matter. Id. at 20:10-13; 26:23-25. On June 27, Uber disclosed the allegations in the Jacobs Letter to the U.S. Attorney for the Northern District of California. Id. at 27:20-14. It disclosed the Jacobs Letter itself on or around September 12 to the same U.S. Attorney’s Office, to another U.S. Attorney, in the Southern District of New York, and to the U.S. Department of Justice in Washington. Id. at 28:4-10. Ms. Padilla testified that Uber made these disclosures to multiple prosecutors “to take the air out of [Jacobs’] extortionist balloon.” Id. at 28:18-19. Nearly one month before that distribution of the letter to federal prosecutors, on August 14, Uber settled with Mr. Jacobs—the terms of which included $4.5 million in compensation to Jacobs and $3 million to his lawyers. See id. at 62:6-63:12.

I have to pause here for a minute because the settlement amount takes my breath away: not only the payment of $4.5 million to Richard Jacobs, who had a salary of $130,000 per year, but also the additional payment of $3 million to his lawyers. That is an incredible sum for writing a couple of letters, although I am sure they would claim to have put much more into their representation than meets the eye.

Other Attorneys for Uber Involved

Back to Special Master Cooper’s summary of the testimony of Uber attorney Padilla and other facts in the record about attorney knowledge of the “smoking gun” Jacobs letter (footnotes omitted):

Uber distributed the Jacobs E-Mail to two of Uber’s counsel of record at Morrison Foerster (“MoFo”) in this case. See Dec. 4, 2017 Hr’g Tr. at 46:1-47:5. Other MoFo attorneys directly involved in this case and related discovery issues e-mailed with other MoFo attorneys in late April about “Uber’s ediscovery systems regarding potential investigation into Jacobs resignation letter.” See Waymo Ex. 21.

None of the Uber outside counsel working on this case got a copy of the Jacobs Letter. Neither did the two Uber in-house lawyers who were or are handling this case; Ms. Padilla testified that she did not send it to them. Nov. 29, 2017 Hr’g Tr. at 47:8-16. By late June, some attorneys from Boies Schiller and Flexner, also counsel in this matter for Uber, had discussions with other outside counsel and Ms. Padilla about issues arising from the internal investigation triggered by the Jacobs Materials. See Waymo Ex. 20, Entries 22-22(h).

So now you know the names of the attorneys involved, and not involved, according to Special Master Cooper at page 9 of his report. Apparently none of the actual counsel of record knew about the letter. I would have to assume, and I think the court will too, that this was intentional. It was so clever as to be obvious, or, as the British would say, too clever by half.

U.S. Attorney Notifies Judge Alsup of the Jacobs Letter

To complete the procedural background, here is what happened next leading to the referral to the Special Master. Note that a U.S. Attorney taking action like this to notify a District Court Judge of a piece of evidence is extraordinary, especially to do so just before a trial. Judge Alsup said that he had never had such a thing happen in his courtroom. The U.S. Attorney for the Northern District of California is Brian Stretch. Obviously he was concerned about the fairness of Uber’s actions. In my opinion this was a good call by Stretch.

On November 22, 2017, the U.S. Attorney for the Northern District of California notified this Court of the Jacobs allegations and specifically referenced the account Jacobs put in his letter about the efforts to keep the Ottomotto acquisition secret. See ECF No. 2383. The Court on the same day issued an order disclosing receipt of the letter from the U.S. Attorney and asked the parties to inform the Court about the extent of any prior disclosure of the Jacobs allegations. See ECF Nos. 2260-2261. After continuing the trial date in light of the parties’ responses to that query, the Court on December 4, 2017, ordered the Special Master “to determine whether and to what extent, including the history of this action and both sides’ past conduct, defendants were required to earlier produce the Jacobs letter, resignation email, or settlement agreement, or required to provide any information in those documents in response to interrogatories, Court orders, or other agreements among counsel.” ECF No. 2334, 2341.

Special Master report at pgs. 9-10.

Special Master Cooper’s Recommended Ruling

Master Cooper found that the Richard Jacobs letter was responsive to two of Waymo’s requests to produce: RFP 29 and RFP 73. He rejected Uber’s argument that the letter was not responsive to any request, an argument that must have been difficult to make concerning a document this hot. Uber tried to make the argument seem more reasonable by saying that even if the letter was “generally relevant,” it was not responsive. It then cited cases standing for the proposition that a party has no duty to produce relevant documents it does not intend to rely on, namely documents adverse to its position, unless they are specifically requested. Here is the conclusion of that argument, quoted from page 16 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.

Congress has specified in Rule 26(a)(ii) what documents must be unilaterally produced, and they are only those that a party “may use to support its claims or defenses.” Thus, a party cannot use a document against an adversary at trial that the party failed to disclose. However, Rule 26 very pointedly does not require the production of any documents other than those that a party plans to use “to support” its claims. Obviously, Uber is not seeking to use any of the documents at issue to support its claims. If Waymo believes that this rule should be changed, that is an issue they need to address with Congress, not with the Court.

Master Cooper did not address that argument because he found the documents were in fact both relevant and directly responsive to two of Waymo’s requests for production.

Uber’s attorneys also made what I consider a novel argument that even if the Jacobs letter was found to be responsive, they still did not have to produce it because, get this, it did not include any of the keywords they had agreed to use to search for documents in those categories. Incredible. What difference does that make if they knew about the document anyway? Their client, Uber, specifically including in-house counsel Ms. Padilla, clearly knew about it. The letter was addressed to her. Are they suggesting that Uber did not know about the letter because some of their outside counsel did not know about it? Special Master Cooper must have had the same reaction, as he disposed of this argument in short order at page 17 of his report:

Uber argues, that in some scenarios, reliance on search terms is enough to satisfy a party’s obligation to find responsive documents. See, e.g., T.D.P. v. City of Oakland, No. 16-cv-04132-LB, 2017 WL 3026925, at *5 (N.D. Cal. July 17, 2017) (finding certain search terms adequate for needs of case). But I find there are two main reasons why an exclusive focus on the use of search terms is inappropriate for determining whether the Jacobs Letter should have been produced in response to RFP 29 and RFP 73.

First, the parties never reached an agreement to limit their obligation to searching for documents to only those documents that hit on agreed-upon search terms. See Waymo Ex. 5 (Uber counsel telling Waymo during search-term negotiations that “Waymo has an obligation to conduct a reasonable search for responsive documents separate and apart from any search term negotiations”). (Emphasis added)

Second, Uber needed no such help in finding the Jacobs Materials. They were not stowed away in a large volume of data on some server. They were not stashed in some low-level employee’s files. Parties agree to use search terms and to look into the records of the most likely relevant custodians to help manage the often unwieldy process of searching through massive amounts of data. These methods are particularly called for when a party, instead of merely having to look for a needle in a haystack, faces the prospect of having to look for lots of needles in lots of haystacks. This needle was in Uber’s hands the whole time.

I would add that this needle was stuck deep into their hands, such that they were bleeding profusely. Maybe the outside attorneys did not see it, but Uber sure did, and Uber had a duty to advise its attorneys. Uber’s attorneys would have been better off saving their powder for attacking the accuracy of the contents of the Jacobs letter and talking about the fast pace of discovery. They did that, but only as a short concluding argument, almost an afterthought. See pages 16-19 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.
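A short, hypothetical Python sketch may help show why hitting on negotiated search terms cannot be the outer limit of the duty to produce. The terms, file names and snippets below are invented for illustration only; the point is that a document counsel already knows about can be plainly responsive yet contain none of the agreed terms, so term-based culling alone would never surface it.

```python
# Hypothetical search terms from an agreed ESI protocol.
SEARCH_TERMS = {"lidar", "laser", "levandowski", "otto", "trade secret"}

# Hypothetical documents in the client's possession.
DOCUMENTS = {
    "engineering_update.eml": "Weekly update on the lidar sensor test schedule.",
    "jacobs_style_letter.pdf": (
        "This letter describes efforts to conceal records and evade "
        "discovery obligations in pending litigation."
    ),
}


def hits_search_terms(text, terms):
    """Return True if any agreed term appears in the document text."""
    lowered = text.lower()
    return any(term in lowered for term in terms)


for name, text in DOCUMENTS.items():
    status = "promote for review" if hits_search_terms(text, SEARCH_TERMS) else "never reviewed"
    print(f"{name}: {status}")

# engineering_update.eml: promote for review
# jacobs_style_letter.pdf: never reviewed  <- plainly responsive and known
# to the client, yet it contains none of the negotiated terms.
```

That, in essence, is Special Master Cooper’s point: search terms are a tool for finding needles in haystacks, not a safe harbor for a needle already in the producing party’s hands.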

Here is another theoretical argument that Uber’s lawyers threw up and Cooper’s practical response at pages 17-18 of his report:

Uber argues that it cannot be that the mere possession and knowledge of a relevant document must trigger a duty to scrutinize it and see if it matches any discovery requests. It asked at the December 12, 2017, hearing before the Special Master: Should every client be forced to instruct every one of its employees to turn over every e-mail and document to satisfy its discovery obligations to produce relevant and responsive documents? Must every head of litigation for every company regularly confronted with discovery obligations search their files for responsive documents, notwithstanding any prior agreement with the requesting party to search for responsive documents by the use of search terms?

It is not easy, in the abstract, to determine where the line regarding the scope of discovery search should be drawn. But this is not a case involving mere possession of some document. The facts in this case suggest that Ms. Padilla knew of the Jacobs Letter at the time Uber had to respond to discovery requests calling for its production—it certainly was “reasonably accessible.” Mr. Jacobs’ correspondence alleged systemic, institutionalized, and criminal efforts by Uber to conceal evidence and steal trade secrets, and not just as a general matter but also specifically involving the evidence and trade secrets at issue in this case—maybe the largest and most significant lawsuit Uber has ever faced. Ms. Padilla, Uber’s vice president and deputy general counsel for litigation and employment received the Jacobs Materials around the same time that discovery in this case was picking up and around the same time that the Court partially granted Waymo’s requested provisional relief. Shortly after that, Uber told federal prosecutors about the Jacobs allegations and then later sent them a copy of the letter. It sent the materials to outside counsel, including lawyers at MoFo that Uber hired to investigate the allegations. Two separate Uber board committees got involved, including the committee overseeing this case. Uber paid Mr. Jacobs $4.5 million, and his lawyer $3 million, to settle his claims.

The Federal Rules obligate a party to produce known, relevant and reasonably accessible material that on its face is likely to be responsive to discovery requests. RFP 29 and RFP 73 were served on Uber on May 9, just a few days after Ms. Padilla received the Jacobs Letter on May 5. Uber was therefore obligated to conduct a reasonable inquiry into those requests (and all others it received) to see if it had documents responsive to those requests and produce non-privileged responsive documents.

Special Master John Cooper concluded by finding that the “Jacobs letter was responsive to Waymo’s Request for Production No. 29 and Request for Production No. 73, and Uber should have produced it to Waymo in response to those requests.” It was beyond the scope of his assignment as Special Master to determine the appropriate remedy. Uber will now probably challenge this report and Judge William Alsup will rule.

Like everyone else, I expect Judge Alsup will agree with Cooper’s report. The real question is what remedy he will provide to Waymo and what sanctions, if any, he will impose.

Conclusion

At the hearing on the request for a trial delay on November 28, 2017, Judge William Alsup reportedly told Uber’s in-house attorney, Angela Padilla:

Maybe you’re in trouble … This document should have been produced … You wanted this case to go to trial so that they didn’t have the document, then it turns out the U.S. attorney did an unusual thing. Maybe the guy [Jacobs] is a disgruntled employee but that’s not your decision to make, that’s the jury’s.

The Recorder (November 29, 2017).

In response to Angela Padilla saying that Jacobs was just an “extortionist” and that the allegations in his letter were untrue, Judge Alsup reportedly responded:

Here’s the way it looks … You said it was a fantastic BS letter with no merit and yet you paid $4.5 million. To someone like me and people out there, mortals, that’s a lot of money, that’s a lot of money. And people don’t pay that kind of money for BS and you certainly don’t hire them as consultant if you think everything they’ve got to contribute is BS. On the surface it looks like you covered this up.

The Recorder (November 29, 2017).

Judge William Alsup is one of the finest judges on the federal bench today. He is a man of unquestioned integrity and intellectual acumen, a Harvard Law graduate, class of 1971, and a former law clerk for Justice William O. Douglas of the Supreme Court of the United States, 1971-1972. How Judge Alsup reacts to the facts in Waymo LLC v. Uber Techs., Inc., now that he has the report of Special Master Cooper, will likely have a profound impact on e-discovery and legal ethics for years to come.

No matter what actions Judge Alsup takes next, the actions of Uber and its attorneys in this case will be discussed for many years to come. Did the attorneys’ non-disclosure violate Rule of Professional Conduct 3.3, Candor Toward the Tribunal? Did it violate Rule 3.4, Fairness to Opposing Party and Counsel? What about Rule 26(g) of the Federal Rules of Civil Procedure? Other rules of ethics and procedure? Did Uber’s actions violate the Sarbanes-Oxley Act? Other laws? Was it fraud?

Finally, and these are critical questions, did Uber breach its duty to preserve evidence when it knew that litigation was reasonably likely? Did its attorneys do so if they knew of these practices? What sanctions are appropriate for destruction of evidence under Rule 37(e) and the Court’s inherent authority? Should an adverse inference be imposed? A default judgment?

The preservation-related issues are big questions that I suspect Judge Alsup will now address. These issues and his rulings, and those of other judges who will likely face the same issues soon in other cases, will impact many corporations, not just Uber. The use of software such as Wickr and Telegram is apparently already widespread. In what circumstances and for what types of communications may the use of such technologies place a company (or individual) at risk of severe sanctions in later litigation? Personally, I oppose intentionally ephemeral tools, where all information self-destructs, but at the same time I strongly support the right to encryption and privacy. It is a question of balance between openness and truth on the one hand, and privacy and security on the other. How attorneys and judges respond to these competing challenges will impact the quality of justice and life in America for many years to come.

 


Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page collecting the Draft Principles underway by other AI Ethics groups around the world. We added a new set that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). That article lists six proposed principles from the talented Nicolas Economou, the CEO of the electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles into conferences that it will sponsor in the future. This is explained on the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See eg. Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See eg. AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, the CEO of the electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific, legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out as indicated below. Readers are encouraged to consult the full article.) As you can see the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See eg TARcourse.com. The principles have obvious applications in all aspects of society, not just the Law and predictive coding, so their value goes beyond the legal applications here mentioned.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and his mention in the article of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of the physicist David Bohm and the philosopher Martin Buber. That is one reason we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____



More Additions to AI-Ethics.com: Offer to Host a No-Press Conference to Mediate the Current Disputes on AI Ethics, Report on the Asilomar Conference and Report on Cyborg Law

September 24, 2017

This week the Introduction and Mission Statement page of AI-Ethics.com was expanded. I also added two new pages to the AI-Ethics website. The first is a report on the 2017 conference of the Future of Life Institute. The second is a report on Cyborg Law, subtitled Using Physically Implanted AI to Enhance Human Abilities.

AI-Ethics.com Mission
A Conference to Move AI Ethics Talk from Argument to Dialogue

The first of the three missions of AI-Ethics.com is to foster dialogue between the conflicting camps in the current AI ethics debate. We have now articulated a specific proposal on how we intend to do that, namely by hosting a conference to move AI ethics talk from argument to dialogue. I propose to use professional mediators to help the parties reach some kind of basic consensus. I know we have the legal skills to move the feuding leaders from destructive argument to constructive dialogue. The battle of the ethics robots must stop!

In arguments nobody really listens to try to understand the other side. If they hear at all, it is just to analyze and respond, to strike down. The adversarial argument approach only works if there is a fair, disinterested judge to rule and resolve the disputes. In the ongoing disputes between opposing camps in AI ethics there is no judge. There is only public opinion. In dialogue the whole point is to listen and hear the other side’s position. The idea is to build common understanding and perhaps reach a consensus from common ground. There are no winners unless both sides win. Since we have no judges in AI ethics, the adversarial debate now raging is pointless, irrational. It does more harm than good for both sides. Yet this kind of debate continues between otherwise very rational people.

The AI-Ethics Debate page was also updated this week to include the latest zinger. This time the dig was by Google’s head of search and AI, John Giannandrea, and was, as usual, directed against Elon Musk. Check out the page to see who said what. Also see: Porneczi, Google’s AI Boss Blasts Musk’s Scare Tactics on Machine Takeover (Bloomberg 9/19/17).

The bottom line for us now is how to move from debate to dialogue. (I was into that way before Sedona.) For that reason, we offer to host a closed meeting where the two opposing camps can meet and mediate. It will work, but only when the leaders of both sides are willing to at least be in the same room together at the same time and talk this out.

Here is our revised Mission page providing more details of our capabilities. Please let me know if you want to be a part of such a conference or can help make it happen.

We know from decades of legal experience as practicing attorneys, mediators and judges that we can overcome the current conflicts. We use confidential dialogues based on earned trust, understanding and respect. Social media and thirty-second sound bites, which characterize the current level of argument, will never get us there. They have already exacerbated the problem. AI-Ethics.com proposes to host a no-press-allowed conference where people can speak to each other without concern of disclosure. Everyone will agree to maintain confidentiality. Then the biggest problem will be attendance, actually getting the leaders of both sides into a room together to hash this out. Depending on turn-out we could easily have dozens of breakout sessions with professional mediators and dialogue specialists assigned to each group.

The many lawyers already in AI-Ethics.com are well qualified to execute an event like that. Collectively we have experience with thousands of mediations; yes, some of them even involving scientists, top CEOs and celebrities. We know how to keep confidences, build bridges and overcome mistrust. If need be we can bring in top judges too. The current social media sniping that has characterized the AI ethics debate so far should stop. It should be replaced by real dialogue. If the parties are willing to at least meet, we can help make it happen. We are confident that we can help elevate the discussion and attain some levels of beginning consensus. At the very least we can stop the sniping. Write us if you might be able to help make this happen. Maybe then we can move onto agreement and action.

 

 

Future of Life Institute Asilomar Conference

The Future of Life Institute was founded by the charismatic Max Tegmark, author of Life 3.0: Being Human in the Age of Artificial Intelligence (2017). This is a must-read, entry-level book on AI, AI ethics and, as the title indicates, the future of life. Max is an MIT professor and cosmologist. The primary funding for his Institute comes from none other than Elon Musk. The 2017 conference was held in Asilomar, California, and so was named the Asilomar Conference. It looks like a very nice place on the coast to hold a conference.

This is the event where the Future of Life Institute came up with twenty-three proposed principles for AI ethics. They are called, as you might have guessed, the Asilomar Principles. I will be writing about these in the coming months, as they are the most detailed set of principles yet created.

The new web page I created this week reports on the event itself, not the principles. You can learn a lot about the state of the law and AI ethics by reviewing this page and some of the videos shared there of conference presentations. We would like to put on an event like this, only more intimate and closed to press as discussed.

We will keep pushing for a small, confidential, dialogue-based event like this. As mostly lawyers around here, we know a lot about confidentiality and mediation. We can help make it happen. We have some places in Florida in mind for the event that are just as nice as Asilomar, maybe even nicer. We got through Hurricane Irma all right and are ready to go, with or without Musk’s millions to pay for it.

Cyborg Law and Cyber-Humans

The second new page in AI-Ethics.com is a report on Cyborg Law: Using Physically Implanted AI to Enhance Human Abilities. Although we will build and expand on this page in the future, what we have created so far relies primarily upon a recent article and book. The article is by Woodrow Barfield and Alexander Williams, Law, Cyborgs, and Technologically Enhanced Brains (Philosophies 2017, 2(1), 6; doi: 10.3390/philosophies2010006). The book is by the same Woodrow Barfield and is entitled Cyber-Humans: Our Future with Machines (December, 2015). Our new page also includes a short discussion and quote from Riley v. California, 573 U.S. __,  189 L.Ed.2d 430, 134 S.Ct. 2473 (2014).

Cyborg is a term that refers generally to humans with technology integrated into their body. The technology can be designed to restore lost functions, but also to enhance the anatomical, physiological, and information processing abilities of the body. Law, Cyborgs, and Technologically Enhanced Brains.

The lead author of the cited article on cyborg law, Woody Barfield, is an engineer who has been thinking about the problems of cyborg regulation longer than anyone. Barfield was an Industrial and Systems Engineering Professor at the University of Washington for many years. His research focused on the design and use of wearable computers and augmented reality systems. Barfield has also obtained both JD and LLM degrees in intellectual property law and policy. The legal citations throughout his book, Cyber-Humans, make it especially valuable for lawyers. Look for more extended discussions of Barfield’s work here in the coming months. He is the rare engineer who also understands the law.


New Draft Principles of AI Ethics Proposed by the Allen Institute for Artificial Intelligence and the Problem of Election Hijacking by Secret AIs Posing as Real People

September 17, 2017

One of the activities of AI-Ethics.com is to monitor and report on the work of all groups that are writing draft principles to govern the future legal regulation of Artificial Intelligence. Many have been proposed to date. Click here to go to the AI-Ethics Draft Principles page. If you know of a group that has articulated draft principles not reported on our page, please let me know. At this point all of the proposed principles are works in progress.

The latest draft principles come from Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence. The institute, called AI2, was founded by Paul G. Allen in 2014. The mission of AI2 is to contribute to humanity through high-impact AI research and engineering. Paul Allen is the billionaire who co-founded Microsoft with Bill Gates in 1975 instead of completing college. Paul and Bill have changed a lot since their early hacker days, but Paul is still into computers and funding advanced research. Yes, that’s Paul and Bill in the well-known 1981 photo. Believe it or not, Gates was 26 years old when the photo was taken. They recreated the photo in 2013 with the same computers. I wonder if today’s facial recognition AI could tell that these are the same people?

Oren Etzioni, who runs AI2, is also a professor of computer science. Oren is very practical minded (he is on the No-Fear side of the superintelligent AI debate) and makes some good legal points in his proposed principles. Professor Etzioni suggests three laws as a start to this work. He says he was inspired by Asimov, although his proposal bears little similarity to Asimov’s Laws. The AI-Ethics Draft Principles page begins with a discussion of Isaac Asimov’s famous Three Laws of Robotics.

Below is the new material about the Allen Institute’s proposal that we added at the end of the AI-Ethics.com Draft Principles page.

_________

Oren Etzioni, a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence has created three draft principles of AI Ethics shown below. He first announced them in a New York Times Editorial, How to Regulate Artificial Intelligence (NYT, 9/1/17). See his TED Talk Artificial Intelligence will empower us, not exterminate us (TEDx Seattle; November 19, 2016). Etzioni says his proposed rules were inspired by Asimov’s three laws of robotics.

  1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.
  2. An A.I. system must clearly disclose that it is not human.
  3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

We would certainly like to hear more. As Oren said in the editorial, he introduces these three “as a starting point for discussion. … it is clear that A.I. is coming. Society needs to get ready.” That is exactly what we are saying too. AI Ethics Work Should Begin Now.

Oren’s editorial included a story to illustrate the second rule on the duty to disclose. It involved a teaching assistant at Georgia Tech named Jill Watson, who served in an online course on artificial intelligence. The engineering students were all supposedly fooled for the entire semester into thinking that Watson was a human. She was not. She was an AI. It is kind of hard to believe that smart tech students would not suspect that a teaching assistant named Watson, whom no one had ever seen or heard of before, was a bot. After all, it was a course on AI.

This story was confirmed by a later reply to the editorial from Ashok Goel, the Georgia Tech professor who so fooled his students. Professor Goel, who supposedly is a real flesh-and-blood teacher, assures us that his engineering students were all very positive about having been tricked in this way. Ashok’s defensive Letter to the Editor said:

Mr. Etzioni characterized our experiment as an effort to “fool” students. The point of the experiment was to determine whether an A.I. agent could be indistinguishable from human teaching assistants on a limited task in a constrained environment. (It was.)

When we did tell the students about Jill, their response was uniformly positive.

We were aware of the ethical issues and obtained approval of Georgia Tech’s Institutional Review Board, the office responsible for making sure that experiments with human subjects meet high ethical standards.

Etzioni’s proposed second rule states: An A.I. system must clearly disclose that it is not human. We suggest that the word “system” be deleted, as it does not add much, and that the rule be adopted immediately. It is urgently needed, not just to protect student guinea pigs, but all humans, especially those using social media. Many humans are being fooled every day by bots posing as real people and creating fake news to manipulate real people. The democratic process is already under siege by dictators exploiting this regulation gap. Kupferschmidt, Social media ‘bots’ tried to influence the U.S. election. Germany may be next (Science, Sept. 13, 2017); Segarra, Facebook and Twitter Bots Are Starting to Influence Our Politics, a New Study Warns (Fortune, June 20, 2017); Wu, Please Prove You’re Not a Robot (NYT July 15, 2017); Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online (Oxford, UK: Project on Computational Propaganda).

In the concluding section of the 2017 scholarly paper Computational Propaganda by Woolley and Guilbeault, The Rise of Bots: Implications for Politics, Policy, and Method, the authors state:

The results of our quantitative analysis confirm that bots reached positions of measurable influence during the 2016 US election. … Altogether, these results deepen our qualitative perspective on the political power bots can enact during major political processes of global significance. …
Most concerning is the fact that companies and campaigners continue to conveniently undersell the effects of bots. … Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. … Several independent analyses show that bots supported Trump much more than Clinton, enabling him to more effectively set the agenda. Our qualitative report provides strong reasons to believe that Twitter was critical for Trump’s success. Taken altogether, our mixed methods approach points to the possibility that bots were a key player in allowing social media activity to influence the election in Trump’s favour. Our qualitative analysis situates these results in their broader political context, where it is unknown exactly who is responsible for bot manipulation – Russian hackers, rogue campaigners, everyday citizens, or some complex conspiracy among these potential actors.
Despite growing evidence concerning bot manipulation, the Federal Election Commission in the US showed no signs of recognizing that bots existed during the election. There needs to be, as a minimum, a conversation about developing policy regulations for bots, especially since a major reason why bots are able to thrive is because of laissez-faire API access to websites like Twitter. …
The report exposes one of the possible reasons why we have not seen greater action taken towards bots on behalf of companies: it puts their bottom line at risk. Several company representatives fear that notifying users of bot threats will deter people from using their services, given the growing ubiquity of bot threats and the nuisance such alerts would cause. … We hope that the empirical evidence in this working paper – provided through both qualitative and quantitative investigation – can help to raise awareness and support the expanding body of evidence needed to begin managing political bots and the rising culture of computational propaganda.

This is a serious issue that requires immediate action, if not voluntarily by social media providers, such as Facebook and Twitter, then by law. We cannot afford to have another election hijacked by secret AIs posing as real people.
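For what it is worth, the disclosure rule is technically trivial to comply with. The following Python sketch is a toy illustration, not any platform’s actual API: a bot operator could simply wrap every outgoing message so the reader always knows it came from an automated account.

```python
BOT_DISCLOSURE = "[Automated account] "


def compose_reply(draft_text: str) -> str:
    """Prepend a standing disclosure so no reader mistakes the bot for a human."""
    if draft_text.startswith(BOT_DISCLOSURE):
        return draft_text  # avoid doubling the label on already-tagged text
    return BOT_DISCLOSURE + draft_text


print(compose_reply("Here are today's trending election headlines."))
# [Automated account] Here are today's trending election headlines.
```

Real enforcement would have to happen at the platform or account level, of course, but the sketch suggests the barrier to adopting Etzioni’s second rule is will, not engineering.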

As Etzioni stated in his editorial:

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. A.I. systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former President Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

See: Langston, Lip-syncing Obama: New tools turn audio clips into realistic video (UW News, July 11, 2017). Here is the University of Washington YouTube video demonstrating their dangerous new technology. Seeing is no longer believing. Fraud is a crime and must be enforced as such. If the government will not do so for some reason, then self-regulation and individual legal actions may be necessary.

In the long term, Oren’s first point about the application of laws is probably the most important of his three proposed rules: An A.I. system must be subject to the full gamut of laws that apply to its human operator. As mostly lawyers around here, we strongly agree. We also agree with his recommendation in the NYT editorial:

Our common law should be amended so that we can’t claim that our A.I. system did something that we couldn’t understand or anticipate. Simply put, “My A.I. did it” should not excuse illegal behavior.

We think liability law will develop accordingly. In fact, we think the common law already provides for such vicarious liability. No need to amend; clarify would be a better word. We are not really terribly concerned about that. We are more concerned with technology governors and behavioral restrictions, although a liability stick will be very helpful. We have team membership openings now for experienced products liability lawyers and regulators.

