Waymo v. Uber, Hide-the-Ball Ethics and the Special Master Report of December 15, 2017

December 17, 2017

The biggest civil trial of the year was delayed by U.S. District Court Judge William Alsup due to e-discovery issues that arose at the last minute. This happened in a trade-secret case brought by Google’s self-driving car division, Waymo, against Uber. Waymo LLC v. Uber Techs., Inc. (Waymo I), No. 17-cv-00939-WHA (JSC) (N.D. Cal. November 28, 2017). The trial was scheduled to begin in San Francisco on December 4, 2017 (it had already been delayed once by another discovery dispute). The trial was delayed at Waymo’s request to give it time to investigate a previously undisclosed, inflammatory letter by an attorney for Richard Jacobs. Judge Alsup had just been told of the letter by the United States Attorney’s Office in the Northern District of California. Judge Alsup immediately shared the letter with both Waymo’s and Uber’s attorneys.

At the November 28, 2017, hearing Judge Alsup reportedly accused Uber’s lawyers of withholding this evidence, forcing him to delay the trial until Waymo’s lawyers could gather more information about the contents of the letter. NYT (11/28/17). The NY Times reported Judge Alsup as stating:

I can no longer trust the words of the lawyers for Uber in this case … You should have come clean with this long ago … If even half of what is in that letter is true, it would be an injustice for Waymo to go to trial.

NYT (11/28/17).

Judge Alsup was also reported to have said to Uber’s lawyers in the open court hearing of November 28, 2017:

You’re just making the impression that this is a total coverup … Any company that would set up such a surreptitious system is just as suspicious as can be.

CNN Tech (11/28/17).

Judge Alsup was upset by both the cover-up of the Jacobs letter and by the contents of the letter. The letter essentially alleged a widespread criminal conspiracy to hide and destroy evidence in all litigation, not just the Waymo case, by various means, including use of: (1) specialized communication tools that encrypt ephemeral communications, such as instant messages, and cause them to self-destruct; (2) personal electronic devices and accounts not traceable to the company; and (3) fake attorney-client privilege claims. Judge Alsup reportedly opened the hearing on the request for continuance by admonishing attorneys that counsel in future cases can be “found in malpractice” if they do not turn over evidence from such specialized tools. Fortune (12/2/17). That is a fair warning to us all. For instance, do any of your key custodians use specialized self-destruct communications tools like Wickr or Telegram?

Qualcomm Case All Over Again?

The alleged hide-the-email conduct here looks like it might be a high-tech version of the infamous Qualcomm case in San Diego. Qualcomm Inc. v. Broadcom Corp., No. 05-CV-1958-B(BLM) Doc. 593 (S.D. Cal. Aug. 6, 2007); Qualcomm, Inc. v. Broadcom Corp., 2008 WL 66932 (S.D. Cal. Jan. 7, 2008) (Plaintiff Qualcomm intentionally withheld from production several thousand important emails, a fact not revealed until cross-examination at trial of one honest witness).

The same rules of professional conduct are, or may be, involved in both Qualcomm and Waymo. Rule 3.3 of the ABA Model Rules, Candor Toward the Tribunal, provides in relevant part:

(a) A lawyer shall not knowingly:
(1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer; . . .
(b) A lawyer who represents a client in an adjudicative proceeding and who knows that a person intends to engage, is engaging or has engaged in criminal or fraudulent conduct related to the proceeding shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal.

Rule 3.4, Fairness to Opposing Party and Counsel, provides in relevant part:

A lawyer shall not:
(a) unlawfully obstruct another party’s access to evidence or otherwise unlawfully alter, destroy, or conceal a document or other material that the lawyer knows or reasonably should know is relevant to a pending or a reasonably foreseeable proceeding; nor counsel or assist another person to do any such act.

As we will see, it looks so far as if Uber and its in-house attorneys are the ones who knew about the withheld documents and the destruction scheme, not Uber’s actual counsel of record. It all gets a little fuzzy to me with all of the many law firms involved, but so far the actual counsel of record for Uber claim to have been as surprised by the letter as Waymo’s attorneys, even though the letter was directed to Uber’s in-house legal counsel.

Sarbanes-Oxley Violations?

In addition to possible ethics violations in Waymo v. Uber, the attorneys for Uber consultant Richard Jacobs contended that Uber was hiding evidence in violation of the Sarbanes-Oxley Act of 2002, Pub. L. 107-204, § 802, 116 Stat. 745, 800 (2002), which states in relevant part:

whoever knowingly alters, destroys, mutilates, conceals, covers up, falsifies, or makes a false entry in any record, document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper administration of any matter within the jurisdiction of any department or agency of the United States or any case filed under title 11, or in relation to or contemplation of any such matter or case, shall be fined under this title, imprisoned not more than 20 years, or both.

18 U.S.C. § 1519. The Sarbanes-Oxley Act applies to private companies and has a broad reach, not limited to litigation that has already been filed, much less to formal discovery requests. Section 1519 “covers conduct intended to impede any federal investigation or proceeding including one not even on the verge of commencement.” Yates v. United States, – U.S. –, 135 S.Ct. 1074, 1087 (2015).

The Astonishing “Richard Jacobs Letter” by Clayton Halunen

The alleged ethical and legal violations in Waymo LLC v. Uber Techs., Inc. are based upon Uber’s failure to produce a “smoking gun” type of letter (email) and the contents of that letter. Although the letter is referred to as the Jacobs letter, it was actually written by Clayton D. Halunen of Halunen Law, an attorney for Richard Jacobs, a former Uber employee and current Uber consultant. Although this 37-page letter dated May 5, 2017, was not written by Richard Jacobs himself, it purports to represent how Jacobs would testify to support employment claims he was making against Uber. It was provided to Uber’s in-house employment counsel, Angela Padilla, in lieu of an interview of Jacobs that she was seeking.

A redacted copy of the letter dated May 5, 2017, has been released to the public and is very interesting for many reasons. I did not add the yellow highlighting seen in this letter and am unsure who did.

In fairness to Uber I point out that the letter states on its face in all caps that it is a RULE 408 CONFIDENTIAL COMMUNICATION FOR SETTLEMENT PURPOSES ONLY VIA EMAIL AND U.S. MAIL, a fact that does not appear to have been argued as grounds for Uber not producing the letter to Waymo in Waymo v. Uber. That may be because Rule 408 of the Federal Rules of Evidence states that although such settlement communications are not admissible to “prove or disprove the validity or amount of a disputed claim or to impeach by a prior inconsistent statement or a contradiction,” they are admissible “for another purpose, such as proving a witness’s bias or prejudice, negating a contention of undue delay, or proving an effort to obstruct a criminal investigation or prosecution.” Also, Rule 408 pertains to admissibility, not discoverability, and Rule 26(b)(1) of the Federal Rules of Civil Procedure still says that “Information within this scope of discovery need not be admissible in evidence to be discoverable.”

The letter claims that Richard Jacobs has a background in military intelligence, essentially a spy, although those portions of the letter were heavily redacted. I tend to believe this for several reasons, including the fact that I could not find a photograph of Jacobs anywhere. That is very rare. The letter goes on to describe the “unlawful activities within Uber’s ThreatOps division.” Jacobs Letter at pg. 3. The alleged illegal activities included fraud, theft, hacking, espionage and “knowing violations” of Sarbanes-Oxley by:

Uber’s efforts to evade current and future discovery requests, court orders, and government investigations in violation of state and federal law as well as ethical rules governing the legal profession. Clark devised training and provided advice intended to impede, obstruct, or influence the investigation of several ongoing lawsuits against Uber and in relation to or contemplation of further matters within the jurisdiction of the United States. …

Jacobs then became aware that Uber, primarily through Clark and Henley, had implemented a sophisticated strategy to destroy, conceal, cover up, and falsify records or documents with the intent to impede or obstruct government investigations as well as discovery obligations in pending and future litigation. Besides violating 18 U.S.C. § 1519, this conduct constitutes an ethical violation.

Pages 5, 6 of Jacobs Letter. The practices included the alleged mandatory use of a program called WickrMe, which “programs messages to self-destruct in a matter of seconds to no longer than six days. Consequently, Uber employees cannot be compelled to produce records of their chat conversations because no record is retained.” Letter pg. 6.

Remember, Judge Alsup reportedly began the trial continuance hearing of November 28, 2017, by admonishing attorneys that in future cases they could be “found in malpractice” if they do not turn over evidence from such specialized communications tools. Fortune (12/2/17). There are a number of other secure messaging apps in addition to Wickr that have encryption and self-destruct features.

There are also services on the web that will send self-destructing messages for you, such as PrivNote. This is a rapidly changing area so do your own due diligence.
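To make concrete how a self-destruct feature defeats later collection, here is a minimal, hypothetical Python sketch of a message store with a TTL (time-to-live). The class and field names are my own inventions for illustration; this does not describe the actual design of Wickr, Telegram, PrivNote, or any other product.

```python
import time
from dataclasses import dataclass


@dataclass
class EphemeralMessage:
    sender: str
    text: str
    sent_at: float
    ttl_seconds: float  # e.g., a few seconds up to six days (518,400 s)


class EphemeralStore:
    """Keeps messages only until their TTL lapses. Expired messages are
    purged on every read, so there is no record left to preserve,
    collect, or produce in discovery."""

    def __init__(self) -> None:
        self._messages: list[EphemeralMessage] = []

    def send(self, sender: str, text: str, ttl_seconds: float) -> None:
        self._messages.append(
            EphemeralMessage(sender, text, time.time(), ttl_seconds))

    def read_all(self) -> list[str]:
        now = time.time()
        # Purge first: anything past its TTL is gone for good.
        self._messages = [m for m in self._messages
                          if now - m.sent_at < m.ttl_seconds]
        return [f"{m.sender}: {m.text}" for m in self._messages]


store = EphemeralStore()
store.send("alice", "self-destructs in 2 seconds", ttl_seconds=2)
print(store.read_all())  # message is still readable
time.sleep(3)
print(store.read_all())  # [] -- nothing remains for a litigation hold
```

The e-discovery point of the sketch: a litigation hold cannot reach data the system never retains, so the preservation decision is effectively made at the moment the TTL is set.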

Uber CEO Dara Khosrowshahi reacted to the November 29, 2017 hearing and Judge Alsup’s comments by tweeting that same day that Uber employees did use Wickr and another program like it, Telegram, but no longer do.

True that Wickr, Telegram were used often at Uber when I came in. As of Sept 27th I directed my teams NOT to use such Apps when discussing Uber-related business.

This seems like a good, smart move on the part of Uber’s new CEO. It is also an ethical move in a sometimes ethically challenged Silicon Valley culture. The culture is way too filled with selfish Ayn Rand devotees for my taste. I hope this leads to large-scale housekeeping by Khosrowshahi. Matt Kallman, a spokesman for Uber, said after the public release of the letter:

While we haven’t substantiated all the claims in this letter — and, importantly, any related to Waymo — our new leadership has made clear that going forward we will compete honestly and fairly, on the strength of our ideas and technology.

NYT (12/15/17). You know the old saying about Fool me once …

Back to the Jacobs letter: it also alleges at pgs. 6-9 the improper use of fake attorney-client privilege claims to hide evidence:

Further, Clark and Henley directly instructed Jacobs to conceal documents in violation of Sarbanes-Oxley by attempting to “shroud” them with attorney-client privilege or work product protections. Clark taught the ThreatOps team that if they marked communications as “draft,” asked for a legal opinion at the beginning of an email, and simply wrote “attorney-client privilege” on documents, they would be immune from discovery.

The letter also alleges the intentional use of personal computers and accounts to conduct Uber business that Uber wanted to hide from disclosure. Letter pgs. 7-8.

The letter at pages 9-26 then details facts purporting to show illegal intelligence gathering activities by Uber on a global scale, violating multiple state and federal laws, including:

  • Economic Espionage Act
  • Uniform Trade Secrets Act
  • California Uniform Trade Secrets Act
  • Racketeer Influenced and Corrupt Organizations Act (RICO)
  • Wire Fraud law at 18 U.S.C. § 1343, and California Penal Code § 528.5
  • Wiretap Act at 18 U.S.C. § 2510 et seq.
  • Computer Fraud and Abuse Act (CFAA)
  • Foreign Corrupt Practices Act (FCPA)

Special Master John L. Cooper

Judge Alsup referred the discovery issues raised by Uber’s non-disclosure of the “Jacobs Letter” to the Special Master handling many of the discovery disputes in this case, John L. Cooper of Farella Braun + Martel LLP. The Special Master Report with Cooper’s recommendations concerning the issues raised by the late disclosure of the letter is dated December 15, 2017. Cooper’s report is a public record that can be found here. This is his excellent introduction to the dispute, found at pages 1-2 of his report.

The trial of this trade secrets case was continued for a second time after the belated discovery of inflammatory communications by a former Uber employee came to light outside the normal discovery process. On April 14, 2017, Richard Jacobs sent a resignation e-mail to Uber’s then-CEO and then-general counsel, among others, accusing Uber of having a dedicated division with a “mission” to “steal trade secrets in a series of code-named campaigns” and engaging in other allegedly wrongful or inappropriate conduct. A few weeks later, on May 5, 2017, Mr. Jacobs’ lawyer, Clayton Halunen, sent a letter to Angela Padilla, Uber’s Vice President and Deputy General Counsel for Litigation and Employment. That 37-page letter expanded in some detail on Mr. Jacobs’ e-mailed accusations regarding clandestine and concerted efforts to steal competitors’ trade secrets, including those belonging to Waymo. It also addressed allegations touching on Anthony Levandowski’s alleged downloading of Waymo trade secrets. The Jacobs Letter laid out what his lawyer described as a set of hardware and software programs, and usage protocols that would help Uber to allegedly carry out its thefts and other corporate espionage in secret and with minimized risk of evidence remaining on Uber servers or devices. By mid-August Mr. Jacobs and Uber settled their disputes and executed a written settlement agreement on August 14-15, 2017.

Despite extensive discovery and multiple Court orders to produce an extensive amount of information related to the accusations in the Jacobs Materials, Waymo did not learn of their existence until after November 22, when the Court notified the parties that a federal prosecutor wrote a letter to this Court disclosing the gist of the Jacobs allegations.

The Special Master’s report then goes on to analyze whether Uber was obligated to produce the Jacobs Materials in response to any of the Court’s prior orders or Waymo’s discovery requests. In short, Master Cooper concluded that the Jacobs Materials were not directly covered by any of the prior court orders, but that the Jacobs Letter was responsive to certain discovery requests propounded by Waymo, and Uber was obligated to produce it in response to those requests.

Special Master Cooper goes on to describe at page 7 of his report the Jacobs letter by Halunen. To state the obvious, this is clearly a “hot” document with implications that go well beyond this particular case.

That 37-page letter set forth multiple allegations relating to alleged efforts by Uber individuals and divisions. Among other things, the letter alleges that Uber planned to use certain hardware devices and software to conceal the creation and destruction of corporate records that, as a result, “would never be subject to legal discovery.” See ECF No. 2307-2 at 7. These activities, Mr. Jacobs’ lawyer asserted, “implicate ongoing discovery disputes, such as those in Uber’s litigation with Waymo.” Id. at 9. He continued:

Specifically, Jacobs recalls that Jake Nocon, Nick Gicinto, and Ed Russo went to Pittsburgh, Pennsylvania to educate Uber’s Autonomous Vehicle Group on using the above practices with the specific intent of preventing Uber’s unlawful schemes from seeing the light of day.

Jacobs’ observations cast doubt on Uber’s representation in court proceedings that no documents evidencing wrongdoing can be found on Uber’s systems and that other communications are actually shielded by the attorney-client privilege. Aarian Marshall, Judge in Waymo Dispute Lets Uber’s Self-driving Program Live—for Now, wired.com (May 3, 2017 at 8:47p.m.) (“Lawyers for Waymo also said Uber had blocked the release of 3,500 documents related to the acquisition of Otto on the grounds that they contain privileged information …. Waymo also can’t quite pin down whether Uber employees saw the stolen documents or if those documents moved anywhere beyond the computer Levandowski allegedly used to steal them. (Uber lawyers say extensive searches of their company’s system for anything connected to the secrets comes up nil.)”), available at (citation omitted).

Id. at 9-10.

Uber Attorney Angela Padilla

Angela Padilla was Uber’s Vice President and Deputy General Counsel for Litigation and Employment. She testified on these issues. Here is Special Master Cooper’s summary at pages 8-9 of his report:

Ms. Padilla testified in this Court that she read the letter “in brief” and turned it over to other Uber attorneys, including Ms. Yoo, to begin an internal investigation. Nov. 29, 2017 Hr’g Tr. at 15:17-24. The letter also made its way to two separate committees of Uber’s Board of Directors, including the committee that was or is overseeing special litigation, including this case and the Jacobs matter. Id. at 20:10-13; 26:23-25. On June 27, Uber disclosed the allegations in the Jacobs Letter to the U.S. Attorney for the Northern District of California. Id. at 27:20-14. It disclosed the Jacobs Letter itself on or around September 12 to the same U.S. Attorney’s Office, to another U.S. Attorney, in the Southern District of New York, and to the U.S. Department of Justice in Washington. Id. at 28:4-10. Ms. Padilla testified that Uber made these disclosures to multiple prosecutors “to take the air out of [Jacobs’] extortionist balloon.” Id. at 28:18-19. Nearly one month before that distribution of the letter to federal prosecutors, on August 14, Uber settled with Mr. Jacobs—the terms of which included $4.5 million in compensation to Jacobs and $3 million to his lawyers. See id. at 62:6-63:12.

I have to pause here for a minute because the settlement amount takes my breath away: not only the payment of $4.5 million to Richard Jacobs, who had a salary of $130,000 per year, but also the additional payment of $3 million to his lawyers. That is an incredible sum for writing a couple of letters, although I am sure they would claim to have put much more into their representation than meets the eye.

Other Attorneys for Uber Involved

Back to Special Master Cooper’s summary of the testimony of Uber attorney Padilla and other facts in the record about attorney knowledge of the “smoking gun” Jacobs letter (footnotes omitted):

Uber distributed the Jacobs E-Mail to two of Uber’s counsel of record at Morrison Foerster (“MoFo”) in this case. See Dec. 4, 2017 Hr’g Tr. at 46:1-47:5. Other MoFo attorneys directly involved in this case and related discovery issues e-mailed with other MoFo attorneys in late April about “Uber’s ediscovery systems regarding potential investigation into Jacobs resignation letter.” See Waymo Ex. 21.

None of the Uber outside counsel working on this case got a copy of the Jacobs Letter. Neither did the two Uber in-house lawyers who were or are handling this case; Ms. Padilla testified that she did not send it to them. Nov. 29, 2017 Hr’g Tr. at 47:8-16. By late June, some attorneys from Boies Schiller and Flexner, also counsel in this matter for Uber, had discussions with other outside counsel and Ms. Padilla about issues arising from the internal investigation triggered by the Jacobs Materials. See Waymo Ex. 20, Entries 22-22(h).

So now you know the names of the attorneys involved, and not involved, according to Special Master Cooper at page 9 of his report. Apparently none of the actual counsel of record knew about it. I would have to assume, and I think the court will too, that this was intentional. It was so clever as to be obvious, or, as the British would say, too clever by half.

U.S. Attorney Notifies Judge Alsup of the Jacobs Letter

To complete the procedural background, here is what happened next, leading to the referral to the Special Master. Note that it is extraordinary for a U.S. Attorney to take action like this and notify a District Court Judge of a piece of evidence, especially just before a trial. Judge Alsup said that he had never had such a thing happen in his courtroom. The U.S. Attorney for the Northern District of California is Brian Stretch. Obviously he was concerned about the fairness of Uber’s actions. In my opinion this was a good call by Stretch.

On November 22, 2017, the U.S. Attorney for the Northern District of California notified this Court of the Jacobs allegations and specifically referenced the account Jacobs put in his letter about the efforts to keep the Ottomotto acquisition secret. See ECF No. 2383. The Court on the same day issued an order disclosing receipt of the letter from the U.S. Attorney and asked the parties to inform the Court about the extent of any prior disclosure of the Jacobs allegations. See ECF Nos. 2260-2261. After continuing the trial date in light of the parties’ responses to that query, the Court on December 4, 2017, ordered the Special Master “to determine whether and to what extent, including the history of this action and both sides’ past conduct, defendants were required to earlier produce the Jacobs letter, resignation email, or settlement agreement, or required to provide any information in those documents in response to interrogatories, Court orders, or other agreements among counsel.” ECF No. 2334, 2341.

Special Master report at pgs. 9-10.

Special Master Cooper’s Recommended Ruling

Master Cooper found that the Richard Jacobs letter was responsive to two of Waymo’s requests for production: RFP 29 and RFP 73. He rejected Uber’s argument that it was not responsive to any request, an argument that must have been difficult to make concerning a document this hot. They tried to make the argument seem more reasonable by saying that even if the letter was “generally relevant,” it was not responsive. They then cited cases standing for the proposition that you have no duty to produce relevant documents that you are not going to rely on, namely documents adverse to your position, unless they are specifically requested. Here is a quote of the conclusion of that argument from page 16 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.

Congress has specified in Rule 26(a)(ii) what documents must be unilaterally produced, and they are only those that a party “may use to support its claims or defenses.” Thus, a party cannot use a document against an adversary at trial that the party failed to disclose. However, Rule 26 very pointedly does not require the production of any documents other than those that a party plans to use “to support” its claims. Obviously, Uber is not seeking to use any of the documents at issue to support its claims. If Waymo believes that this rule should be changed, that is an issue they need to address with Congress, not with the Court.

Master Cooper did not address that argument because he found the documents were in fact both relevant and directly responsive to two of Waymo’s requests for production.

Uber’s attorneys also made what I consider a novel argument that even if the Jacobs letter was found to be responsive, they still did not have to produce it because, get this, it did not include any of the keywords that they agreed to use to search for documents in those categories. Incredible. What difference does that make, if they knew about the document anyway? Their client, Uber, specifically including in-house counsel, Ms. Padilla, clearly knew about it. The letter was addressed to her. Are they suggesting that Uber did not know about the letter because some of their outside counsel did not know about it? Special Master Cooper must have had the same reaction, as he disposed of this argument in short order at page 17 of his report:

Uber argues that, in some scenarios, reliance on search terms is enough to satisfy a party’s obligation to find responsive documents. See, e.g., T.D.P. v. City of Oakland, No. 16-cv-04132-LB, 2017 WL 3026925, at *5 (N.D. Cal. July 17, 2017) (finding certain search terms adequate for needs of case). But I find there are two main reasons why an exclusive focus on the use of search terms is inappropriate for determining whether the Jacobs Letter should have been produced in response to RFP 29 and RFP 73.

First, the parties never reached an agreement to limit their obligation to searching for documents to only those documents that hit on agreed-upon search terms. See Waymo Ex. 5 (Uber counsel telling Waymo during search-term negotiations that “Waymo has an obligation to conduct a reasonable search for responsive documents separate and apart from any search term negotiations”). (Emphasis added)

Second, Uber needed no such help in finding the Jacobs Materials. They were not stowed away in a large volume of data on some server. They were not stashed in some low-level employee’s files. Parties agree to use search terms and to look into the records of the most likely relevant custodians to help manage the often unwieldy process of searching through massive amounts of data. These methods are particularly called for when a party, instead of merely having to look for a needle in a haystack, faces the prospect of having to look for lots of needles in lots of haystacks. This needle was in Uber’s hands the whole time.

I would add that this needle was stuck deep into their hands, such that they were bleeding profusely. Maybe the outside attorneys did not see it, but Uber sure did, and it had a duty to advise its attorneys. Uber’s attorneys would have been better off saving their powder for attacking the accuracy of the contents of the Jacobs letter and talking about the fast pace of discovery. They did that, but only as a short concluding argument, almost an afterthought. See pages 16-19 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.
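For readers outside e-discovery, a toy Python illustration of the keyword-culling argument Master Cooper rejected may help. The search terms and letter text below are invented stand-ins, not the parties’ actual negotiated terms or the letter’s actual wording.

```python
# Invented, simplified stand-ins for agreed-upon search terms and the
# letter text -- not the actual negotiated terms in Waymo v. Uber.
agreed_terms = {"lidar", "levandowski", "otto", "download"}

letter_text = """Uber implemented a sophisticated strategy to destroy,
conceal, cover up, and falsify records or documents with the intent to
impede or obstruct government investigations as well as discovery
obligations in pending and future litigation."""

hits = {term for term in agreed_terms if term in letter_text.lower()}
print(hits)  # set() -- a facially responsive document with zero term hits
# Keyword culling only manages the search for unknown documents; it says
# nothing about a known document already sitting in counsel's hands.
```

The sketch shows why “no keyword hits” is beside the point: term lists are a tool for finding documents you do not know about, not an excuse for withholding one you do.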

Here is another theoretical argument that Uber’s lawyers threw up and Cooper’s practical response at pages 17-18 of his report:

Uber argues that it cannot be that the mere possession and knowledge of a relevant document must trigger a duty to scrutinize it and see if it matches any discovery requests. It asked at the December 12, 2017, hearing before the Special Master: Should every client be forced to instruct every one of its employees to turn over every e-mail and document to satisfy its discovery obligations to produce relevant and responsive documents? Must every head of litigation for every company regularly confronted with discovery obligations search their files for responsive documents, notwithstanding any prior agreement with the requesting party to search for responsive documents by the use of search terms?

It is not easy, in the abstract, to determine where the line regarding the scope of discovery search should be drawn. But this is not a case involving mere possession of some document. The facts in this case suggest that Ms. Padilla knew of the Jacobs Letter at the time Uber had to respond to discovery requests calling for its production—it certainly was “reasonably accessible.” Mr. Jacobs’ correspondence alleged systemic, institutionalized, and criminal efforts by Uber to conceal evidence and steal trade secrets, and not just as a general matter but also specifically involving the evidence and trade secrets at issue in this case—maybe the largest and most significant lawsuit Uber has ever faced. Ms. Padilla, Uber’s vice president and deputy general counsel for litigation and employment received the Jacobs Materials around the same time that discovery in this case was picking up and around the same time that the Court partially granted Waymo’s requested provisional relief. Shortly after that, Uber told federal prosecutors about the Jacobs allegations and then later sent them a copy of the letter. It sent the materials to outside counsel, including lawyers at MoFo that Uber hired to investigate the allegations. Two separate Uber board committees got involved, including the committee overseeing this case. Uber paid Mr. Jacobs $4.5 million, and his lawyer $3 million, to settle his claims.

The Federal Rules obligate a party to produce known, relevant and reasonably accessible material that on its face is likely to be responsive to discovery requests. RFP 29 and RFP 73 were served on Uber on May 9, just a few days after Ms. Padilla received the Jacobs Letter on May 5. Uber was therefore obligated to conduct a reasonable inquiry into those requests (and all others it received) to see if it had documents responsive to those requests and produce non-privileged responsive documents.

Special Master John Cooper concluded by finding that the “Jacobs letter was responsive to Waymo’s Request for Production No. 29 and Request for Production No. 73, and Uber should have produced it to Waymo in response to those requests.” It was beyond the scope of his assignment as Special Master to determine the appropriate remedy. Uber will now probably challenge this report and Judge William Alsup will rule.

Like everyone else, I expect Judge Alsup will agree with Cooper’s report. The real question is what remedy he will provide to Waymo and what sanctions, if any, he will impose.


At the hearing on the request for a trial delay on November 28, 2017, Judge William Alsup reportedly told Uber’s in-house attorney, Angela Padilla:

Maybe you’re in trouble … This document should have been produced … You wanted this case to go to trial so that they didn’t have the document, then it turns out the U.S. attorney did an unusual thing. Maybe the guy [Jacobs] is a disgruntled employee but that’s not your decision to make, that’s the jury’s.

The Recorder (November 29, 2017).

In response to Angela Padilla saying that Jacobs was just an “extortionist” and that the allegations in his letter were untrue, Judge Alsup reportedly responded:

Here’s the way it looks … You said it was a fantastic BS letter with no merit and yet you paid $4.5 million. To someone like me and people out there, mortals, that’s a lot of money, that’s a lot of money. And people don’t pay that kind of money for BS and you certainly don’t hire them as consultant if you think everything they’ve got to contribute is BS. On the surface it looks like you covered this up.

The Recorder (November 29, 2017).

Judge William Alsup is one of the finest judges on the federal bench today. He is a man of unquestioned integrity and intellectual acumen. He is a Harvard Law graduate, class of 1971, and a former law clerk for Justice William O. Douglas of the Supreme Court of the United States, 1971-1972. How Judge Alsup reacts to the facts in Waymo LLC v. Uber Techs., Inc., now that he has the report of Special Master Cooper, will likely have a profound impact on e-discovery and legal ethics for years to come.

No matter what actions Judge Alsup takes next, the actions of Uber and its attorneys in this case will be discussed for many years to come. Did the attorneys’ non-disclosure violate Rule of Professional Conduct 3.3, Candor Toward the Tribunal? Did they violate Rule 3.4, Fairness to Opposing Party and Counsel? Also, what about Rule 26(g) of the Federal Rules of Civil Procedure? Other rules of ethics and procedure? Did Uber’s actions violate the Sarbanes-Oxley Act? Other laws? Was it fraud?

Finally, and these are critical questions, did Uber breach its duty to preserve evidence when it knew that litigation was reasonably likely? Did its attorneys do so if they knew of these practices? What sanctions are appropriate for destruction of evidence under Rule 37(e) and the Court’s inherent authority? Should an adverse inference be imposed? A default judgment?

The preservation-related issues are big questions that I suspect Judge Alsup will now address. These issues and his rulings, and those of other judges who will likely face the same issues soon in other cases, will impact many corporations, not just Uber. The use of software such as Wickr and Telegram is apparently already widespread. In what circumstances and for what types of communications may the use of such technologies place a company (or individual) at risk for severe sanctions in later litigation? Personally, I oppose intentionally ephemeral devices, where all information self-destructs, but, at the same time, I strongly support the right of encryption and privacy. It is a question of balance between openness and truth on the one hand, and privacy and security on the other. How attorneys and judges respond to these competing challenges will impact the quality of justice and life in America for many years to come.


Ethical Guidelines for Artificial Intelligence Research

November 7, 2017

The most complete set of AI ethics developed to date, the twenty-three Asilomar Principles, was created by the Future of Life Institute in early 2017 at their Asilomar Conference. Ninety percent or more of the attendees at the conference had to agree upon a principle for it to be accepted. The first five of the agreed-upon principles pertain to AI research issues.

Although all twenty-three principles are important, the research issues are especially time sensitive. That is because AI research is already well underway by hundreds, if not thousands, of different groups. There is a current compelling need to have some general guidelines in place for this research. AI Ethics Work Should Begin Now. We still have a little time to develop guidelines for the advanced AI products and services expected in the near future, but as to research, the train has already left the station.

Asilomar Research Principles

Other groups are also concerned with AI ethics and regulation, including research guidelines. See the Draft Principles page of AI-Ethics.com, which lists principles from six different groups. The five draft principles developed at Asilomar are, however, a good place to start examining the regulation needed for research.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Principle One: Research Goal

The proposed first principle is good, but the wording? Not so much. The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. This is a double-negative English language mishmash that only an engineer could love. Here is one way this principle could be better articulated:

Research Goal: The goal of AI research should be the creation of beneficial intelligence, not undirected intelligence.

Researchers should develop intelligence that is beneficial for all of mankind. The first general principle of the Institute of Electrical and Electronics Engineers (IEEE) is entitled “Human Benefit.” The Asilomar first principle is slightly different. It does not really say human benefit. Instead it refers to beneficial intelligence. I think the intent is to be more inclusive, to include all life on earth, indeed all of the earth. IEEE has that covered too in its background statement of purpose to “Prioritize the maximum benefit to humanity and the natural environment.”

Pure research, where raw intelligence is created just for the hell of it, with no intended helpful “direction” of any kind, should be avoided. “Because we can” is not a valid goal. Pure, raw intelligence, with neither good intent nor bad, is not the goal here. The research goal is beneficial intelligence. Asilomar is saying that undirected intelligence is unethical and should be avoided. Social values must be built into the intelligence. This is subtle, but important.

The restriction to beneficial intelligence is somewhat controversial, but the other side of this first principle is not. Namely, that research should not be conducted to create intelligence that is hostile to humans.  No one favors detrimental, evil intelligence. So, for example, the enslavement of humanity by Terminator AIs is not an acceptable research goal. I don’t care how bad you think our current political climate is.

To be slightly more realistic, if you have a secret research goal of taking over the world, such as Max Tegmark imagines in The Tale of the Omega Team in his book Life 3.0, and we find out, we will shut you down (or try to). Even if it is all peaceful and well-meaning, and no one gets hurt, as Max visualizes, plotting world domination by machines is not a positive value. If you get caught researching how to do that, some of the more creative prosecuting lawyers around will find a way to send you to jail. We have all seen the cheesy movies, and so have the juries, so do not tempt us.

Keep a positive, pro-humans, pro-Earth, pro-freedom goal for your research. I do not doubt that we will someday have AI smarter than our existing world leaders, perhaps sooner than many expect, but that does not justify a machine take-over. Wisdom comes slowly and is different from intelligence.

Still, what about autonomous weapons? Is research into advanced AI in this area beneficial? Are military defense capabilities beneficial? Pro-security? Is the slaughter of robots not better than the slaughter of humans? Could robots be more ethical at “soldiering” than humans? As attorney Matt Scherer, editor of the good blog LawAndAI.com and a Future of Life Institute member, has noted:

Autonomous weapons are going to inherently be capable of reacting on time scales that are shorter than humans’ time scales in which they can react. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision making process. Right now there’s very clear, well-established laws in place about who is responsible for specific military decisions, under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, on what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field on the best way to react to different military situations.

Podcast: Law and Ethics of Artificial Intelligence (Future of Life, 3/31/17).

The question of beneficial or not can become very complicated, fast. Like it or not, military research into killer robots is already well underway, in both the public and private sectors. Kalashnikov Will Make an A.I.-Powered Killer Robot: What could possibly go wrong? (Popular Mechanics, 7/19/17); Congress told to brace for ‘robotic soldiers’ (The Hill, 3/1/17); US military reveals it hopes to use artificial intelligence to create cybersoldiers and even help fly its F-35 fighter jet – but admits it is ALREADY playing catch up (Daily Mail, 12/15/15) (a little dated and sensationalistic perhaps, but an easy read with several videos).

AI weapons are a fact, but they should still be regulated, in the same way that we have regulated nuclear weapons since WWII. Tom Simonite, AI Could Revolutionize War as Much as Nukes (Wired, 7/19/17); Autonomous Weapons: an Open Letter from AI & Robotics Researchers.

Principle Two: Research Funding

The second principle, Research Funding, is more than an enforcement mechanism for the first, that you should only fund beneficial AI. It is also a recognition that ethical work requires funding too. This should be every lawyer’s favorite AI ethics principle. Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies. The principle then adds a list of four bullet-point examples.

How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? The goal of avoiding the creation of AI systems that can be hacked, easily or not, is a good one. If a hostile power can take over and misuse an AI for evil ends, then the built-in beneficence may be irrelevant. The example that comes to mind is a driverless car that could be hacked and crashed as a perverse joy-ride, kidnapping or terrorist act.

The economic issues raised by the second example are very important: How can we grow our prosperity through automation while maintaining people’s resources and purpose? We do not want a system that only benefits the top one percent, or top ten percent, or whatever. It needs to benefit everyone, or at least try to. Also see Asilomar Principle Fifteen: Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Yoshua Bengio, Professor of Computer Science at the University of Montreal, had this important comment to make on the Asilomar principles during an interview at the end of the conference:

I’m a very progressive person so I feel very strongly that dignity and justice mean wealth is redistributed. And I’m really concerned about AI worsening the effects and concentration of power and wealth that we’ve seen in the last 30 years. So this is pretty important for me.

I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously – I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.

Most everyone at the Asilomar Conference agreed with that sentiment, but I do not yet see a strong consensus in AI businesses. Time will tell if profit motives and greed will at least be constrained by enlightened self-interest. Hopefully capitalist leaders will have the wisdom to share with all of society the great wealth that AI is likely to create.

How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? The legal example is also a good one, with the primary tension we see so far being between fair and efficient. Just policing high-crime areas might well be efficient, at least for reducing some types of crime, but would it be fair? Do we want to embed racial profiling into our AI? Neighborhood slumlord profiling? Religious or ethnic profiling? No. Existing law prohibits that, and for good reason. Still, predictive policing is already a fact of life in many cities, and we need to be sure it has proper legal, ethical regulation.

We have seen the tension between “speedy” and “inexpensive” on the one hand, and “just” on the other, in Rule One of the Federal Rules of Civil Procedure and in e-discovery. When active machine learning was applied, a technical solution to these competing goals was attained. The predictive coding methods we developed allowed for both precision (“speedy” and “inexpensive”) and recall (“just”). Hopefully this success can be replicated in other areas of the law where machine learning is under proportional control by experienced human experts.
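For context on those two measures, here is a minimal Python sketch, with invented counts, showing how precision and recall are computed in a document review; the numbers are illustrative only.

```python
def precision(true_pos: int, false_pos: int) -> float:
    # Share of retrieved documents that are actually relevant:
    # high precision keeps review "speedy" and "inexpensive."
    return true_pos / (true_pos + false_pos)


def recall(true_pos: int, false_neg: int) -> float:
    # Share of all relevant documents that were actually retrieved:
    # high recall is what makes the production "just."
    return true_pos / (true_pos + false_neg)


# Invented example: a review retrieves 9,000 relevant documents and
# 1,000 irrelevant ones, while missing 500 relevant documents.
print(f"precision = {precision(9000, 1000):.2f}")  # 0.90
print(f"recall    = {recall(9000, 500):.2f}")      # 0.95
```

The two measures pull against each other: casting a wider net raises recall but drags in more irrelevant documents, which is why achieving both at once was the technical accomplishment.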

The final example given is much more troubling: What set of values should AI be aligned with, and what legal and ethical status should it have? Whose values? Who is to say what is right and wrong? This is easy in a dictatorship, or a uniform, monochrome culture (sea of white dudes), but it is very challenging in a diverse democracy. This may be the greatest research funding challenge of all.

Principle Three: Science-Policy Link

This principle is fairly straightforward, but will in practice require a great deal of time and effort to be done right. A constructive and healthy exchange between AI researchers and policy-makers is necessarily a two-way street. It first of all assumes that policy-makers, which in most countries includes government regulators, not just industry, have a valid place at the table. It assumes some form of government regulation. That is anathema to some in the business community who assume (falsely in our opinion) that all government is inherently bad and essentially has nothing to contribute. The countervailing view of overzealous government controllers who just want to jump in, uninformed, and legislate, is also discouraged by this principle. We are talking about a healthy exchange.

It does not take an AI to know this kind of give and take and information sharing will involve countless meetings. It will also require a positive, healthy attitude between the two groups. If it gets bogged down into an adversarial relationship, you can multiply the cost of compliance (and the number of meetings) by two or three. If it goes to litigation, we lawyers will smile in our tears, but no one else will. So researchers, you are better off not going there. A constructive and healthy exchange is the way to go.

Principle Four: Research Culture

The need for a good culture applies in spades to the research community itself. The Fourth Principle states: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. This favors the open source code movement for AI, but runs counter to the trade-secret business models of many corporations. See, e.g., OpenAI.com; DeepMind Open Source; Liam Tung, ‘One machine learning model to rule them all’: Google open-sources tools for simpler AI (ZDNet, 6/20/17).

This tension is likely to increase as multiple parties get close to a big breakthrough. The successful efforts for open source now, before superintelligence seems imminent, may help keep the research culture positive. Time will tell, but if not, there could be trouble all around, and the promise of full employment for litigation attorneys.

Principle Five: Race Avoidance

The Fifth Principle is a tough one, but very important: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Moving fast and breaking things may be the mantra of Silicon Valley, but the impact of bad AI could be catastrophic. Bold is one thing, but reckless is quite another. In this area of research there may not be leisure for constant improvements to make things right. HackerWay.org.

Not only will there be legal consequences, mass liability, for any group that screws up, but the PR blow alone from a bad AI mistake could destroy most companies. Loss of trust may never be regained by a wary public, even if Congress and Trial Lawyers do not overreact. Sure, move fast, but not so fast that you become unsafe. Striking the right balance is going to require an acute technical, ethical sensitivity. Keep it safe.

Last Word

AI ethics is hard work, but well worth the effort. The risks and rewards are very high. The place to start this work is to talk about the fundamental principles and try to reach consensus. Everyone involved in this work is driven by a common understanding of the power of the technology, especially artificial intelligence. We all see the great changes on the horizon and share a common vision of a better tomorrow.

During an interview at the end of the Asilomar conference, Dan Weld, Professor of Computer Science, University of Washington, provided a good summary of this common vision:

In the near term I see greater prosperity and reduced mortality due to things like highway accidents and medical errors, where there’s a huge loss of life today.

In the longer term, I’m excited to create machines that can do the work that is dangerous or that people don’t find fulfilling. This should lower the costs of all services and let people be happier… by doing the things that humans do best – most of which involve social and interpersonal interaction. By automating rote work, people can focus on creative and community-oriented activities. Artificial Intelligence and robotics should provide enough prosperity for everyone to live comfortably – as long as we find a way to distribute the resulting wealth equitably.

Moravec’s Paradox of Artificial Intelligence and a Possible Solution by Hiroshi Yamakawa with Interesting Ethical Implications

October 29, 2017

Have you heard of Moravec’s Paradox? This is a principle discovered by AI robotics expert Hans Moravec in the 1980s. He discovered that, contrary to traditional assumptions, high-level reasoning requires relatively little computational power, whereas low-level sensorimotor skills require enormous computational resources. The paradox is sometimes simplified by the phrase: robots find the difficult things easy and the easy things difficult. Moravec’s Paradox explains why we can now create specialized AI, such as predictive coding software to help lawyers find evidence, or AI software that can beat the top human experts at complex games such as Chess, Jeopardy and Go, but we cannot create robots as smart as dogs, much less as smart as gifted two-year-olds like my granddaughter. Also see the possible economic and cultural implications of this paradox as described, for instance, in Robots will not lead to fewer jobs – but the hollowing out of the middle class (The Guardian, 8/20/17).

Hans Moravec is a legend in the world of AI. An immigrant from Austria, he now serves as a research professor in the Robotics Institute of Carnegie Mellon University. His work includes attempts to develop a fully autonomous robot that is capable of navigating its environment without human intervention. Aside from his paradox discovery, he is well known for a book he wrote in 1988, Mind Children: The Future of Robot and Human Intelligence. This book has become a classic, well known and admired by most AI scientists. It is also fairly easy for non-experts to read and understand, which is a rarity in most fields.

Moravec is also a futurist, with many of his publications and predictions focusing on transhumanism, including Robot: Mere Machine to Transcendent Mind (Oxford U. Press, 1998). In Robot he predicted that machines will attain human levels of intelligence by the year 2040, and that by 2050 they will have far surpassed us. His prediction may still come true, especially if the exponential acceleration of computational power following Moore’s Law continues. But for now, we still have a long way to go. The video below gives funny examples of this in a compilation of robots falling down during a DARPA competition.

But then just a few weeks after this blog was originally published, we are shown how far along robots have come. This November 16, 2017, video of the latest Boston Dynamics robot is a dramatic example of accelerating, exponential change.

Yamakawa on Moravec’s Paradox

A recent interview of Hiroshi Yamakawa, a leading researcher in Japan working on Artificial General Intelligence (AGI), sheds light on the Moravec Paradox. See the April 5, 2017 interview of Dr. Hiroshi Yamakawa by a host of AI experts: Eric Gastfriend, Jason Orlosky, Mamiko Matsumoto, Benjamin Peterson, and Kazue Evans. The interview is published by the Future of Life Institute, where you will find the full transcript and more details about Yamakawa.

In his interview Hiroshi explains the Moravec Paradox and the emerging best hope for its solution, deep learning.

The field of AI has traditionally progressed with symbolic logic as its center. It has been built with knowledge defined by developers and manifested as AI that has a particular ability. This looks like “adult” intelligence ability. From this, programming logic becomes possible, and the development of technologies like calculators has steadily increased. On the other hand, the way a child learns to recognize objects or move things during early development, which corresponds to “child” AI, is conversely very difficult to explain. Because of this, programming some child-like behaviors is very difficult, which has stalled progress. This is also called Moravec’s Paradox.

However, with the advent of deep learning, development of this kind of “child” AI has become possible by learning from large amounts of training data. Understanding the content of learning by deep learning networks has become an important technological hurdle today. Understanding our inability to explain exactly how “child” AI works is key to understanding why we have had to wait for the appearance of deep learning.

Hiroshi Yamakawa calls his approach to deep learning the Whole Brain Architecture approach.

The whole brain architecture is an engineering-based research approach “To create a human-like artificial general intelligence (AGI) by learning from the architecture of the entire brain.”  … In short, the goal is brain-inspired AI, which is essentially AGI. Basically, this approach to building AGI is the integration of artificial neural networks and machine-learning modules while using the brain’s hard wiring as a reference. However, even though we are using the entire brain as a building reference, our goal is not to completely understand the intricacies of the brain. In this sense, we are not looking to perfectly emulate the structure of the brain but to continue development with it as a coarse reference.

Yamakawa sees at least two advantages to this approach.

The first is that since we are creating AI that resembles the human brain, we can develop AGI with an affinity for humans. Simply put, I think it will be easier to create an AI with the same behavior and sense of values as humans this way. Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other. …

The second merit of this unique approach is that if we successfully control this whole brain architecture, our completed AGI will arise as an entity to be shared with all of humanity. In short, in conjunction with the development of neuroscience, we will increasingly be able to see the entire structure of the brain and build a corresponding software platform. Developers will then be able to collaboratively contribute to this platform. … Moreover, with collaborative development, it will likely be difficult for this to become “someone’s” thing or project. …

Act Now for AI Safety?

As part of the interview Yamakawa was asked whether he thinks it would be productive to start working on AI safety now. As readers here know, one of the major points of the AI-Ethics.com organization I started is that we need to begin work now on such regulations. Fortunately, Yamakawa agrees. His promising Whole Brain Architecture approach to deep learning, as a way to overcome Moravec’s Paradox, will thus likely have a strong ethics component. Here is Hiroshi Yamakawa’s full, very interesting answer to this question.

I do not think it is at all too early to act for safety, and I think we should progress forward quickly. Technological development is accelerating at a fast pace as predicted by Kurzweil. Though we may be in the midst of this exponential development, since the insight of humans is relatively linear, we may still not be close to the correct answer. In situations where humans are exposed to a number of fears or risks, something referred to as “normalcy bias” in psychology typically kicks in. People essentially think, “Since things have been OK up to now, they will probably continue to be OK.” Though this is often correct, in this case, we should subtract this bias.

If possible, we should have several methods to be able to calculate the existential risk brought about by AGI. First, we should take a look at the Fermi Paradox. This is a type of estimation process that proposes that we can estimate the time at which intelligent life will become extinct based on the fact that we have not yet met with alien life and on the probability that alien life exists. However, using this type of estimation would result in a rather gloomy conclusion, so it doesn’t really serve as a good guide as to what we should do. As I mentioned before, it probably makes sense for us to think of things from the perspective of increasing decision making bodies that have increasing power to bring about the destruction of humanity.

