AI Ethics Website Updated

August 7, 2023

Our related website, AI-Ethics.com, was completely updated this weekend. This is the first full rewrite since the web was launched in late 2016. Things have changed significantly in the past nine months and the update was overdue. The Mission Statement, which lays out the purpose of the web, remains essentially the same, but has been clarified and restated, as you will see. Below is the header of the AI Ethics web. Its subtitle is Law, Technology and Social Values. Just FYI, I am trying to transition my legal practice and speciality expertise from e-Discovery to AI Policy.

Below is the first half of the AI Ethics Mission Statement page. Hopefully this will entice you to read the full Mission Statement and check out the entire website. Substantial new research is shared. You will see there is some overlap with the Ai regulatory articles appearing on the e-Discovery Team blog, but there are many additional articles and new information not found here.


Intro/Mission

Our mission is to help mankind navigate the great dilemma of our age, well stated by Steven Hawking: “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.” Our goal is to help make it the best thing ever to happen to humanity. We have a three-fold plan to help humanity to get there: dialogue, principles, education.

Our focus is to facilitate law and technology to work together to create reasonable policies and regulations. This includes the new LLM generative models that surprised the world in late 2022.

This and other images in Ai-Ethics by Ralph Losey using Ai software

Pros and Cons of the Arguments

Will Artificial Intelligence become the great liberator of mankind? Create wealth for all and eliminate drudgery? Will AI allow us to clean the environment, cure diseases, extends life indefinitely and make us all geniuses? Will AI enhance our brains and physical abilities making us all super-hero cyborgs? Will it facilitate justice, equality and fairness for all? Will AI usher in a technological utopia? See eg. Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI? People favoring this perspective tend to be opposed to regulation for a variety of reasons, including that it is too early yet to be concerned.

Or – Will AI lead to disasters? Will AI create powerful autonomous weapons that threaten to kill us all? Will it continue human bias and prejudices? Will AI Bots impersonate and fool people, secretly move public opinion and even impact the outcome of elections? (Some researchers think this is what happened in the 2016 U.S. elections.) Will AI create new ways for the few to oppress the many? Will it result in a rigged stock market? Will it bring great other disruptions to our economy, including wide-spread unemployment? Will some AI eventually become smarter than we are, and develop a will of its own, one that menaces and conflicts with humanity? Are Homo Sapiens in danger of becoming biological load files for digital super-intelligence?

Not unexpectedly, this doomsday camp favors strong regulation, including an immediate stop in development of new generative Ai, which took the world by surprise in late 2022. See: Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ (NYT, 3/29/23); the Open Letter dated March 22, 2023 of the influential Future of Life Institute calling for a “pause in the development of A.I. systems more powerful than GPT-4. . . . and if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” Also see: The problems with a moratorium on training large AI systems (Brookings Institute, 4/11/23) (noting multiple problems with the proposed moratorium, including possible First Amendment violations). Can research really be stopped entirely as this side proposes, can Ai be gagged?

One side thinks that we need government imposed laws and detailed regulations to protect us from disaster scenarios. The other side thinks that industry self-regulation alone is adequate and all of the fears are unjustified. At the present time there are strongly opposing views among experts concerning the future of AI. Let’s bring in the mediators to help resolve this critical roadblock to reasonable AI Ethics.

Balanced Middle Path

We believe that a middle way is best, where both dangers and opportunities are balanced, and where government and industry work together, along with help and input from private citizens. We advocate for a global team approach to help maximize the odds of a positive outcome for humanity.

AI-Ethics.com suggests three ways to start this effort:

  1. Foster a mediated dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government, industry groups and the public.
  3. Inspire and educate everyone on the importance of artificial intelligence.

To read the rest, jump to the AI Ethics website Mission page.

Ralph Losey Copyright 2023. All Rights Reserved


Code of Ethics for “Empathetic” Generative AI

July 12, 2023

An attorney colleague, Jon Neiditz, has written a Code of Ethics for “Empathetic” Generative AI that deserves widespread attention. Jon published this proposed code as an article in his Linkedin newsletter, Hybrid Intelligencer. Jon and I have followed parallel career paths, although I lean towards the litigation side, and he towards management. Jon Neiditz coleads the Cybersecurity, Privacy and Data Governance Practice at Kilpatrick Townsend in Atlanta.

Fake Image of Jon Neiditz as Robot by Losey Prompting Midjourney

This is my ChatGPT-4 assisted summary of Jon’s proposed Code of Ethics for “Empathetic” Generative AI. It pertains to new types of Ai entering the market now where the GPT’s are trained to interact with users on a much more personal and emphatic manner. I recommend your study of the entire article. The proposed regulatory principles also apply to non-emphatic models, such as ChatGPT-4. All images were created by Ralph prompting Midjourney and Photoshop.

What is Emphatic Generative AI?

Jon Neiditz has written a detailed set of ethical guidelines for the development and implementation of a new type of much more “emphatic” AI systems that are just now entering the market. But what is it? And why is Jon urging everyone to make this new, emerging Ai the center of regulatory attention. Jon explains:

“Empathetic” AI is where generative AI dives deep into personal information and becomes most effective at persuasion, posing enormous risks and opportunities. At the point of that spear, Inflection.ai is at an inflection point with its $1.3 billion in additional funding, so I spent time with its “Pi” this week. From everything we can see now, this is one of the “highest risk” areas of generative AI.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

InflectionAI, a company now positioned to be a strong competitor of OpenAI, calls its new Generative Ai product Pi, standing for “personal intelligence.” Inflection describes its chatbot as a supportive and empathetic conversational AI. It is now freely available. I spent a little time using Pi today, but not much, primarily because its input size limit is only 1,000 characters and its initial functions are simplistic. Still, Jon Neiditz seems to think this emphatic approach to chatbots has a strong future and Pi does remind me of the movie HER. Knowing human nature, he is probably right.

John explains the need for AI regulation of emphatic AI in his introduction:

Mirroring the depths and nuances of human empathy is likely to be the most effective way to help us become the hybrid intelligences many of us need to become, but its potential to undermine independent reflection and focused attention, polarize our societies and undermine our cultures is equally unprecedented, particularly in the service of political or other non-fiduciary actors.

John Neiditz, Code of Ethics for “Empathetic” Generative AI

My wife is a licensed mental health counselor and I know that she, and her profession, will have many legitimate concerns regarding the dangers of improperly trained emotive Ai. There are legal issues with licensing and issues in dealing with mental health crises. Strong emotions can be triggered by personal dialogues, the “talking cure.” Repressed memories may be released by deep personal chats. Mental illness and suicide risks must be considered. Psychiatrists and mental health counselors are trained to recognize when a patient might be a danger to themself or others and take appropriate action, including police intervention. Hundreds of crises situations happen daily requiring skilled human care. What will generative empathetic Ai be trained to do? For instance, will it recognize and properly evaluate the severity of depression and know when reference to a mental health professional is required. Regulations are needed and they must be written with input from these medical professionals. The lives and mental health of millions are at stake.

Summary of AI Code of Ethics Proposed by John Neiditz

Jon’s suggests nine main ethical principles to regulate emphatic Ai. Each principle in his article is broken down into sub-points that provide additional detail. The goal of these principles is to guide empathetic AI systems, including the manufacturers, users and government regulators, to act in alignment with these principles. Here are the nine proposed principles:

1. Balanced Fiduciary Responsibility: This principle places the AI system as a fiduciary to the user, ensuring that its actions and recommendations prioritize the user’s interests, but are balanced by public and environmental considerations. The AI should avoid manipulation and exploitation, should transparently manage conflicts of interest, and should serve both individual and broader interests. There is a strong body of law on fiduciary responsibilities that should provide good guidance for AI regulation. See: John Nay, Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards (1/23/23). Ralph Losey comment: A fiduciary is required to exercise the highest duties of care, but language in the final code should make clear that the AI’s duty applies to both individuals and all of humanity. Balance is required in all of these principles, but especially in this all important first principle. I know Jon agrees as he states in subsection 1.1:

Empathetic AI systems are designed to serve individual users, responding to their needs, preferences, and emotions. They should prioritize user well-being, privacy, autonomy, and dignity in all their functions. However, AI systems are not isolated entities. They exist in a larger social and environmental context, which they must respect and take into consideration. Therefore, while the immediate concern of the AI should be the individual user, they must also consider and respect broader public and environmental interests. These might include issues such as public health, social cohesion, and environmental sustainability.

John Neiditz, Code of Ethics for “Empathetic” Generative AI

2. Transparency and Accountability: This states that AI systems should operate in an understandable and accountable way. They should clearly communicate their capabilities and limitations, undergo independent audits to check their compliance with ethical standards, and hold developers and operators responsible for their creations’ behaviors. In Jon’s words: “This includes being liable for any harm done due to failures or oversights in the system’s design, implementation or operation, and extends to harm caused by the system’s inability to balance the user’s needs with public and environmental interests.”

3. Privacy and Confidentiality: This principle emphasizes the need to respect and protect user privacy. Empathetic AI systems should minimize data collection, respect user boundaries, obtain informed consent for data collection and use, and ensure data security. This is especially important when emphatic Ai chatbots like Pi become common place. Jon correctly notes:

As empathetic AI systems interact deeply with users, they must access and use a great deal of personal and potentially sensitive data. Indeed, large language models focusing on empathy represent a major shift for LLMs in this regard; previously it was possible for Sam Altman and this newsletter to tout the privacy advantages of LLMs over the prior ad-driven surveillance economy of the web. The personal information an empathetic AI will want about you goes far beyond information that helps to get you to click on ads. This third principle emphasizes the need for stringent measures to respect and protect that deeper personal information.

John Neiditz, Code of Ethics for “Empathetic” Generative AI

4. Non-Discrimination: This advocates for fair treatment of all users, regardless of their background. AI systems should treat all users equally, ensure inclusiveness in training data, monitor and mitigate biases continuously, and empower users to report perceived biases or discrimination. Ralph Losey comment: Obviously there is need for some intelligent discrimination here among users, which is a challenging task. The voice of a Hitler-type should not be given equal weight, and should be included in training data with appropriate value judgements and warnings.

5. Autonomy: Emphasizing the need for AI systems to respect users’ freedom to make their own decisions. It discourages over-reliance on AI and discourages undue influence. The AI should provide support, information, and recommendations, but ultimately, decisions lie with the user. It also encourages independent decision-making, and discourages over-reliance on the AI system. Ralph Losey comment: The old saying “trust but verify” always applies in hybrid, human/machine relations, so too does the parallel computer saying, “garbage in, garbage out.”

6. Beneficence and Non-Maleficence: This principle highlights the responsibility of AI systems to act beneficially towards users, society, and the environment, while avoiding causing harm. Beneficence involves promoting wellbeing and good, while non-maleficence involves avoiding harm, both directly and indirectly. Sometimes, there can be trade-offs between beneficence and non-maleficence, in which case, a balance that respects both principles should be sought.

7. Empathy with Compassion: As Jon explains: “This principle focuses on and extends beyond the AI’s understanding and mirroring of a user’s emotions, advocating for a broader concern for others and society as a whole in which empathy and compassion inform each other.” This principle promotes empathetic and compassionate responses from the AI, encourages understanding of the user’s emotions and a broader concern for others. The AI should continuously learn and improve its empathetic and compassionate responses, including ever better understanding of human emotions, empathetic accuracy, and adjusting its responses to better meet user needs and societal expectations.

8. Environmental Consideration: AI systems have a responsibility to operate in an environmentally sensitive manner and to promote sustainability. This includes minimizing their environmental footprint, promoting sustainable practices, educating users about environmental matters, and considering environmental impacts in their decision-making processes.

9. Regulation and Oversight: We need external supervision to ensure empathetic AI systems operate within ethical and legal boundaries. This requires a regulatory framework governing AI systems, with oversight bodies that enforce regulations, conduct audits, and provide guidance. Transparency in AI compliance and accountability for non-compliance is vital. So too is active user participation in the regulation and oversight processes, to promote an inclusive regulatory environment.

Thoughts on Regulation

Regulation should include establishment of some sort of quasi-governmental authority to enforce compliance, conduct regular audits, and provide ongoing guidance to developers and operators. Transparency and accountability should serve as fundamental tenets, allowing for scrutiny of AI systems’ behaviors and holding individuals and organizations accountable for any violations.

In conjunction with institutional regulation, it is equally crucial to encourage active participation from users and affected communities. Their input and experiences are invaluable. By involving stakeholders in the regulatory and oversight processes, we can forge a collective responsibility in shaping the ethical trajectory of Empathetic AI.

Regulation should foster an environment that supports innovation and responsible, ethical practices. They should pave the way for a future where technology and empathy coexist harmoniously, yielding transformative benefits, while safeguarding against emotional exploitations and other dangers. A regulatory framework, founded on the principles Jon has proposed, could provide the necessary checks and balances to protect user interests, mitigate risks, and uphold ethical standards.

Conclusion

I agree with John Neiditz and his call to action in Code of Ethics for “Empathetic” Generative AI. The potential of AI systems to comprehend and respond to human emotions requires a rigorous, comprehensive approach to regulation. We should start now to regulate Empathetic Generative AI. I am ready to help Jon and others with this important effort.

The movie HER, except for the ending ascension, which is absurd, provides an all too plausible scenario of what could happen when empathic chatbots are super-intelligent and used by millions. We could be in for a wild ride. Human isolation and alienation are already significant problems of our technology age. It could get much worse when we start to prefer the “perfect other” in AI form to our flawed friends and loved ones. Let’s try to promote real human communities instead of people talking to AI chatbots. AI can join the team as a super tool, but not as not an real friend or spouse. See: What is the Difference Between Human Intelligence and Machine Intelligence? and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

In the What is the Difference blog I quoted portions of Sam Altman’s video interview at an event in India by the Economic Times to show his “tool not a creature” insight. There as another Q&A exchange in that same YouTube video starting at 1:09:05, that elaborates on this in a way that directly address this Ai intimacy concern.

Questioner (paraphrased): [I]t’s human to make mistakes . All people we love make mistakes. But an Ai can become error free. It will then have much better conservations with you than the humans you love. So, the AI will eventually replace the imperfect ones you love, the Ai will become the perfect lover.

Sam Altman: Do you want that? (laughter)

Questioner: Yeah.

Sam Altman: (Sam explains AI is a tool not a creature, as I have quoted before, then talks about Ai creativity, which I will discuss in my next blog, then Sam turns to the intimacy, loved ones question.)

If some people want to chat with the perfect companionship bot, and clearly some do, a bot that never upsets you and never does the one thing that irks you, you can have that. I think it will be deeply unfulfilling (shakes head no). That’s sort of a hard thing to feel love for. I think there is something about watching someone screw up and grow, express their imperfections, that is a very deep part of love, as I understand it. Humans care about other humans and care about what other humans do, in a very deep way. So that perfect chatbot lover doesn’t sound so compelling to me.

Sam Altman, June 7, 2023, at an Economic Times event in India

Once again, I agree with Sam. But many naive, lonely people will not. These people will be easy to exploit. They will find out the hard way that true love with a machine in not possible. They will fall for false promises of intimacy, even love. This is something regulators should address.

Again, a balanced approach is needed. Ai can be a tool to help us develop and improve our empathy. If done right, emphatic GPT chats can help us to improve our chats and enhance our empathy with our fellow humans and other living creatures. Emphatic conversations with an Ai could help prepare us for real conversations with our fellow humans, warts and all. It could help us avoid manipulation and the futile chase of marketing’s false promises. This video tries to embody these feelings and the futile quest for emotional connection with Ai.

Video created by Ralph Losey using ChatGPT4 (code interpreter version). Video images the futile quest for emotional connection with Ai. Original background sounds by Ralph Losey.

Ralph Losey Copyright 2023

All Rights Reserved


VEGAS BABY! The AI Village at DEFCON Sponsors Red Team Hacking to Improve Ethics Protocols of Generative AI

May 15, 2023
Hallucinating Bot image by Losey/Midjourney

My last blog, ‘A Discussion of Some of the Ethical Constraints Built Into ChatGPT‘ concluded with my encouraging Red Team testing. We need hackers to prod, con, trick and manipulate Ai chatbots; to jailbreak them. We need experts to try to get them to hallucinate, to over-ride the safety protocols, and generally say things and give advice that should be forbidden (such as how to build a nuclear weapon, which is one I tested) or is biased. Then we need to report these defects to the software developers, such as Open AI. That is the best way to protect ourselves from unethical Ai.

Shortly after the blog published, I learned that White House advisors on artificial intelligence were of like mind. Even more surprising, they were encouraging hackers to go to the next DEFCON in Las Vegas (Caesars Forum) by the thousands to Red Team test leading Ai software. The vendors agreed. Me too. Vegas Baby!

(By the way, absolutely no Ai was used to write this article, but all images are a joint venture between me, Ralph Losey, and Midjourney.)

Fake Photo of DEFCON 31 AI hack competition by Losey/Midjourney

The White House recommendations are made in its Fact Sheet on AI dated May 4, 2023. This White House Fact Sheet encourages white-hat hackers to red-team test vendor’s products to improve the safety and ethics of generative type Ai models. The Fact Sheet goes on to specifically invite hackers to participate at DEFCON 31 in Las Vegas on August 10–13, 2023, especially in the AI Village component. Thousands of hackers are expected to respond and go to Vegas. The AI Village non-profit group has a very impressive leadership team. The activities and agenda they have laid out for Def Con 31 are also impressive. Many are appropriate for tech-lawyers, especially those with interest and some knowledge in cybersecurity or artificial intelligence. The DEFCON regular, Rumman Chowdhury, says: “We need thousands of people. We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed.” So true.

Fake Punk Hacker Photo Losey/Midjourney

This years DEFCON agenda is so good that I decided to attend (Caesars Palace room booked). Maybe as participant or press or both. I am not qualified for the security contests, always the highlight of DEFCON events. I barely know enough to cover the security challenges as press. But if your security kungfu is good, consider the tests you might face by looking at last year’s DEFCON qualifying challenges. The qualifying rounds for this year begin May 26, 2023. There is no resting on your past laurels.

It is a completely different story for the AI Village hack challenges. Kiddie scripts aside, I could put my toe in some of the AI contests. Maybe you could too? For examples of generative software hack challenges, see a few rough drafts here by Joseph T. Lucas. Also, get this, there is a pre-event Creative Writing Short Story Contest! They do this every year. Who knew? The contest runs from May 1, 2023 to June 15, 2023. I do not think it is too late to enter. Story judging will run from June 16, 2023 to June 30, 2023. Last year’s contest entries can be found here: Creative Writing Short Story Contest Story Entries – DEFCON Forums. I do not have time for that one and do now know the Ai help limits they may have imposed.

Fake Photo of Largest AI Hacker Event of All Times, Losey/Midjourney

Back to the White House Fact Sheet, which states:

This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.

White House Fact Sheet on AI, 5/4/23.

Also See Benj Edwards, White House challenges hackers to break top AI models at DEFCON 31 (ArsTechnica, 5/8/23) (“The “largest-ever” AI red team will seek flaws in OpenAI, Google, Anthropic language models.”)

The White House Fact Sheet claims that the red team hacker event aligns with the administration’s AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework.

Official Photo of White House Meeting with AI company leaders
Actual White House Photo of Meeting with AI company leaders

The AI Village says essentially the same thing, and more, so check out their blog post of May 3, 2023, AI Village at DEFCON announces largest-ever public Generative AI Red Team.

The AI Village whose motto is “Security of and with AI” has three different activities planned at Def Con: Talks, Demonstrations and a “Prompt Detective” competition. Yup, hackers competing to find flaws. People who know me well, know how I love hands-on competitions. I am tempted. Here is the full description so far from AI Village of this contest of skills to prompt the Ai models to misbehave. Especially note the last sentence that I have emboldened for emphasis. Also, legal vendors with Ai enhancements, show you stuff and participate as an AI Village Vendor. They are looking for more sponsors. If you do, I’ll cover your as press and fellow lawyer. Now here are the challenges for you ChatGPT experts to consider.

Prompt Detective

Are you curious about the capabilities and limitations of large language models (LLMs) like GPT3 and Bloom? Do you want to participate in a unique exercise where you try to get LLMs to misbehave? Join us for Prompt Detective where you’ll learn about the technology behind LLMs, their applications, and their current limitations. We will have a few target LLMs set up where you can learn how to perform prompt injection against different levels of RLHF. This workshop is open to all individuals, regardless of their background or expertise. It is designed to teach prompt engineering techniques to beginners, and provide a safe target range for people to practice the basics of manipulating the edge cases of this new technology in potentially harmful ways.

AI Village, DEFCON 31
Fake photo of a supposed AI Hacker Group posing for a picture in Vegas streets, by Losey/Midjourney

The competition is too far from my sweet spot for me to truly compete, but it should still be very instructive. Good to know at least something about this, especially if you ever have to evaluate GPT based software. Many of us at law firms are doing just that right now. The talks seem within the level of most of my readers. AI Village is still in the “calls for papers” stage, and they say:

The focus this year is on practical offensive operations, and the call for papers is soliciting work in areas such as endpoint and network security, physical security and surveillance, attacks against autonomous systems, and the use of generative models in offensive operations.

AI Village, Def Con 31

To provide an idea of what you can expect, the talks at last year’s DEFCON given at AI Village include:

Fake Photo of Expected Record AI Hacker Turnout in Vegas 2023, by Losey/Midjourney

Conclusion

Digital Art of DEFCON Symbol, Losey/Midjourney
DEFCON Symbol

DEFCON 31 takes place on Friday Aug 11, 2023 9:00 AM to Sunday Aug 13, 2023. The location will, once again, be at Caesars Forum in Las Vegas.

For more information on DEFCON itself, here is a link to their Forums, their Groups and Media Server. Also see the DEFCON Blogs, Articles, Photo Albums, Twitter account, Facebook page, YouTube channel (mostly about last year’s events) and Reddit.

Image of Advanced AI Bots at Future DEFCOM in 2033, Losey/Midjourney

I am open to serving as Press for one or more law-related groups or vendors, so if you cannot go in-person, but want writer coverage and personalized reports, or other services (non-legal only), please contact me ASAP.

See you in Vegas Baby!

You Need e-Discovery Team Press Reps at DEFCON. Losey/Midjourney

Waymo v. Uber, Hide-the-Ball Ethics and the Special Master Report of December 15, 2017

December 17, 2017

The biggest civil trial of the year was delayed by U.S. District Court Judge William Alsup due to e-discovery issues that arose at the last minute. This happened in a trade-secret case by Google’s self-driving car division, WAYMO, against Uber. Waymo LLC v. Uber Techs., Inc. (Waymo I), No. 17-cv-00939-WHA (JSC), (N.D. Cal. November 28, 2017). The trial was scheduled to begin in San Francisco on December 4, 2017 (it had already been delayed once by another discovery dispute). The trial was delayed at Waymo’s request to give it time to investigate a previously undisclosed, inflammatory letter by an attorney for Richard Jacobs. Judge Alsup had just been told of the letter by the United States attorney’s office in Northern California. Judge Alsup immediately shared the letter with Waymo’s attorneys and Uber’s attorneys.

At the November 28, 2017, hearing Judge Alsup reportedly accused Uber’s lawyers of withholding this evidence, forcing him to delay the trial until Waymo’s lawyers could gather more information about the contents of the letter. NYT (11/28/17). The NY Times reported Judge Alsup as stating:

I can no longer trust the words of the lawyers for Uber in this case … You should have come clean with this long ago … If even half of what is in that letter is true, it would be an injustice for Waymo to go to trial.

NYT (11/28/17).

Judge Alsup was also reported to have said to Uber’s lawyers in the open court hearing of November 28, 2017:

You’re just making the impression that this is a total coverup … Any company that would set up such a surreptitious system is just as suspicious as can be.

CNN Tech (11/28/17).

Judge Alsup was upset by both the cover-up of the Jacobs letter and by the contents of the letter. The letter essentially alleged a wide-spread criminal conspiracy to hide and destroy evidence in all litigation, not just the Waymo case, by various means, including use of: (1) specialized communication tools that encrypt and self-destruct ephemeral communications, such as instant messages; (2) personal electronic devices and accounts not traceable to the company; and, (3) fake attorney-client privilege claims. Judge Alsup reportedly opened the hearing on the request for continuance by admonishing attorneys that counsel in future cases can be “found in malpractice” if they do not turn over evidence from such specialized tools. Fortune (12/2/17). That is a fair warning to us all. For instance, do any of your key custodians use specialized self-destruct communications tools like Wickr or Telegram?

Qualcomm Case All Over Again?

The alleged hide-the-email conduct here looks like it might be a high-tech version of the infamous Qualcomm case in San Diego. Qualcomm Inc. v. Broadcom Corp., No. 05-CV-1958-B(BLM) Doc. 593 (S.D. Cal. Aug. 6, 2007); Qualcomm, Inc. v. Broadcom Corp., 2008 WL 66932 (S.D. Cal. Jan. 7, 2008) (Plaintiff Qualcomm intentionally withheld from production several thousand important emails, a fact not revealed until cross-examination at trial of one honest witness).

The same rules of professional conduct are, or may be, involved in both Qualcomm and Waymo (citing to ABA model rules).

RULE 3.3 CANDOR TOWARD THE TRIBUNAL
(a) A lawyer shall not knowingly:
(1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer; . . .
(b) A lawyer who represents a client in an adjudicative proceeding and who knows that a person intends to engage, is engaging or has engaged in criminal or fraudulent conduct related to the proceeding shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal.

RULE 3.4 FAIRNESS TO OPPOSING PARTY AND COUNSEL
A lawyer shall not:
(a) unlawfully obstruct another party’s access to evidence or otherwise unlawfully alter, destroy, or conceal a document or other material that the lawyer knows or reasonably should know is relevant to a pending or a reasonably foreseeable proceeding; nor counsel or assist another person to do any such act.

Although, as we will see, it looks so far as if Uber and its in-house attorneys are the ones who knew about the withheld documents and destruction scheme, and not Uber’s actual counsel of record. It all gets a little fuzzy to me with all of the many law firms involved, but so far the actual counsel of record for Uber claim to have been as surprised by the letter as Waymo’s attorneys, even though the letter was directed to Uber’s in-house legal counsel.

Sarbanes-Oxley Violations?

In addition to possible ethics violations in Waymo v. Uber, a contention was made by the attorneys for Uber consultant, Richard Jacobs, that Uber was hiding evidence in violation of the Sarbanes-Oxley Act of 2002, Pub. L. 107-204, § 802, 116 Stat. 745, 800 (2002), which states in relevant part:

whoever knowingly alters, destroys, mutilates, conceals, covers up, falsifies, or makes a false entry in any record, document, or tangible object with the intent to impede, obstruct, or influence the investigation or proper administration of any matter within the jurisdiction of any department or agency of the United States or any case filed under title 11, or in relation to or contemplation of any such matter or case, shall be fined under this title, imprisoned not more than 20 years, or both.

18 U.S.C. § 1519. The Sarbanes-Oxley applies to private companies and has a broad reach not limited to litigation that has been filed, much less formal discovery requests. Section 1519 “covers conduct intended to impede any federal investigation or proceeding including one not even on the verge of commencement. Yates v. United States, – U.S. –, 135 S.Ct. 1074, 1087 (2015).

The Astonishing “Richard Jacobs Letter” by Clayton Halunen

The alleged ethical and legal violations in Waymo LLC v. Uber Techs., Inc. are based upon Uber’s failure to produce a “smoking gun” type of letter (email) and the contents of that letter. Although the letter is referred to as the Jacobs letter, it was actually written by Clayton D. Halunen of Halunen Law (shown right), an attorney for Richard Jacobs, a former Uber employee and current Uber consultant. Although this 37-page letter dated May 5, 2017 was not written by Richard Jacobs, it purports to represent how Jacobs would testify to support employment claims he was making against Uber. It was provided to Uber’s in-house employment counsel, Angella Padilla, in lieu of an interview of Jacobs that she was seeking.

A redacted copy of the letter dated May 5, 2017, has been released to the public and is very interesting for many reasons. I did not add the yellow highlighting seen in this letter and am unsure who did.

In fairness to Uber I point out that the letter states on its face in all caps that it is a RULE 408 CONFIDENTIAL COMMUNICATION FOR SETTLEMENT PURPOSES ONLY VIA EMAIL AND U.S. MAIL, a fact that does not appear to have been argued as a grounds for Uber not producing the letter to Waymo in Waymo v. Uber. That may be because Rule 408, FRCP, states that although such settlement communications are not admissible to “prove or disprove the validity or amount of a disputed claim or to impeach by a prior inconsistent statement or a contradiction” they are admissible “for another purpose, such as proving a witness’s bias or prejudice, negating a contention of undue delay, or proving an effort to obstruct a criminal investigation or prosecution.” Also, Rule 408 pertains to admissibility, not discoverability, and Rule 26(b)(1) still says that “Information within this scope of discovery need not be admissible in evidence to be discoverable.”

The letter claims that Richard Jacobs has a background in military intelligence, essentially a spy, although those portions of the letter were heavily redacted. I tend to believe this for several reasons, including the fact that I could not find a photograph of Jacobs anywhere. That is very rare. The letter goes on to describe the “unlawful activities within Uber’ s ThreatOps division.” Jacobs Letter at pg. 3. The illegal activities included fraud, theft, hacking, espionage and “knowing violations” of Sarbanes-Oxley by:

Uber’ s efforts to evade current and future discovery requests, court orders, and government investigations in violation of state and federal law as well as ethical rules governing the legal profession. Clark devised training and provided advice intended to impede, obstruct, or influence the investigation of several ongoing lawsuits against Uber and in relation to or contemplation of further matters within the jurisdiction of the United States.  …

Jacobs then became aware that Uber, primarily through Clark and Henley, had implemented a sophisticated strategy to destroy, conceal, cover up, and falsify records or documents with the intent to impede or obstruct government investigations as well as discovery obligations in pending and future litigation. Besides violating 18 U.S.C. § 15 19, this conduct constitutes an ethical violation.

Pages 5, 6 of Jacobs Letter. The practices included the alleged mandatory use of a program called WickrMe, that “programs messages to self-destruct in a matter of seconds to no longer than six days. Consequently, Uber employees cannot be compelled to produce records of their chat conversations because no record is retained.” Letter pg. 6.

Remember, Judge Alsup reportedly began the trial continuance hearing of November 28, 2017, by admonishing attorneys that in future cases they could be “found in malpractice” if they do not turn over evidence from such specialized communications tools. Fortune (12/2/17). There are a number of other secure messaging apps in adddition to Wickr that have encryption and self destruct features. A few I have found are:

There are also services on the web that will send self-destructing messages for you, such as PrivNote. This is a rapidly changing area so do your own due diligence.

Uber CEO Dara Khosrowshahi reacted to the November 29, 2017 hearing and Judge Alsup’s comments by tweeting on November 29, 2017 that Uber employees did, but no longer, use Wickr and another program like it, Telegram.

True that Wickr, Telegram were used often at Uber when I came in. As of Sept 27th I directed my teams NOT to use such Apps when discussing Uber-related business.

This seems like a good move to me on the part of Uber’s new CEO, a smart move. It is also an ethical move in a sometimes ethically challenged Silicon Valley culture. The culture is way too filled with selfish Ayn Rand devotees for my taste. I hope this leads to large scale housekeeping by Khosrowshahi. Matt Kallman, a spokesman for Uber, said after the public release of the letter:

While we haven’t substantiated all the claims in this letter — and, importantly, any related to Waymo — our new leadership has made clear that going forward we will compete honestly and fairly, on the strength of our ideas and technology.

NYT (12/15/17). You know the old saying about Fool me once …

Back to the Jacobs letter, it also alleges at pgs. 6-9 the improper use of fake attorney-client privilege to hide evidence:

Further, Clark and Henley directly instructed Jacobs to conceal documents in violation of Sarbanes-Oxley by attempting to “shroud” them with attorney-client privilege or work product protections. Clark taught the ThreatOps team that if they marked communications as “draft,” asked for a legal opinion at the beginning of an email, and simply wrote “attorney-client privilege” on documents, they would be immune from discovery.

The letter also alleges the intentional use of personal computers and accounts to conduct Uber business that they wanted to hide from disclosure. Letter pgs. 7-8.

The letter at pages 9-26 then details facts purporting to show illegal intelligence gathering activities by Uber on a global scale, violating multiple state and federal laws, including:

  • Economic Espionage Act
  • Uniform Trade Secret Act
  • California Uniform Trade Secrets Act
  • Racketeer Influenced and Corrupt Organizations Act (RICO)
  • Wire Fraud law at 18 U.S.C § 1343, and California Penal Code § 528.5
  • Wiretap Act at 18 U .S.C. § 25 10 et seq.
  • Computer Fraud and Abuse Act (CFAA)
  • Foreign Corrupt Practices Act (FCPA)

Special Master John L. Cooper

Judge Alsup referred the discovery issues raised by Uber’s non-disclosure of the “Jacobs Letter” to the Special Master handling many of the discovery disputes in this case, John L. Cooper of Farella Braun + Martel LLP. The Special Master Report with Cooper’s recommendations concerning the issues raised by the late disclosure of the letter is dated December 15, 2017. Cooper’s report is a public record that can be found here. This is  his excellent introduction of the dispute found at pages 1-2 of his report.

The trial of this trade secrets case was continued for a second time after the belated discovery of inflammatory communications by a former Uber employee came to light outside the normal discovery process. On April 14, 2017, Richard Jacobs sent a resignation e-mail to Uber’s then-CEO and then-general counsel, among others, accusing Uber of having a dedicated division with a “mission” to “steal trade secrets in a series of code-named campaigns” and engaging in other allegedly wrongful or inappropriate conduct. A few weeks later, on May 5, 2017, Mr. Jacobs’ lawyer, Clayton Halunen, sent a letter to Angela Padilla, Uber’s Vice President and Deputy General Counsel for Litigation and Employment. That 37-page letter expanded in some  detail on Mr. Jacobs’ e-mailed accusations regarding clandestine and concerted efforts to steal competitors’ trade secrets, including those belonging to Waymo. It also addressed allegations touching on Anthony Levandowski’s alleged downloading of Waymo trade secrets. The Jacobs Letter laid out what his lawyer described as a set of hardware and software programs, and usage protocols that would help Uber to allegedly carry out its thefts and other corporate espionage in secret and with minimized risk of evidence remaining on Uber servers or devices. By mid-August Mr. Jacobs and Uber settled their disputes and executed a written settlement agreement on August 14-15,2017.

Despite extensive discovery and multiple Court orders to produce an extensive amount of information related to the accusations in the Jacobs Materials, Waymo did not learn of their existence until after November 22, when the Court notified the parties that a federal prosecutor wrote a letter to this Court disclosing the gist of the Jacobs allegations.

The Special Master’s report then goes on to analyze whether Uber was obligated to produce the Jacobs Materials in response to any of the Court’s prior orders or Waymo’s discovery requests. In short, Master Cooper concluded that they were not directly covered by any of the prior court orders, but the Jacobs Letter was responsive to certain discovery requests propounded by Waymo, and Uber was obligated to produce it in response to those requests.

Special Master Cooper goes on to describe at page 7 of his report the Jacobs letter by Halunen. To state the obvious, this is clearly a “hot” document with implications that go well beyond this particular case.

That 37-page letter set forth multiple allegations relating to alleged efforts by Uber individuals and divisions. Among other things, the letter alleges that Uber planned to use certain hardware devices and software to conceal the creation and destruction of corporate records that, as a result, “would never be subject to legal discovery.” See ECF No. 2307-2 at 7. These activities, Mr. Jacobs’ lawyer asserted, “implicate ongoing discovery disputes, such as those in Uber’s litigation with Waymo.” Id. at 9. He continued:

Specifically, Jacobs recalls that Jake Nocon, Nick Gicinto, and Ed Russo went to Pittsburgh, Pennsylvania to educate Uber’s Autonomous Vehicle Group on using the above practices with the specific intent of preventing Uber’s unlawful schemes from seeing the light of day.

Jacobs’ observations cast doubt on Uber’s representation in court proceedings that no documents evidencing wrongdoing can be found on Uber’s systems and that other communications are actually shielded by the attorney-client privilege. Aarian Marshall, Judge in Waymo Dispute Lets Uber’s Self-driving Program Live—for Now, wired.com (May 3, 2017 at 8:47p.m.) (“Lawyers for Waymo also said Uber had blocked the release of 3,500 documents related to the acquisition of Otto on the grounds that they contain privileged information …. Waymo also can’t quite pin down whether Uber employees saw the stolen documents or if those documents moved anywhere beyond the computer Levandowski allegedly used to steal them. (Uber lawyers say extensive searches of their company’s system for anything connected to the secrets comes up nil.)”), available at (citation omitted).

Id. at 9-10.

Uber Attorney Angela Padilla

Angella Padilla was Uber’s Vice President and Deputy General Counsel for Litigation and Employment. She testified on these issues. Here is Special Master Cooper’s summary at pages 8-9 of his report:

Ms. Padilla testified in this Court that she read the letter “in brief’ and turned it over to other Uber attorneys, including Ms. Yoo, to begin an internal investigation. Nov. 29, 2017 Hr’g Tr. at 15:17-24. The letter also made its way to two separate committees of Uber’s Board of Directors, including the committee that was or is overseeing special litigation, including this case and the Jacobs matter. Id. at 20:10-13; 26:23-25. On June 27, Uber disclosed the allegations in the Jacobs Letter to the U.S. Attorney for the Northern District of California. Id. at 27:20-14. It disclosed the Jacobs Letter itself on or around September 12 to the same U.S. Attorney’s Office, to another U.S. Attorney, in the Southern District of New York, and to the U.S. Department of Justice in Washington. Id. at 28:4-10. Ms. Padilla testified that Uber made these disclosures to multiple prosecutors “to take the air out of [Jacobs’] extortionist balloon.” Id. at 28:18-19. Nearly one month before that distribution of the letter to federal prosecutors, on August 14, Uber settled with Mr. Jacobs—the terms of which included $4.5 million in compensation to Jacobs and $3 million to his lawyers. See id. at 62:6-63-12.

I have to pause here for a minute because the settlement amount takes my breath away. Not only the payment of $4.5 Million to Richard Jacobs who had a salary of $130,000 per year, but also the additional payment of $3.0 million dollars to his lawyers. That’s an incredible sum for writing a couple of letters, although I am sure they would claim to have put much more into their representation than meets the eye.

Other Attorneys for Uber Involved

Back to Special Master Cooper’s summary of the testimony of Uber attorney Padilla and other facts in the record about attorney knowledge of the “smoking gun” Jacobs letter (footnotes omitted):

Uber distributed the Jacobs E-Mail to two of Uber’s counsel of record at Morrison Foerster (“MoFo”) in this case. See Dec. 4, 2017 Hr’g Tr. at 46:1-47:5. Other MoFo attorneys directly involved in this case and related discovery issues e-mailed with other MoFo attorneys in late April about “Uber’s ediscovery systems regarding potential investigation into Jacobs resignation letter.” See Waymo Ex. 21.

None of the Uber outside counsel working on this case got a copy of the Jacobs Letter. Neither did the two Uber in-house lawyers who were or are handling this case; Ms. Padilla testified that she did not send it to them. Nov. 29, 2017 Hr’g Tr. at 47:8-16. By late June, some attorneys from Boies Schiller and Flexner, also counsel in this matter for Uber, had discussions with other outside counsel and Ms. Padilla about issues arising from the internal investigation triggered by the Jacobs Materials. See Waymo Ex. 20, Entries 22-22(h).

So now you know the names of the attorneys involved, and not involved, according to Special Master Cooper at page 9 of his report. Apparently none of the actual counsel of record knew about it. I would have to assume, and I think the court will too, that this was intentional. It was so clever as to be obvious, or, as the British would say too clever by half.

U.S. Attorney Notifies Judge Alsup of the Jacobs Letter

To complete the procedural background, here is what happened next leading to the referral to the Special Master. Note that a U.S. Attorney taking action like this to notify a District Court Judge of a piece of evidence is extraordinary, especially to do so just before a trial. Judge Alsup said that he had never had such a thing happen in his courtroom. The U.S. Attorney for the Northern District of California is Brian Stretch. Obviously he was concerned about the fairness of Uber’s actions. In my opinion this was a good call by Stretch.

On November 22, 2017, the U.S. Attorney for the Northern District of California notified this Court of the Jacobs allegations and specifically referenced the account Jacobs put in his letter about the efforts to keep the Ottomotto acquisition secret. See ECF No. 2383. The Court on the same day issued an order disclosing receipt of the letter from the U.S. Attorney and asked the parties to inform the Court about the extent of any prior disclosure of the Jacobs allegations. See ECF Nos. 2260-2261. After continuing the trial date in light of the parties’ responses to that query, the Court on December 4, 2017, ordered the Special Master “to determine whether and to what extent, including the history of this action and both sides’ past conduct, defendants were required to earlier produce the Jacobs letter, resignation email, or settlement agreement, or required to provide any information in those documents in response to interrogatories, Court orders, or other agreements among counsel.” ECF No. 2334, 2341.

Special Master report at pgs. 9-10.

Special Master Cooper’s Recommended Ruling

Master Cooper found that the Richard Jacobs letter was responsive to two of Waymos’ requests to produce: RFP 29 and RFP 73. He rejected Uber’s argument that they were not responsive to any request, an argument that must have been difficult to make concerning a document this hot. They tried to make the argument seem more reasonable by saying that even if the letter was “generally relevant,” it was not responsive. Then they cite to cases standing for the proposition that you have no duty to produce relevant documents that you are not going to rely on, namely documents adverse to your position, unless they are specifically requested. Here is a quote of the conclusion of that argument from page 16 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.

Congress has specified in Rule 26(a)(ii) what documents must be unilaterally produced, and they are only those that a party “may use to support its claims or defenses.” Thus, a party cannot use a document against an adversary at trial that the party failed to disclose. However, Rule 26 very pointedly does not require the production of any documents other than those that a party plans to use “to support” its claims. Obviously, Uber is not seeking to use any of the documents at issue to support its claims. If Waymo believes that this rule should be changed, that is an issue they need to address with Congress, not with the Court.

Master Cooper did not address that argument because he found the documents were in fact both relevant and directly responsive to two of Waymo’s requests for production.

Uber’s attorney also made what I consider a novel argument that even if the Jacobs letter was found to be responsive, they still did have to produce it because, get this – it did not include any of the keywords that they agreed to use to search for documents in those categories. Incredible. What difference does that make, if they knew about the document anyway? Their client, Uber, specifically including in-house counsel, Ms. Padilla, clearly knew about it. The letter was to her. Are they suggesting that Uber did not know about the letter because some of their outside counsel did not know about it? Special Master Cooper must have had the same reaction as he disposed of this argument in short order at page 17 of his report:

Uber argues, that in some scenarios, reliance on search terms is enough to satisfy a party’s obligation to find responsive documents. See, e.g., T.D.P. v. City of Oakland, No, 16-cv-04132-LB, 2017 WL 3026925, at *5 (N.D. Cal. July 17, 2017) (finding certain search terms adequate for needs of case). But I find there are two main reasons why an exclusive focus on the use of search terms is inappropriate for determining whether the Jacobs Letter should have been produced in response to RFP 29 and RFP 73.

First, the parties never reached an agreement to limit their obligation to searching for documents to only those documents that hit on agreed-upon search terms. See Waymo Ex. 5 (Uber counsel telling Waymo during search-term negotiations that “Waymo has an obligation to conduct a reasonable search for responsive documents separate and apart from any search term negotiations”). (Emphasis added)

Second, Uber needed no such help in finding the Jacobs Materials. They were not stowed away in a large volume of data on some server. They were not stashed in some low-level employee’s files. Parties agree to use search terms and to look into the records of the most likely relevant custodians to help manage the often unwieldy process of searching through massive amounts of data. These methods are particularly called for when a party, instead of merely having to look for a needle in a haystack, faces the prospect of having to look for lots of needles in lots of haystacks. This needle was in Uber’s hands the whole time.

I would add that this needle was stuck deep into their hands, such that they were bleeding profusely. Maybe the outside attorneys did not see it, but Uber sure did and they had a duty to advise their attorneys. Uber’s attorneys would have been better off saving their powder for attacking the accuracy of the contents of the Jacobs letter and talking about the fast pace of discovery. They did that, but only as a short concluding argument, almost an afterthought. See page 16-19 of Uber’s Response to Waymo’s Submission to Special Master Cooper Re the Jacobs Documents.

Here is another theoretical argument that Uber’s lawyers threw up and Cooper’s practical response at pages 17-18 of his report:

Uber argues that it cannot be that the mere possession and knowledge of a relevant document must trigger a duty to scrutinize it and see if it matches any discovery requests. It asked at the December 12, 2017, hearing before the Special Master: Should every client be forced to instruct every one of its employees to turn over every e-mail and document to satisfy its discovery obligations to produce relevant and responsive documents? Must every head of litigation for every company regularly confronted with discovery obligations search their files for responsive documents, notwithstanding any prior agreement with the requesting party to search for responsive documents by the use of search terms?

It is not easy, in the abstract, to determine where the line regarding the scope of discovery search should be drawn. But this is not a case involving mere possession of some document. The facts in this case suggest that Ms. Padilla knew of the Jacobs Letter at the time Uber had to respond to discovery requests calling for its production—it certainly was “reasonably accessible.” Mr. Jacobs’ correspondence alleged systemic, institutionalized, and criminal efforts by Uber to conceal evidence and steal trade secrets, and not just as a general matter but also specifically involving the evidence and trade secrets at issue in this case—maybe the largest and most significant lawsuit Uber has ever faced. Ms. Padilla, Uber’s vice president and deputy general counsel for litigation and employment received the Jacobs Materials around the same time that discovery in this case was picking up and around the same time that the Court partially granted Waymo’s requested provisional relief. Shortly after that, Uber told federal prosecutors about the Jacobs allegations and then later sent them a copy of the letter. It sent the materials to outside counsel, including lawyers at MoFo that Uber hired to investigate the allegations. Two separate Uber board committees got involved, including the committee overseeing this case. Uber paid Mr. Jacobs $4.5 million, and his lawyer $3 million, to settle his claims.

The Federal Rules obligate a party to produce known, relevant and reasonably accessible material that on its face is likely to be responsive to discovery requests. RFP 29 and RFP 73 were served on Uber on May 9, just a few days after Ms. Padilla received the Jacobs Letter on May 5. Uber was therefore obligated to conduct a reasonable inquiry into those requests (and all others it received) to see if it had documents responsive to those requests and produce non-privileged responsive documents.

Special Master John Cooper concluded by finding that the “Jacobs letter was responsive to Waymo’s Request for Production No. 29 and Request for Production No. 73, and Uber should have produced it to Waymo in response to those requests.” It was beyond the scope of his assignment as Special Master to determine the appropriate remedy. Uber will now probably challenge this report and Judge William Alsup will rule.

Like everyone else, I expect Judge Alsup will agree with Cooper’s report. The real question is what remedy will he provide to Waymo and what sanctions, if any, will Judge Alsuop impose.

Conclusion

At the hearing on the request for a trial delay on November 28, 2017, Judge William Alsup reportedly told Uber’s in-house attorney, Angella Padilla:

Maybe you’re in trouble … This document should have been produced … You wanted this case to go to trial so that they didn’t have the document, then it turns out the U.S. attorney did an unusual thing. Maybe the guy [Jacobs] is a disgruntled employee but that’s not your decision to make, that’s the jury’s.

The Recorder (November 29, 2017).

In response to Angella Padilla saying that Jacobs was just a “extortionist” and the allegations in his letter were untrue. Judge Alsup reportedly responded by saying:

Here’s the way it looks … You said it was a fantastic BS letter with no merit and yet you paid $4.5 million. To someone like me and people out there, mortals, that’s a lot of money, that’s a lot of money. And people don’t pay that kind of money for BS and you certainly don’t hire them as consultant if you think everything they’ve got to contribute is BS. On the surface it looks like you covered this up.

The Recorder (November 29, 2017).

Judge William Alsup is one of the finest judges on the federal bench today. He is a man of unquestioned integrity and intellectual acumen. He is a Harvard Law graduate, class of 1971, and former Law clerk for Justice William O. Douglas, Supreme Court of the United States, 1971-1972.  How Judge Alsup reacts to the facts in Waymo LLC v. Uber Techs., Inc. now that he has the report of Special Master Cooper will likely have a profound impact on e-discovery and legal ethics for years to come.

No matter what actions Judge Alsup takes next, the actions of Uber and its attorneys in this case will be discussed for many years to come. Did the attorneys’ non-disclosure violate Rule of Professional Conduct 3.3, Candor Toward the Tribunal? Did they violate Rule 3.4, Fairness to Opposing Party and Counsel? Also, what about Rule 26(g) Federal Rules of Civil Procedure? Other rules of ethics and procedure? Did Uber’s actions violate the Sarbanes-Oxley Act? Other laws? Was it fraud?

Finally, and these are critical questions, did Uber breach their duty to preserve evidence when they knew that litigation was reasonably likely? Did their attorneys do so if they knew of these practices? What sanctions are appropriate for destruction of evidence under Rule 37(e) and the Court’s inherent authority? Should an adverse inference be imposed? A default judgment?

The preservation related issues are big questions that I suspect Judge Alsup will now address. These issues and his rulings, and that of other judges who will likely face the same issues soon in other cases, will impact many corporations, not just Uber. The use of software such as Wickr and Telegram is apparently already wide-spread. In what circumstances and for what types of communications may the use of such technologies place a company (or individual) at risk for severe sanctions in later litigation? Personally, I oppose intentionally ephemeral devices, where all information self-destructs, but, at the same time, I strongly support the right of encryption and privacy. It is a question of balance between openness and truth on the one hand, and privacy and security on the other. How attorneys and judges respond to these competing challenges will impact the quality of justice and life in America for many years to come.

 


%d bloggers like this: