Ethical Guidelines for Artificial Intelligence Research

November 7, 2017

The most complete set of AI ethics principles developed to date, the twenty-three Asilomar Principles, was created by the Future of Life Institute in early 2017 at its Asilomar Conference. Ninety percent or more of the conference attendees had to agree upon a principle for it to be accepted. The first five of the agreed-upon principles pertain to AI research issues.

Although all twenty-three principles are important, the research issues are especially time sensitive, because AI research is already well underway at hundreds, if not thousands, of different groups. There is a compelling current need to have some general guidelines in place for this research. AI ethics work should begin now. We still have a little time to develop guidelines for the advanced AI products and services expected in the near future, but as to research, the train has already left the station.

Asilomar Research Principles

Other groups are also concerned with AI ethics and regulation, including research guidelines. See the Draft Principles page of AI-Ethics.com, which lists principles from six different groups. The five research principles adopted at Asilomar are, however, a good place to start examining the regulation needed for research.

Research Issues

1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Principle One: Research Goal

The proposed first principle is good, but the wording? Not so much. "The goal of AI research should be to create not undirected intelligence, but beneficial intelligence." This is a double-negative English language mishmash that only an engineer could love. Here is one way this principle could be better articulated:

Research Goal: The goal of AI research should be the creation of beneficial intelligence, not undirected intelligence.

Researchers should develop intelligence that is beneficial for all of mankind. The Institute of Electrical and Electronics Engineers (IEEE) entitles its first general principle "Human Benefit." The Asilomar first principle is slightly different. It does not really say human benefit. Instead it refers to beneficial intelligence. I think the intent is to be more inclusive, to include all life on earth, all of earth. IEEE has that covered too, in its background statement of purpose to "Prioritize the maximum benefit to humanity and the natural environment."

Pure research, where raw intelligence is created just for the hell of it, with no intended helpful "direction" of any kind, should be avoided. "Because we can" is not a valid goal. Pure, raw intelligence, with neither good intent nor bad, is not the goal here. The research goal is beneficial intelligence. Asilomar is saying that undirected intelligence is unethical and should be avoided. Social values must be built into the intelligence. This is subtle, but important.

The restriction to beneficial intelligence is somewhat controversial, but the other side of this first principle is not: research should not be conducted to create intelligence that is hostile to humans. No one favors detrimental, evil intelligence. So, for example, the enslavement of humanity by Terminator AIs is not an acceptable research goal. I don't care how bad you think our current political climate is.

To be slightly more realistic, if you have a secret research goal of taking over the world, such as Max Tegmark imagines in The Tale of the Omega Team in his book Life 3.0, and we find out, we will shut you down (or try to). Even if it is all peaceful and well-meaning, and no one gets hurt, as Max visualizes, plotting world domination by machines is not a positive value. If you get caught researching how to do that, some of the more creative prosecuting lawyers around will find a way to send you to jail. We have all seen the cheesy movies, and so have the juries, so do not tempt us.

Keep a positive, pro-human, pro-Earth, pro-freedom goal for your research. I do not doubt that we will someday have AI smarter than our existing world leaders, perhaps sooner than many expect, but that does not justify a machine take-over. Wisdom comes slowly and is different from intelligence.

Still, what about autonomous weapons? Is research into advanced AI in this area beneficial? Are military defense capabilities beneficial? Pro-security? Is the slaughter of robots not better than the slaughter of humans? Could robots be more ethical at "soldiering" than humans? As attorney Matt Scherer, editor of a good blog, LawAndAI.com, and a Future of Life Institute member, has noted:

Autonomous weapons are going to inherently be capable of reacting on time scales that are shorter than humans’ time scales in which they can react. I can easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon. Eventually, having humans involved in the military conflict will be the equivalent of bringing bows and arrows to a battle in World War II.

At that point, you start to wonder where human decision makers can enter into the military decision making process. Right now there’s very clear, well-established laws in place about who is responsible for specific military decisions, under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, on what circumstances the nation is held accountable. That’s going to become much blurrier when the decisions are not being made by human soldiers, but rather by autonomous systems. It’s going to become even more complicated as machine learning technology is incorporated into these systems, where they learn from their observations and experiences in the field on the best way to react to different military situations.

Podcast: Law and Ethics of Artificial Intelligence (Future of Life, 3/31/17).

The question of beneficial or not can become very complicated, fast. Like it or not, military research into killer robots is already well underway, in both the public and private sectors. Kalashnikov Will Make an A.I.-Powered Killer Robot: What could possibly go wrong? (Popular Mechanics, 7/19/17); Congress told to brace for ‘robotic soldiers’ (The Hill, 3/1/17); US military reveals it hopes to use artificial intelligence to create cybersoldiers and even help fly its F-35 fighter jet – but admits it is ALREADY playing catch up (Daily Mail, 12/15/15) (a somewhat dated and sensationalistic article, perhaps, but an easy read with several videos).

AI weapons are a fact, but they should still be regulated, in the same way that we have regulated nuclear weapons since WWII. Tom Simonite, AI Could Revolutionize War as Much as Nukes (Wired, 7/19/17); Autonomous Weapons: an Open Letter from AI & Robotics Researchers.

Principle Two: Research Funding

The second principle, Research Funding, is more than an enforcement mechanism for the first, that you should only fund beneficial AI. It is also a recognition that ethical work requires funding too. This should be every lawyer’s favorite AI ethics principle. Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies. The principle then adds a list of four bullet-point examples.

How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? The goal of avoiding the creation of AI systems that can be hacked, easily or not, is a good one. If a hostile power can take over and misuse an AI for evil ends, then the built-in beneficence may be irrelevant. The example of a driverless car comes to mind, one that could be hacked and crashed as a perverse joy-ride, kidnapping or terrorist act.

The economic issues raised by the second example are very important: How can we grow our prosperity through automation while maintaining people’s resources and purpose? We do not want a system that only benefits the top one percent, or top ten percent, or whatever. It needs to benefit everyone, or at least try to. Also see Asilomar Principle Fifteen: Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

Yoshua Bengio, Professor of Computer Science at the University of Montreal, had this important comment to make on the Asilomar principles during an interview at the end of the conference:

I’m a very progressive person so I feel very strongly that dignity and justice mean wealth is redistributed. And I’m really concerned about AI worsening the effects and concentration of power and wealth that we’ve seen in the last 30 years. So this is pretty important for me.

I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously – I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.

Most everyone at the Asilomar Conference agreed with that sentiment, but I do not yet see a strong consensus in AI businesses. Time will tell if profit motives and greed will at least be constrained by enlightened self-interest. Hopefully capitalist leaders will have the wisdom to share with all of society the great wealth that AI is likely to create.

How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? The legal example is also a good one, with the primary tension we have seen so far between fair and efficient. Policing only high-crime areas might well be efficient, at least for reducing some types of crime, but would it be fair? Do we want to embed racial profiling into our AI? Neighborhood slumlord profiling? Religious or ethnic profiling? No. Existing law prohibits that, and for good reason. Still, predictive policing is already a fact of life in many cities, and we need to be sure it has proper legal and ethical regulation.

We have seen the tension between “speedy” and “inexpensive” on the one hand, and “just” on the other, in Rule 1 of the Federal Rules of Civil Procedure and in e-discovery. When active machine learning was applied, a technical solution to these competing goals was attained. The predictive coding methods we developed allowed for both precision (“speedy” and “inexpensive”) and recall (“just”). Hopefully this success can be replicated in other areas of the law where machine learning is under proportional control by experienced human experts.
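
For readers who want to see the metrics behind that claim, here is a minimal sketch in Python, using hypothetical counts rather than data from any real review, of how precision and recall are computed in document review:

    def precision(true_positives: int, false_positives: int) -> float:
        """Of the documents produced, what fraction were actually relevant?"""
        return true_positives / (true_positives + false_positives)

    def recall(true_positives: int, false_negatives: int) -> float:
        """Of all relevant documents, what fraction did the review find?"""
        return true_positives / (true_positives + false_negatives)

    # Hypothetical review: 10,000 documents marked responsive, of which 9,000
    # are truly relevant; another 3,000 relevant documents were missed.
    tp, fp, fn = 9_000, 1_000, 3_000
    print(f"precision: {precision(tp, fp):.0%}")  # 90% -- the "speedy, inexpensive" side
    print(f"recall:    {recall(tp, fn):.0%}")     # 75% -- the "just" side

Driving both numbers up at the same time is the hard part; that is the balance the predictive coding methods mentioned above aim to strike.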

The final example given is much more troubling: What set of values should AI be aligned with, and what legal and ethical status should it have? Whose values? Who is to say what is right and wrong? This is easy in a dictatorship, or a uniform, monochrome culture (sea of white dudes), but it is very challenging in a diverse democracy. This may be the greatest research funding challenge of all.

Principle Three: Science-Policy Link

This principle is fairly straightforward, but will in practice require a great deal of time and effort to be done right. A constructive and healthy exchange between AI researchers and policy-makers is necessarily a two-way street. It first of all assumes that policy-makers, a group that in most countries includes government regulators, not just industry, have a valid place at the table. It assumes some form of government regulation. That is anathema to some in the business community who assume (falsely, in our opinion) that all government is inherently bad and essentially has nothing to contribute. The countervailing view, of overzealous government controllers who just want to jump in, uninformed, and legislate, is also discouraged by this principle. We are talking about a healthy exchange.

It does not take an AI to know that this kind of give and take and information sharing will involve countless meetings. It will also require a positive, healthy attitude between the two groups. If it gets bogged down into an adversarial relationship, you can multiply the cost of compliance (and number of meetings) by two or three. If it goes to litigation, we lawyers will smile in our tears, but no one else will. So researchers, you are better off not going there. A constructive and healthy exchange is the way to go.

Principle Four: Research Culture

The need for a good culture applies in spades to the research community itself. The Fourth Principle states: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. This favors the open source code movement for AI, but runs counter to the trade-secret business models of many corporations. See e.g.: OpenAI.com; Deep Mind Open Source; Liam, ‘One machine learning model to rule them all’: Google open-sources tools for simpler AI (ZDNet, 6/20/17).

This tension is likely to increase as multiple parties get close to a big breakthrough. The successful efforts for open source now, before superintelligence seems imminent, may help keep the research culture positive. Time will tell, but if not, there could be trouble all around, and the promise of full employment for litigation attorneys.

Principle Five: Race Avoidance

The Fifth Principle is a tough one, but very important: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Moving fast and breaking things may be the mantra of Silicon Valley, but the impact of bad AI could be catastrophic. Bold is one thing, but reckless is quite another. In this area of research there may not be leisure for constant improvements to make things right. See HackerWay.org.

Not only will there be legal consequences, mass liability, for any group that screws up, but the PR blow alone from a bad AI mistake could destroy most companies. Loss of trust may never be regained by a wary public, even if Congress and trial lawyers do not overreact. Sure, move fast, but not so fast that you become unsafe. Striking the right balance is going to require an acute technical and ethical sensitivity. Keep it safe.

Last Word

AI ethics is hard work, but well worth the effort. The risks and rewards are very high. The place to start this work is to talk about the fundamental principles and try to reach consensus. Everyone involved in this work is driven by a common understanding of the power of the technology, especially artificial intelligence. We all see the great changes on the horizon and share a common vision of a better tomorrow.

During an interview at the end of the Asilomar conference, Dan Weld, Professor of Computer Science, University of Washington, provided a good summary of this common vision:

In the near term I see greater prosperity and reduced mortality due to things like highway accidents and medical errors, where there’s a huge loss of life today.

In the longer term, I’m excited to create machines that can do the work that is dangerous or that people don’t find fulfilling. This should lower the costs of all services and let people be happier… by doing the things that humans do best – most of which involve social and interpersonal interaction. By automating rote work, people can focus on creative and community-oriented activities. Artificial Intelligence and robotics should provide enough prosperity for everyone to live comfortably – as long as we find a way to distribute the resulting wealth equitably.

Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page with its collection of the Draft Principles underway by other AI ethics groups around the world. We added a new one that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). It listed six proposed principles from the talented Nicolas Economou, the CEO of the electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles in conferences that it will sponsor in the future. This is explained on the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Then, once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See, e.g., AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI ethics principles comes from Nicolas Economou, the CEO of the electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific and legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles, as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out, as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See, e.g., TARcourse.com. The principles have obvious applications in all aspects of society, not just the law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and the article’s mention of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of physicist David Bohm and philosopher Martin Buber. That is one reason we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.

_____

Good New 33-Point e-Discovery Checklist From Miami

October 1, 2017

The United States District Court for the Southern District of Florida is now revising its Local Rule 16.1 on Pretrial Procedure in Civil Actions. (In the interests of full disclosure, I am a member of that Court, but am not on the Committee that prepared the proposed revisions.) The revisions pertain to Rule 16.1(b), Scheduling Conference and Order. The amendments will go into effect on December 1, 2017. These amendments include an excellent new 33-point e-discovery checklist.

The main revision in the local rules is the addition of a new subsection (K) under 16.1(b)(2) Conference Report that lists what must be included in the attorneys’ report:

(K) any issues about: (i) disclosure, discovery, or preservation of electronically stored information, including the form or forms in which it should be produced; (ii) claims of privilege or of protection as trial-preparation materials, including — if the parties agree on a procedure to assert those claims after production — whether to ask the court to include their agreement in an order under Federal Rule of Evidence 502; and (iii) when the parties have agreed to use the ESI Checklist available on the Court’s website (www.flsd.uscourts.gov), matters enumerated on the ESI Checklist;

This rule revision and checklist are a fine addition to the local rules. My congratulations to the ad hoc committee that prepared them. My only criticism of the rule change is that it does not go far enough on Federal Rule of Evidence 502. A 502(d) order should be entered in every case where there is a production of ESI. It should be a standing order and follow the standard language used by Judge Andrew Peck and many others, including my law firm:

1. The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
2. Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

My only criticism of the ESI Checklist itself is its use of vague bullet points instead of numbering. With that one exception, other courts around the country should consider using the 33-point ESI Checklist for their own local rules. Many already have their own similar checklists, of course, but this is the latest and one of the best. It is complete, but not overly long and complicated.

Checklist Use Is Discretionary

The first thing to note about this new Local Rule 16.1(b)(2)(K) is that it does not require attorneys to use or follow the ESI Checklist in their discovery plan discussions. Perhaps future versions of the rule will require its use, but I agree with the Ad Hoc Committee’s thinking here to start with discretionary use. There are still plenty of Milton-type lawyers in Florida, and elsewhere, who only think of discovery as near-endless, mind-numbing exercises of looking at boxes of paper. IMO there are way too many of these guys, young and old, but the clients who retain them seem to love them, so what can you do? They do often seem to win at the end, as all Office Space fans know. I once knew a multi-zillionaire attorney in Miami in whose office you had to clear a path through all of the paper just to walk to his desk.

If, however, the parties are cool and do agree to use the ESI Checklist, then they are required by the new local rule to include the Checklist points in their Conference Report. It is unclear whether they must include all 33 items in their Report, which, by the way, is supposed to be a Joint Report, but I predict that most will. The Checklist does, however, include an introductory sentence that justifies partial use: “The usefulness of any particular topic may depend on the nature and complexity of the matter.”

I also predict that some judges will strongly encourage the use of the Checklist, the way that only judges can do. It may even become an Order when the failure to use it causes time-consuming disputes and other issues that could have been avoided by timely discussion of the checklist points. In most complex cases especially, attorneys would be well advised to agree to this list and not hide their heads in the sands of wishful thinking. Better to be realistic and spend the time necessary for the proper use of the ESI Checklist. The Checklist is an excellent way to timely and efficiently comply with the rules.

Preparing for 26(f) conferences and talking about all of the items on the list may increase costs somewhat upfront, but this expense will almost certainly pay substantial cost-saving dividends down the road. Attorneys and their clients should not be penny wise and pound foolish. You can have your cake and eat it too. Case preparation does not drive up the costs of litigation. It allows you to win, even in the close cases, or at least to mitigate damages. The failure to prepare is not only a sure way to lose, but also a sure way to generate high fees from motion practice over discovery. Better to avoid and contain the disputes than to hope they will never happen. Hoping for the best, including hoping for incompetence by opposing counsel, is not what lawyers are paid to do.

ESI Checklist

This blog will next explore and make some comments on the 33-point checklist. I begin by reproducing the checklist itself below in somewhat altered form. I have not revised any of the words in the checklist, but I have added numbering not found in the original to facilitate discussion (Roman numerals for sections and letters for items). So it is fair to say my revisions are not of actual content, but of metadata only. I also add after each item a personal comment, put in parentheses, italicized and in blue font, so as to be very clear on what is Losey and what is not.

If you want to see the original, bullet points and all, the PDF version of the Checklist and Rules is published on the Court’s website. Go to the end of the document (currently pages 78-79) to find the ESI Checklist.

United States District Court
Southern District of Florida

Checklist for Rule 26(f) Conference
Regarding Electronically Stored Information (“ESI”)
(Original bullet points changed to letters; Losey comments, put in parentheses after each list item, are shown in italics and blue font)

In connection with the Federal Rule of Civil Procedure 26(f) conference and in preparing the Local Rule 16.1(b)(2) conference report, the Court encourages the use of the following checklist. The usefulness of any particular topic may depend on the nature and complexity of the matter.

I. Preservation

A. The ranges of creation or receipt dates for any ESI to be preserved. (In almost every case there is a date before which the ESI is not relevant. In many there is also an after date. Disagreement between parties on date range should be resolved by phased discovery and reservation of rights to object after the first phase is completed.)

B. The description of ESI from sources that are not reasonably accessible because of undue burden or cost and that will not be reviewed for responsiveness or produced, but that will be preserved in accordance with Federal Rule of Civil Procedure 26(b)(2)(B). (Backup ESI is almost always so protected, unless it has the only copy of important information.)

C. The description of ESI from sources that: (a) the party believes could contain relevant information; but (b) has determined, under the proportionality factors, is not discoverable and should not be preserved. (The keyword here is “could.” Maybe it has relevant information, maybe it does not. Also important in determining discoverability under governing proportionality rules is the “importance” of the information to material issues of fact in dispute. You must consider probative value. In my experience with big data, most “merely relevant” information is a waste of time. There is too little probative value in most of it to even try to capture it all.)

D. Whether to continue any interdiction of any document-destruction program, such as ongoing erasures of e-mails, voicemails, and other electronically recorded material. (Typically the key custodians identified should have their email auto-delete functions turned off, and their voice mail too, but as to them only, not the whole enterprise. Plus, I cannot recall voice mail ever making a difference in a case. It typically has low probative value.)

E. The number and names or general job titles or descriptions of custodians for whom ESI will be preserved (e.g., “HR head,” “scientist,” “marketing manager”). (This is the broad list of key custodians. They are often divided in classes by probable importance of their ESI to the outcome of the case. Although all classes may be preserved, only the most important are actually reviewed, at least at first.)

F. The list of systems, if any, that contain ESI not associated with individual custodians and that will be preserved, such as enterprise databases. (A list not associated with custodians usually refers to department type servers where a number of people in the department could store documents, to document management systems, or to general databases, such as payroll.)

G. Any disputes related to scope or manner of preservation. (You should get these issues resolved asap. Typically you would want to preserve until the issue is resolved, unless the expense is too great or the other side’s position is too unreasonable. But even then you run some risk, so quick adjudication of issues like this is important.)

II. Liaison

A. The identity of each party’s e-discovery liaison, who will be knowledgeable about and responsible for each party’s ESI. (I always like to see the role and name that I invented back in 2006 – “e-discovery liaison” – used by a court. One of my first e-Discovery “Liaisons” is now a U.S. Magistrate Judge in the Southern District, and a very good one at that, especially in e-discovery.)

III. Informal Discovery About Location and Types of Systems

A. Identification of systems from which discovery will be prioritized (e.g., e-mail, finance, HR systems). (Typically the communications between people, the contemporaneous writings, are the ESI with the highest probative value.)

B.  Descriptions and location of systems in which potentially discoverable information is stored. (Typically this means a description of all IT systems where relevant ESI might be stored, and not just the high value targets like communications. Document management systems and network drives might also be listed here.)

C.  How potentially discoverable information is stored. (This is a follow-up on the prior checklist item that describes how the ESI is stored. Usually it is stored manually at the discretion of listed custodians. They either save the documents or email or not. Where they save it may also be within their control. They may save it on personal thumb drives, or they may print it out to store. You have to interview the custodians to find out how they stored it. Sometimes the potentially discoverable information is stored automatically by other software systems, such as payroll systems, and sometimes the location is predetermined.)

D.  How discoverable information can be collected from systems and media in which it is stored. (Usually it is collected by copying. That needs to be done carefully so that metadata is not changed. Not hard to do, but IT expertise is usually required to do it correctly. Forensic collection is usually not necessary, especially collection of double-deleted files and unallocated space, as such ESI is usually protected under 26(b)(2)(B).)
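
To illustrate the careful-copying point just made, here is a minimal Python sketch (the file paths are hypothetical, and a real collection should use purpose-built collection software, not this): Python’s standard shutil.copy2 carries a file’s timestamps along with its contents, while a plain copy does not preserve them.

    import shutil
    from pathlib import Path

    src = Path("custodian_docs/contract.docx")  # hypothetical source document
    dst = Path("collection/contract.docx")      # hypothetical collection copy

    src.parent.mkdir(parents=True, exist_ok=True)
    src.write_text("stand-in content for a real document")
    dst.parent.mkdir(parents=True, exist_ok=True)

    # copy2 copies the file plus its access/modification times; copy() would not.
    shutil.copy2(src, dst)
    print(src.stat().st_mtime, dst.stat().st_mtime)  # the two times should match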

IV. Proportionality and Costs

A.  The amount and nature of the claims being made by either party. (The monetary value should not be exaggerated by plaintiffs, but usually they feel the need to do so for posturing purposes and other reasons. Suggest this impediment be avoided by disclaimers and reservation of rights. Beyond amount issues, the “nature” of the claims should be carefully understood and discussed with an aim to identifying the actual disputed facts. Discovery should always be focused and have evidentiary value. It is never an end in itself, or at least should not be. Also, do not forget that subject matter discovery is no longer permitted under revised Rule 26(b)(1). It is now limited to claims and defenses that have actually been raised in the case.)

B.  The nature and scope of burdens associated with the proposed preservation and discovery of ESI. (Try to include actual monetary burden expected, usually with a range, but restrain the urge to exaggerate. Spend time to do this right and get into some detailed metrics. Consult an expert where necessary, but never b.s. the judge. They do not like that and will remember you.)

C.  The likely benefit of the proposed discovery. (The requesting party should spell it out. Fishing expeditions are not permitted. The old “reasonably calculated” jargon is gone from new Rule 26(b)(1), at least as a definition of scope, and that change voids a lot of case-law on the subject.)

D.  Costs that the parties will share to reduce overall discovery expenses, such as the use of a common electronic-discovery vendor or a shared document repository, or other cost saving measures. (In my experience this is very rare. Typically it only makes sense in very big cases and/or between co-defendants or co-plaintiffs. There are usually too many confidentiality issues to share a vendor with opposing parties.)

E.  Limits on the scope of preservation or other cost-saving measures. (Cost savings should always be considered. This is required of all parties, attorneys and judges under the 2015 revision to Rule 1, FRCP. So too is “speedy” and “just.”)

F.  Whether there is relevant ESI that will not be preserved in accordance with Federal Rule of Civil Procedure 26(b)(1), requiring discovery to be proportionate to the needs of the case. (Typically the answer here is yes, or should be, and some discussion may be required. Preservation is required by law to be reasonable, not exhaustive or perfect. Reasonable means proportionate. Moreover, if ESI is not relevant under the proportionate definitions of revised Rule 26(b)(1) then it does not have to be preserved because only relevant ESI need be preserved.)

V. Search

A.  The search method(s), including specific words or phrases or other methodology, that will be used to identify discoverable ESI and filter out ESI that is not subject to discovery. (Please people, exchanging keywords should be just the beginning, not the whole process. It is only one of many possible search methods. Use the Hybrid Multimodal method, which all readers of my blog and books should know pretty well by now.)
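
To make that point concrete, here is a toy Python sketch, with an invented three-document corpus, of why a literal keyword filter is only a first step: it misses relevant documents that happen to use different words.

    # Toy corpus: document 2 is relevant but says "transferred," not "payment."
    docs = {
        1: "The payment was wired to the offshore account.",
        2: "Funds were transferred abroad last week.",
        3: "Lunch menu for the office holiday party.",
    }

    keyword = "payment"
    hits = [doc_id for doc_id, text in docs.items() if keyword in text.lower()]
    print(hits)  # [1] -- the keyword filter misses document 2 entirely

Methods that rank documents by learned relevance, as in predictive coding, are one way to catch what the literal terms miss.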

B.  The quality-control method(s) the producing party will use to evaluate whether a production is missing relevant ESI or contains substantial amounts of irrelevant ESI. (The problem of missing relevant ESI is the problem of Recall, whereas the problem of too much irrelevant ESI is the problem of Precision, and also, to some extent, of duplication. All good electronic document review experts have a number of different quality control techniques to improve recall and precision. Not an expert? Then perhaps you should consult with one in your firm, or if you have none (pity), then ask your e-discovery vendor.)
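
One common quality-control technique on the recall side is to pull a random sample from the “discard pile,” the documents coded irrelevant, review the sample by hand, and project how many relevant documents were missed. A minimal Python sketch, with hypothetical numbers only:

    import random

    discard_pile = list(range(200_000))  # hypothetical: 200,000 docs coded irrelevant

    rng = random.Random(42)                   # fixed seed so the sketch is repeatable
    sample = rng.sample(discard_pile, 1_500)  # random sample pulled for manual review

    relevant_in_sample = 6  # hypothetical result of reviewing the sample by hand
    estimated_missed = relevant_in_sample / len(sample) * len(discard_pile)
    print(f"estimated relevant documents missed: {estimated_missed:,.0f}")  # 800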

VI. Phasing

A.  Whether it is appropriate to conduct discovery of ESI in phases. (Yes. It is a great way to resolve disagreements by postponing excessive demands for second or third phases. Chances are these other phases will not be necessary because all that is needed is produced in the first phase. Alternatively, the producing party might agree to them if the first production makes their necessity obvious.)

B.  Sources of ESI most likely to contain discoverable information and that will be included in the first phases of Federal Rule of Civil Procedure 34 document discovery. (Here is where the producing party lists what sources they will search, most often communication ESI such as Outlook Exchange email servers.)

C.  Sources of ESI less likely to contain discoverable information from which discovery will be postponed or not reviewed. (These are sources that are unlikely to have ESI with strong probative value, if any, but might. There may never be a need to review these sources. As a compromise where there is disagreement, put these sources in a later phase. After the first phase is completed it may not be necessary to look for more evidence in these secondary sources.)

D.  Custodians (by name or role) most likely to have discoverable information and whose ESI will be included in the first phases of document discovery. (Here is where you list the key custodians. In most lawsuits all you will ever need to search is the contents of the mailboxes of these key witnesses: the emails, attachments, calendar items, etc. in their email system.)

E.  Custodians (by name or role) less likely to have discoverable information from whom discovery of ESI will be postponed or avoided. (These are secondary custodians who might possibly have important information, but it is less likely. Typically, if you cannot resolve disagreements on importance, you agree to postpone the disputed custodians to second phases.)

F.  The time period during which discoverable information was most likely to have been created or received. (Again, limit the review by timing and if you cannot agree, then postpone disputed additional times for second phases.)

VII. Production

A.  The formats in which structured ESI (database, collaboration sites, etc.) will be produced. (Typically database production is done by spreadsheet reports, or sometimes native. The person in charge of the structured ESI should know.)
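
As one toy illustration of the “spreadsheet reports” point, here is a minimal Python sketch (the table, query, and file name are all hypothetical) of exporting responsive rows from a small database into a CSV report:

    import csv
    import sqlite3

    # Hypothetical structured source: a tiny in-memory payroll database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE payroll (employee TEXT, pay_date TEXT, amount REAL)")
    conn.execute("INSERT INTO payroll VALUES ('J. Doe', '2017-09-15', 4200.00)")

    # Produce the responsive rows as a spreadsheet-style report (CSV).
    rows = conn.execute("SELECT * FROM payroll WHERE pay_date >= '2017-01-01'")
    with open("payroll_report.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["employee", "pay_date", "amount"])
        writer.writerows(rows)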

B.  The formats in which unstructured ESI (e-mail, presentations, word processing, etc.) will be produced. (Producing parties should follow the requesting party’s format request most of the time, except if they ask for paper production. Paper production is ridiculous and expensive for ESI. Otherwise format should not matter. It is, or should be, a non-issue.)

C.  The extent, if any, to which metadata will be produced and the fields of metadata to be produced. (A non-issue too. If metadata is part of the document, then produce it. Your vendor can give you a standard list.)

D.  The production format(s) that ensure(s) that any inherent searchability of ESI is not degraded when produced. (This is a must. In my court it can be sanctionable to change an electronic document so that it is no longer searchable.)

VIII. Privilege

A.  How any production of privileged or work-product protected information will be handled. (Of course you do not produce it, but you log it.)

B.  Whether the parties can agree on alternative ways to identify documents withheld on the grounds of privilege or work product to reduce the burdens of such identification. (Look for ways to streamline your privilege log. For instance, under another Southern District local rule, you never have to log communications made after suit was filed.)

C.  Whether the parties will enter into a Federal Rule of Evidence 502(d) stipulation and order that addresses inadvertent or agreed production. (You should always have a 502(d) Order whenever you are making an electronic production. Mistakes happen and this is the closest thing we have in the law to a fail-safe. There is no valid reason to oppose this order. Clear enough for you?)