Six Sets of Draft Principles Are Now Listed at AI-Ethics.com

October 8, 2017

Arguably the most important information resource of AI-Ethics.com is the page collecting the Draft Principles now being developed by AI Ethics groups around the world. We added a new set that came to our attention this week from an ABA article, A ‘principled’ artificial intelligence could improve justice (ABA Legal Rebels, October 3, 2017). It lists six proposed principles from the talented Nicolas Economou, CEO of the electronic discovery search company H5.

Although Nicolas Economou is an e-discovery search pioneer and past Sedona participant, I do not know him. I was, of course, familiar with H5’s work as one of the early TREC Legal Track pioneers, but I had no idea Economou was also involved with AI ethics. Interestingly, I recently learned that another legal search expert, Maura Grossman, whom I do know quite well, is also interested in AI ethics. She is even teaching a course on AI ethics at Waterloo. All three of us seem to have independently heard the Siren’s song.

With the addition of Economou’s draft Principles we now have six different sets of AI Ethics principles listed. Economou’s new list is added at the end of the page and reproduced below. It presents a decidedly e-discovery view with which all readers here are familiar.

Nicolas Economou, like many of us, is an alumnus of The Sedona Conference. His sixth principle is based on what he calls thoughtful, inclusive dialogue with civil society. Sedona was the first legal group to try to incorporate the principles of dialogue into continuing legal education programs. That is what first attracted me to The Sedona Conference. AI-Ethics.com intends to incorporate dialogue principles into the conferences it will sponsor in the future. This is explained on the Mission Statement page of AI-Ethics.com.

The mission of AI-Ethics.com is threefold:

  1. Foster dialogue between the conflicting camps in the current AI ethics debate.
  2. Help articulate basic regulatory principles for government and industry groups.
  3. Inspire and educate everyone on the importance of artificial intelligence.

First Mission: Foster Dialogue Between Opposing Camps

The first, threshold mission of AI-Ethics.com is to go beyond argumentative debates, formal and informal, and move to dialogue between the competing camps. See, e.g., Bohm Dialogue, Martin Buber and The Sedona Conference. Once this conflict is resolved, we will all be in a much better position to attain the other two goals. We need experienced mediators, dialogue specialists and judges to help us with that first goal. Although we already have many lined up, we could always use more.

We hope to use skills in both dialogue and mediation to transcend the polarized bickering that now tends to dominate AI ethics discussions. See, e.g., AI Ethics Debate. We need to move from debate to dialogue, and we need to do so fast.

_____

Here is the new segment we added to the Draft Principles page.

6. Nicolas Economou

The latest attempt at articulating AI Ethics principles comes from Nicolas Economou, CEO of the electronic discovery search company H5. Nicolas has a lot of experience with legal search using AI, as do several of us at AI-Ethics.com. In addition to his work with legal search and H5, Nicolas is involved in several AI ethics groups, including the AI Initiative of the Future Society at Harvard Kennedy School and the Law Committee of the IEEE’s Global Initiative for Ethical Considerations in AI.

Nicolas Economou has obviously been thinking about AI ethics for some time. He provides a solid scientific and legal perspective based on his many years of supporting lawyers and law firms with advanced legal search. Economou has developed six principles, as reported in an ABA Legal Rebels article dated October 3, 2017, A ‘principled’ artificial intelligence could improve justice. (Some of the explanations have been edited out, as indicated below. Readers are encouraged to consult the full article.) As you can see, the explanations given here were written for consumption by lawyers and pertain to e-discovery. They show the application of the principles in legal search. See, e.g., TARcourse.com. The principles have obvious applications in all aspects of society, not just the Law and predictive coding, so their value goes beyond the legal applications mentioned here.

Principle 1: AI should advance the well-being of humanity, its societies, and its natural environment. The pursuit of well-being may seem a self-evident aspiration, but it is a foundational principle of particular importance given the growing prevalence, power and risks of misuse of AI and hybrid intelligence systems. In rendering the central fact-finding mission of the legal process more effective and efficient, expertly designed and executed hybrid intelligence processes can reduce errors in the determination of guilt or innocence, accelerate the resolution of disputes, and provide access to justice to parties who would otherwise lack the financial wherewithal.

Principle 2: AI should be transparent. Transparency is the ability to trace cause and effect in the decision-making pathways of algorithms and, in hybrid intelligence systems, of their operators. In discovery, for example, this may extend to the choices made in the selection of data used to train predictive coding software, of the choice of experts retained to design and execute the automated review process, or of the quality-assurance protocols utilized to affirm accuracy. …

Principle 3: Manufacturers and operators of AI should be accountable. Accountability means the ability to assign responsibility for the effects caused by AI or its operators. Courts have the ability to take corrective action or to sanction parties that deliberately use AI in a way that defeats, or places at risk, the fact-finding mission it is supposed to serve.

Principle 4: AI’s effectiveness should be measurable in the real-world applications for which it is intended. Measurability means the ability for both expert users and the ordinary citizen to gauge concretely whether AI or hybrid intelligence systems are meeting their objectives. …

Principle 5: Operators of AI systems should have appropriate competencies. None of us will get hurt if Netflix’s algorithm recommends the wrong dramedy on a Saturday evening. But when our health, our rights, our lives or our liberty depend on hybrid intelligence, such systems should be designed, executed and measured by professionals with the requisite expertise. …

Principle 6: The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society. …  The societal dialogue relating to the use of AI in electronic discovery would benefit from being even more inclusive, with more forums seeking the active participation of political scientists, sociologists, philosophers and representative groups of ordinary citizens. Even so, the realm of electronic discovery sets a hopeful example of how an inclusive dialogue can lead to broad consensus in ensuring the beneficial use of AI systems in a vital societal function.

Nicolas Economou believes, as we do, that an interdisciplinary approach, which has been employed successfully in e-discovery, is also the way to go for AI ethics. Note his use of the word “dialogue” and the article’s mention of The Sedona Conference, which pioneered the use of this technique in legal education. We also believe in the power of dialogue and have seen it in action in multiple fields. See, e.g., the work of the physicist David Bohm and the philosopher Martin Buber. That is one reason we propose the use of dialogue in future conferences on AI ethics. See the AI-Ethics.com Mission Statement.
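
One editorial aside before moving on, prompted by Principle 4: in legal search, “measurability” usually takes a concrete statistical form, namely recall and precision estimated from a random validation sample that lawyers code by hand. The short Python sketch below is our illustration of that idea, not Economou’s, and the function and parameter names (sample_precision_recall, human_coding, sample_size) are hypothetical.

```python
# Editorial sketch only (not from Economou's article, and not any vendor's
# protocol): draw a random validation sample, have lawyers code it, and
# compute point estimates of precision and recall from that sample.
import random

def sample_precision_recall(predicted, human_coding, sample_size=400, seed=1):
    """predicted: dict doc_id -> bool (machine says relevant?).
    human_coding: callable doc_id -> bool (lawyer's call for a sampled doc).
    Returns (precision, recall) point estimates from the random sample."""
    rng = random.Random(seed)
    sample = rng.sample(sorted(predicted), min(sample_size, len(predicted)))
    tp = fp = fn = 0
    for doc_id in sample:
        machine, lawyer = predicted[doc_id], human_coding(doc_id)
        if machine and lawyer:
            tp += 1
        elif machine and not lawyer:
            fp += 1
        elif not machine and lawyer:
            fn += 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of what was flagged, how much was relevant
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of what was relevant, how much was found
    return precision, recall
```

Reporting results in these terms is what lets both expert users and ordinary citizens gauge concretely whether the system met its objectives, which is the heart of the principle.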

_____

New Homework Added to the TAR Course and a New Video Added to AI-Ethics

September 3, 2017

We have added a homework assignment to Class Sixteen of the TAR Course. This is the next-to-last class in the course. Here we cover the eighth step of our eight-step routine, Phased Production. I share the full homework assignment below for those not yet familiar with our instructional methods, especially our take on homework. Learning is, or should be, a life-long process.

But before we get to that, I want to share the new video added to the AI-Ethics.com website at the end of the Intro/Mission page. Here I articulate the opinion of many in the AI world that an interdisciplinary team approach is necessary for the creation of ethical codes to regulate artificial intelligence. This team approach has worked well for electronic discovery, and I am convinced it will work for AI Law as well. AI Ethics is one of the most important issues facing humanity today. It is way too important to leave to lawyers and government regulators alone. It is also way too important to leave to AI coders and professors to improvise on their own. We have to engage in true dialogue and collaborate.

______

Now back to the more mundane world of homework and learning the Team’s latest process for the application of machine learning to find evidence for trial. Here is the new homework assignment for Class Sixteen of the TAR Course.

____

Go on to the Seventeenth and last class, or pause to do this suggested “homework” assignment for further study and analysis.

SUPPLEMENTAL READING: It is important to have a good understanding of privilege and work-product protection. The basic U.S. Supreme Court case in this area is Hickman v. Taylor, 329 U.S. 495 (1947). Another key case to know is Upjohn Co. v. United States, 449 U.S. 383 (1981). For an authoritative digest of case law on the subject with an e-discovery perspective, download and study The Sedona Conference Commentary on Protection of Privileged ESI 2015.pdf (Dec. 2015).

EXERCISES: Study Judge Andrew Peck’s form 502(d) order.  You can find it here. His form order started off as just two sentences, but he later added a third sentence at the end:

The production of privileged or work-product protected documents, electronically stored information (“ESI”) or information, whether inadvertent or otherwise, is not a waiver of the privilege or protection from discovery in this case or in any other federal or state proceeding. This Order shall be interpreted to provide the maximum protection allowed by Federal Rule of Evidence 502(d).
Nothing contained herein is intended to or shall serve to limit a party’s right to conduct a review of documents, ESI or information (including metadata) for relevance, responsiveness and/or segregation of privileged and/or protected information before production.

Do you know the purpose of this additional sentence? Why might someone oppose a 502(d) Order? What does that tell you about them? What does that tell the judge about them? My law firm has been opposed a few times, but we have never failed. Well, there was that one time where both sides agreed and the judge would not enter the stipulated order, saying it was not necessary because he would provide such protection anyway. So, mission accomplished all the same.

Do you think it is overly cautious for us to recommend that a 502(d) Order be entered in every case where there is ESI review and production? Think that some cases are too small and too easy to bother with one? That it is o.k. to just have a claw-back agreement? Well, take a look at this opinion and you may well change your mind. Irth Solutions, LLC v. Windstream Communications, LLC (S.D. Ohio, E. Div., 8/2/17). Do you think this was a fair decision? What do you think about the partner putting all of the blame for the mistaken production of privileged ESI on the senior associate (a seven-year lawyer)? What do you think of the senior associate who in turn blamed the junior associate (a two-year lawyer)? The opinion does not state who signed the Rule 26(g) response to the request to produce. Do you think that should matter? By the way, having been a partner in a law firm since at least 1984, I think this kind of blame-game behavior was reprehensible!

Students are invited to leave a public comment below. Insights that might help other students are especially welcome. Let’s collaborate!

 


How the Hacker Way Guided Me to e-Discovery, then AI Ethics

August 13, 2017

This new ten-minute video on the Hacker Way and Legal Practice Management was added to my Hacker Way and AI-Ethics pages this week. It explains how one led to the other. It also provides more insight into why I think the major problems of e-discovery have now been solved, with a shout-out to all e-discovery vendors and the team approach of lawyers working with them. This interdisciplinary team approach is how we overcame the e-discovery challenges and, if my theory is correct, it will also allow us to meet the regulatory challenges surrounding artificial intelligence. Hopefully my video disclosures here will provide useful insights into how the Hacker Way management credo used by most high-tech companies can also be followed by lawyers.



E-DISCOVERY IS OVER: The big problems of e-discovery have now all been solved. Crises Averted. The Law now has bigger fish to fry.

July 30, 2017

Congratulations!

We did it. We survived the technology tsunami. The time of great danger to Law and Justice from e-Discovery challenges is now over. Whew! A toast of congratulations to one and all.

From here on it is just a matter of tweaking the principles and procedures that we have already created, plus never-ending education, a good thing, and politics, not good, but inevitable. The team approach of lawyers and engineers (vendors) working together has been proven effective, so have the new Rules and case law, and so too have the latest methods of legal search and document review.

I realize that many will be tempted to compare my view to that of a famous physicist in 1894 who declared:

There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.

Lord Kelvin (1824-1907)

Then along came Einstein. Many attribute this humorously mistaken assertion to Lord Kelvin, also known as William Thomson, 1st Baron Kelvin. According to Quora, scholarship shows that it was probably said by the American physicist Albert Michelson, who was behind the famous Michelson–Morley experiment on the speed of light.

Still, even mindful of the dangers of boasting, I think that most of the really tough problems in electronic discovery have now been solved.

The time of great unknowns in e-discovery is past. The rules, principles, case law, procedures, software, methods, quality controls and vendor services are now well-developed. All that remains is more and more precise measurement.

The Wild West days are long gone. Certainly new problems will arise and experiments will continue, but they will not be at the same level or intensity as before. They will be minor problems. They will likely be very similar to issues we have already addressed, just with exponential magnification or the new twists and turns typical of the common law.

This is a tremendous accomplishment. The crises we all saw coming around the corner at the turn of the century have been averted. Remember how the entire legal profession was abuzz in emergency mode in 2005 because of the great dangers and burdens of e-discovery? Yes, thanks to the hard work and creativity of many people, the big problems have now been solved, especially the biggest problem of them all: finding the needles of relevance in cosmic-sized haystacks of irrelevant noise. See TARcourse.com. We now know what is required to do e-discovery correctly. See EDBP.com. We have the software and attorney methods needed to find the relevant evidence we need, no matter what volume of information we are dealing with.

We have invented, implemented and perfected procedures that can be enhanced and altered as needed to accommodate ever-growing complexity and exponential growth. We expect that. There is no data set too big to handle. In fact, the more data we have, the better our active machine learning systems, such as predictive coding, get. What an incredible difference from the world we faced in e-discovery just five years ago.
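
To make that claim about active machine learning a little more concrete, here is a minimal sketch of the kind of relevance-ranking feedback loop that predictive coding software runs: lawyers code a batch of documents, the model ranks the unreviewed documents, the top of the ranking goes back to the lawyers, and the cycle repeats, improving as coded data accumulates. This is an illustration only, written in Python with scikit-learn; it is not the code of our Team's workflow or of any e-discovery product, and the function and variable names are hypothetical.

```python
# A minimal active learning sketch (illustrative only, not a real TAR product):
# train on the documents reviewed so far, rank the rest by predicted relevance,
# and return the next batch for human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def predictive_coding_round(reviewed, unreviewed, batch_size=100):
    """reviewed: list of (document_text, is_relevant) pairs coded by lawyers,
    with at least one relevant and one irrelevant example.
    unreviewed: list of document_text not yet coded.
    Returns the next batch of documents for human review, highest-ranked first."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X_train = vectorizer.fit_transform([text for text, _ in reviewed])
    y_train = [int(label) for _, label in reviewed]
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    X_pool = vectorizer.transform(unreviewed)
    scores = model.predict_proba(X_pool)[:, 1]      # estimated probability of relevance
    ranked = sorted(zip(unreviewed, scores), key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in ranked[:batch_size]]  # top-ranked documents to review next
```

The point of the sketch is the feedback loop itself: each round of human review adds training data, which is why larger data volumes tend to help these systems rather than overwhelm them.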

This success was a team effort by thousands of people around the world, including a small core group who devoted their professional lives to solving these problems. My readers have been a part of this, so you can pat yourselves on the back too. The paradigm shift has been made. Maybe it was the Sedona vortexes?

Now that the tough parts of e-discovery are over, the rest of the ride is downhill. Some of my readers have already moved on. I will not retire, not just yet. I will keep up the work of e-discovery, even as I watch it transition to just teaching and politics. These activities have their own unique challenges too, even if they are not really all that impactful in the big scheme of things. Plus, I find politics disgusting. You will see tons of dirty pool in our field soon. I cannot talk about it now. We have some renegades with authority who never solved an e-discovery problem in their life. Posers with power.

But what is that new turbulence I hear in the distance? It is a bizarre new sound with vibrations never experienced before. It lies far outside the well-trodden paths and sounds both discordant and harmonious, siren-like, at the same time. It lies on the outer, cutting edges of law, science and technology. It sounds like a new, more profound Technology and Law challenge has emerged. It is the splashing of bigger fish to fry. I am hearing the eerie, smart sounds of AI. A music of both exuberance and fear, utopia or extinction.

The Biggest Challenge Today is the Ethics of Artificial Intelligence.

Following my own advice on the Hacker Way approach, I have given this considerable thought lately. I have found an area that has far more serious challenges and dangers than e-discovery: the challenges of AI Ethics.

I think that my past hacks, my past experiences with law and technology, have prepared me to step up to this last, really big hack: the creation of a code of ethics for AI. A code that will save humanity from a litany of possible ills arising out of AI’s inevitable leap to super-intelligence. I have come to see that my work in the new area of AI Ethics could have a far greater impact than my current work with active machine learning and the discovery of evidence in legal proceedings. AI Ethics is the biggest problem that I see right now where I have some hands-on skills to contribute. AI Ethics is concerned with artificial intelligence, both special and general, and the need for ethical guidelines, including best practices, principles, laws and regulations.

This new direction has led to my latest hack, AI-Ethics.com. Here you will find 3,866 words, many of them quotes; 19 graphics, including a photo of Richard Braman; and 9 videos with several hours’ worth of content. You will find quotes and videos on AI Ethics from the top minds in the world, including:

  • Stephen Hawking
  • Elon Musk
  • Bill Gates
  • Ray Kurzweil
  • Mark Zuckerberg
  • Sam Harris
  • Nick Bostrom
  • Oren Etzioni
  • 2017 Asilomar conference
  • Sam Altman
  • Susumu Hirano
  • Wendell Wallach

Please come visit AI-Ethics.com. The next big thing. Lawyers are needed, as the website explains. I look forward to any recommendations you may have.

I have done the basic research for AI Ethics, at least the beginning big-picture research of the subject. The AI-Ethics.com website shares the information that had the biggest impact on me personally. The website I hacked together also provides numerous links to resources where you can continue and customize your study.

I have been continuously improving the content since this started just over a week ago. This will continue as my study continues.

As you will see, a proposal has already emerged to hold an International Conference on AI Ethics in Florida as early as 2018. We would assemble some of the top experts and concerned citizens from all walks of life. I hope especially to get Elon Musk to attend and will time the event to correspond with one of SpaceX’s many launches here. My vision for the conference is to facilitate dialogue with high-tech variations appropriate for the AI environment.

The Singularity of superintelligent AIs may come soon. We may live long enough to see it. When it does, we want a positive future to emerge, not a dystopia. Taking action now on AI ethics can help a positive future come to pass.

Here is one of many great videos on the subject of AI in general. This technology is really interesting. Kevin Kelly, the co-founder of Wired, does a good job of laying out some of its characteristics. Kelly takes an old-school approach and does not speak about superintelligence in an exponential sense.

 

