I’ve Escaped the e-Discovery Niche After 15 Years of Super-Specialization
Ralph Losey, January 25, 2022
After fifteen years of writing weekly blogs on e-discovery, I took three years off to focus on implementing all of those words. Now I’m back, back to where I once belonged. Writing again, but not just about my Big Law niche, the fun little AI corner that I had painted myself into, but about ALL of my interests in Law and Technology. That has been my home since I started in legal practice in 1980 and, at the same time, started coding: mostly games, but also music software, MIDI creations and law office technology. I am proud to recall that I was one of the first computer lawyers in the country. (Also one of the first to get in trouble with the Bar for my Internet website, FloridaLawFirm.com, which they thought was a television broadcast!)
Ralph in the early 90s
Anyway, that was when I was not haggling with the Bar, or with fellow attorneys who would tease me, the first nerd, and call me a “secretary” (ooh, how terrible) for having a keyboard on my desk. I kid you not! I used PCs in my law firm when they first came out, back when I was the new associate, and I have had them on my desk to try to work smarter ever since. Not PCs necessarily, but computers of all kinds.
So I’m back to where I once belonged, in the great big world of technology law, making deals and giving advice. Oh yeah, I may still consult on e-discovery too, especially the AI parts that have fascinated me ever since my Da Silva Moore breakthrough days. (Thank you, Judge Andrew Peck.) For my full story, some of which I had to hide in my Big Law role as a super-specialist, see: https://www.losey.law/our-people/25-uncategorized/108-ralph-losey Not many people know I was a Qui Tam lawyer too, and for both sides.
Wait, there is still more. I’ve left the best for last. I went back home, left Big Law for good, and am now practicing law with my son, Adam Losey, my daughter-in-law, Cat Losey, and thirteen other crazy tech-lawyer types at Losey.law. Yes, that is the real domain name, and the name of the firm itself is Losey. So of course I had to go there. Check it out. Practicing law with my son is a dream come true for both of us. I’m loving it. It was lonely being the only tech wiz in a giant firm. Adam knows tech better than I do, is much faster in every respect (except maybe doc review with AI), and he and Cat are obviously a lot smarter.
To my long-time readers, thanks for your encouragement. I heard you and got back to my roots of general tech-law, got back to blogging, and got back home. To quote the Beatles’ funny “Get Back” song from their great LET IT BE album:
“Rosetta (who are you talking about?) . . . about Sweet Loretta Fart . . .”
Stay tuned, because a new blog is coming at you soon. Feel free to drop me an email at Ralph at Losey dot Law. Humans only please. Robots not welcome (unless you’re from the future and don’t have weapons).
Playing games is a great way to learn. That’s one reason I’ve devised a game about the interesting and fairly complex issues involved in trying to determine which e-discovery activities are proportional and appropriate in cases of various sizes. Specifically, what should you do to prepare for federal court Rule 26(f) conferences in small and medium sized cases, versus large, complicated cases? Small, Medium or Large? is kind of a Goldilocks game of proportionality.
I have had to give these proportionality questions a lot of thought as part of my practice as a lawyer supervising hundreds of e-discovery projects at a time, projects of all different sizes. I could simply give you my answer, but after five books, I’ve already been there and done that. So I thought I’d try something new and make this learning into a game where you consider and vote on what activities you think are appropriate for a Small, Medium or Large case.
The Hive Mind is different from Crowdsourcing, but related. To do it properly, a Hive Mind requires Swarming, which we cannot really do in this game using polls. Louis Rosenberg, Super-Intelligence and the virtues of a “Hive Mind” (Singularity, 2/10/16). One Silicon Valley startup, Unanimous A.I., is developing technologies that enable sophisticated online human swarming and thus better collective intelligence. I might try their free product for social research, UNO, if I further investigate the power of the Hive Mind. Maybe at NY LegalTech? (Email me if you’re interested.) In the meantime we are going to use simple polling for the e-Discovery Hive Mind Game: Small, Medium or Large?
Let’s play the game and see what collective intelligence emerges from many individuals giving their opinion about proportionality and e-discovery. I am playing this game now with all of the litigation associates and paralegals in my law firm, which is a pretty large swarm by itself. Your responses will join in the swarm, the collective intelligence.
In January I’ll share the results of all the polling and opine away as to how well the Hive Mind performed. You can win the game in one of two ways, either by matching the most popular Hive Mind responses or by matching my responses. (I assume there will be a difference because not enough experts will be playing, but who knows, maybe not. In theory, with enough experts swarming, the group, the Hive Mind, will always have the best answer.)
Background To Play the Game
In order to play the Small, Medium or Large? game, you first need to be familiar with the checklist from the U.S. District Court for the Southern District of Florida of all the things you should do to prepare for e-discovery in a large case. It is a pretty good list and I have written about it before. Good New 33-Point e-Discovery Checklist From Miami (e-Discovery Team, October 1, 2017) (a must read to fully prepare for this game). My article contains comments and explanations about all of the checklist items, which is the beginning of a kind of swarm interaction, that is, if you take time to ponder the signals. The Court’s checklist incorporates the new provisions in the rules on relevance and proportionality (Rule 26(b)(1)) and on specific objections (Rule 34(b)(2)).
It is not a perfect list, but it is the best one now out there with a court pedigree. It is not too long and complex, like the older, very detailed lists of some courts, and not too short, like the easy-peasy list that Bill Hamilton and I created for the Middle District of Florida many years ago. (Attorneys still complained about how burdensome it was!) In sum, the 33-Point Checklist out of Miami is a good list for legal practitioners all over the country to use to prepare for e-discovery, which means it is a good basis for our Small, Medium or Large? Hive Mind Game. Come play along.
Rules of the Game
The goal of our Hive Mind Game is to determine which of the thirty-three points on the checklist are applicable to big cases only, which are applicable to medium size cases, and which to small cases. You are to assume that all thirty-three points apply to large cases, but that they are not all applicable to medium and small size cases. The Hive Mind voting allows the swarm – that’s you – to identify which of the thirty-three apply even to small cases, and which apply only down to medium size cases. The sizes nest: if a checklist item applies to a small size case, it automatically also applies to medium and large cases, and if it applies to a medium size case, it also applies to large cases.
In other words, the game is to sort the thirty-three into three piles: Small, Medium or Large? Simple, eh? Well, maybe not. This is a matter of opinion and things are pretty vague. For instance, I’m not going to define the difference between a large, medium and small size case. That is part of the Hive Mind.
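Since we are using simple polls rather than true swarming, the aggregation behind the curtain is nothing fancier than counting votes and taking the most popular answer for each checklist item. Here is a minimal sketch of that tally in Python, with paraphrased item labels and made-up vote counts purely for illustration; it is not the actual polling software, just a picture of how the nesting of case sizes works.

```python
from collections import Counter

# Hypothetical raw votes for a few checklist items; in the real game these
# would come from the polling software, one vote per player per item.
votes = {
    "Item 1: Date ranges for ESI to be preserved": ["Small", "Small", "Medium", "Small"],
    "Item 19: Search methodology and filtering":   ["Medium", "Large", "Medium", "Medium"],
    "Item 31: Handling privileged productions":    ["Small", "Medium", "Small", "Small"],
}

# The sizes nest: an item voted "Small" also applies to Medium and Large cases,
# and an item voted "Medium" also applies to Large cases.
NESTING = {"Small": ["Small", "Medium", "Large"],
           "Medium": ["Medium", "Large"],
           "Large": ["Large"]}

for item, ballots in votes.items():
    winner, count = Counter(ballots).most_common(1)[0]  # modal Hive Mind answer
    applies_to = ", ".join(NESTING[winner])
    print(f"{item}\n  Hive Mind answer: {winner} "
          f"({count}/{len(ballots)} votes) -> applies to {applies_to} cases\n")
```

The NESTING table captures the rule stated above: a Small vote is the most demanding answer, because that checklist item then applies to cases of every size.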
The game is important because proportionality in the law is important. You do not prepare for a big case the same way you prepare for a small case. You just don’t. You could, but it would be a waste of your client’s money to do so. So the real trick in e-discovery, as in all other aspects of litigation, is to determine what you should do to prepare in a case of any size, including even the small cases. For instance, outside of e-discovery, most people would agree that you should at a minimum take the parties’ depositions to prepare for trial in a case of any size, including a small one.
What are the equivalent items in the 33-point checklist? Which of them should be applied to all cases, even the small ones? Which of them are too complicated and expensive to apply in a small case, but not a medium sized case? Which too complicated and expensive to apply in a small or medium sized case, but not a large one? That is where the real skill and knowledge come in. That is the essence of the game.
Assume All 33 Items Apply to Big Cases
This Small, Medium or Large? Hive Game requires you to assume that all thirty-three items on the Court’s checklist apply to big cases, but not to all cases: that certain checklist items also apply to medium size cases, and that others, a smaller list, apply even to small cases. You may question the reality of this assumption. After all, the Court does not say that. It does not say, here’s a checklist we made to guide your e-discovery, but you can ignore many of the items on this list if you have a small case, or even a medium size case. Still, that’s what they mean, but they do not go on to say what’s what. They know that the Bar will figure it out themselves in due time, meaning the next several years. And right they are. But why wait? Let’s figure it out ourselves now with this Small, Medium or Large? Hive Game, and condense years to weeks.
The Hive Game will allow us to fill in the blanks of what the Court did not say. But first, let’s focus again on what the Court did say. It said that the checklist may be used by members of the Bar to guide the Rule 26(f) e-discovery conferences and Case Management Reports. It did not say shall be used. It is a suggestion, not a requirement. Still, as every long-term member of that court’s Bar knows, what they mean is that you damn well should follow the checklist in a big case! Woe unto the lawyers who come before the judges in a big case with an e-discovery issue where they never even bothered to go through the checklist. If your issue is on that list, and chances are it will be, then, dear slacker, prepare for a Miami-style bench-slap. Ouch! It is going to hurt. Not only you and your reputation, but also your client.
I feel confident in making the game assumption that all thirty-three checklist items apply to big cases. I am also confident they do not all apply to medium and small size cases and that is the real reason for the court’s use of may instead of shall.
With that background, we are almost ready to start playing the game and opining away as to which of the 33 items belong in the Small and Medium piles. But there is still one more thing I have found very helpful: when you try to really dig into the checklist and play the game, you need to have the numbers 1-33 added to the list. The one mistake the Court made on this list was in using headings and bullet points instead of numbers. I fix that in the list that follows, so that you can, if nothing else, more easily play the game.
Preservation
1. The ranges of creation or receipt dates for any ESI to be preserved.
2. The description of ESI from sources that are not reasonably accessible because of undue burden or cost and that will not be reviewed for responsiveness or produced, but that will be preserved in accordance with Federal Rule of Civil Procedure 26(b)(2)(B).
3. The description of ESI from sources that: (a) the party believes could contain relevant information; but (b) has determined, under the proportionality factors, is not discoverable and should not be preserved.
4. Whether to continue any interdiction of any document-destruction program, such as ongoing erasures of e-mails, voicemails, and other electronically recorded material.
5. The number and names or general job titles or descriptions of custodians for whom ESI will be preserved (e.g., “HR head,” “scientist,” “marketing manager”).
6. The list of systems, if any, that contain ESI not associated with individual custodians and that will be preserved, such as enterprise databases.
7. Any disputes related to scope or manner of preservation.
Liaison
8. The identity of each party’s e-discovery liaison, who will be knowledgeable about and responsible for each party’s ESI.
Informal Discovery About Location and Types of Systems
9. Identification of systems from which discovery will be prioritized (e.g., e-mail, finance, HR systems).
10. Descriptions and location of systems in which potentially discoverable information is stored.
11. How potentially discoverable information is stored.
12. How discoverable information can be collected from systems and media in which it is stored.
Proportionality and Costs
13. The amount and nature of the claims being made by either party.
14. The nature and scope of burdens associated with the proposed preservation and discovery of ESI.
15. The likely benefit of the proposed discovery.
16. Costs that the parties will share to reduce overall discovery expenses, such as the use of a common electronic-discovery vendor or a shared document repository, or other cost-saving measures.
17. Limits on the scope of preservation or other cost-saving measures.
18. Whether there is relevant ESI that will not be preserved in accordance with Federal Rule of Civil Procedure 26(b)(1), requiring discovery to be proportionate to the needs of the case.
Search
19. The search method(s), including specific words or phrases or other methodology, that will be used to identify discoverable ESI and filter out ESI that is not subject to discovery.
20. The quality-control method(s) the producing party will use to evaluate whether a production is missing relevant ESI or contains substantial amounts of irrelevant ESI.
Phasing
21. Whether it is appropriate to conduct discovery of ESI in phases.
22. Sources of ESI most likely to contain discoverable information and that will be included in the first phases of Federal Rule of Civil Procedure 34 document discovery.
23. Sources of ESI less likely to contain discoverable information from which discovery will be postponed or not reviewed.
24. Custodians (by name or role) most likely to have discoverable information and whose ESI will be included in the first phases of document discovery.
25. Custodians (by name or role) less likely to have discoverable information from whom discovery of ESI will be postponed or avoided.
26. The time period during which discoverable information was most likely to have been created or received.
Production
27. The formats in which structured ESI (database, collaboration sites, etc.) will be produced.
28. The formats in which unstructured ESI (e-mail, presentations, word processing, etc.) will be produced.
29. The extent, if any, to which metadata will be produced and the fields of metadata to be produced.
30. The production format(s) that ensure(s) that any inherent searchability of ESI is not degraded when produced.
Privilege
31. How any production of privileged or work-product protected information will be handled.
32. Whether the parties can agree on alternative ways to identify documents withheld on the grounds of privilege or work product to reduce the burdens of such identification.
33. Whether the parties will enter into a Federal Rule of Evidence 502(d) stipulation and order that addresses inadvertent or agreed production.
One Example Before the Games Begin
We are almost ready to play the e-Discovery Small, Medium or Large? Hive Mind Game. We will do so with thirty-two polls that are presented to the player in the same order as the Court’s checklist. To make sure the rules are clear (this is, after all, a game for lawyers, not kids), we start with an example, the first of the thirty-three items on the checklist. The Court’s first item suggests that you determine the ranges of creation or receipt dates for any ESI to be preserved.
The “right answer” to this first item is that this should be done in every case, even the small ones. You should always determine the date range of data to be preserved. In most cases that is very easy to do, and, as every lawyer should know, when in doubt, when it comes to preservation, always err on the side of inclusion. That means you should check the Small Case answer as shown in the “dummy poll” graphic below.
[Dummy poll graphic: Checklist item 1, with the Small Case answer selected]
We have set these polls up so that you cannot see the results, but you can leave private comments. We may do this again later and experiment with what happens when you can see the results. We will share the results (and some comments) when the game ends on January 1, 2019.
Now for the live polls and the game proper. Note that several of the checklist items, including numbers two and three, which are the first two polls shown below, are so long that we had to paraphrase and shorten them to fit in the space allocated by the polling software. To see the original of all thirty-three items on the checklist, go to my prior blog post explaining the list (highly recommended) or the court’s page.
Let the Games Begin!
We are now ready to begin playing the e-Discovery Hive Mind Game. So get ready to plug in. Select an answer to each of the thirty-two polls that follow. After you vote, you also have a chance to leave a private comment on each poll, but that is optional and will not impact your score.
[Polls for Checklist items 2 through 33 appeared here in the original interactive post.]
Congratulations! You have finished the Game and made your contribution to the e-Discovery Hive Mind. Look for results sometime in early 2018. You can then determine how your answers compared with the collective Hive Mind.
I will also let you know how the Hive Mind answers compared with my own. So you will have two chances to win. Anyone who matches all of my answers wins a free lunch with me in Orlando. Other prizes have yet to be determined. Vendors, care to contribute some goodies? Perhaps Elon will donate a free trip to Mars, where I for one hope we don’t run into any Borg cubes, no matter how good their Hive Mind is.
In the meantime, please encourage your e-discovery friends and colleagues to join in the game. Teachers and partners are invited to require their students, associates and paralegals to play too. Resistance is futile! Digging deep into this checklist is a great way to expand your knowledge and expertise in electronic discovery law and practice.
I have spoken several times before about the Hacker Way philosophy. I have always focused on my work as a lawyer specializing in e-discovery. I have also included this philosophy in my teachings in this area of the law, including the use of AI in document review. See the TAR Course, HackerWay.org and HackerLaw.org.
The video talk in this blog takes it outside of the legal community so it can have maximum impact. I think it is important for everyone to understand the credo behind Facebook and most other 21st Century software tech companies. No one else seems to be talking about it, or sharing the secret sauce behind their success. That is contrary to the fundamental Hacker principle of Openness, so, as an old Hacker myself, I am stepping in to fill the gap. That’s just what I do. (Stepping-In is discussed in Davenport and Kirby, Only Humans Need Apply, and by Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and A Changing World: Ralph Losey on “Stepping In” for e-Discovery. Also see: Losey, Lawyers’ Job Security in a Near Future World of AI, Part Two.)
Facebook’s corporate headquarters, photo with symbols added.
In the eleven-minute video below I take this sharing and openness to the next step. Here I address the five principles and related ideas of the Hacker Way as applied to life in general, not just my legal specialties. I hope you find this provides some value to our fast-evolving computer culture. Please leave some comments, either here or at my new Facebook site: HackerWay.org.
If you have not already read Mark Zuckerberg’s original treatise on the Hacker Way, contained in his initial public offering Letter to Investors, I suggest you do so now. Also see my related ideas on history and social progress at Info→Knowledge→Wisdom.
I cannot believe that the best document review vendors in the world, the ones that include active machine learning in their software, still include secret Control Sets in their built-in methodology. It was a mistake made by most vendors when predictive coding was first released years ago. It is well past time for vendors to own up to the mistake. Please modify your software to eliminate it before you do any more damage, both to yourself and, more importantly, the whole profession. Lose your fear of academic institutions and do what’s right. I am not naming names yet, but I may have to eventually. My patience is wearing thin. Maybe you can tell that from my video rant below, where I get so worked up that I use the “H” word. This is another new video for the e-Discovery Team’s TAR Course. It is included in the new First Class that we just added to the course.
Every day that vendors keep phony control set procedures is another day that lawyers are misled by recall calculations based on them; another day lawyers are frustrated by wasting their time on overly large random samples; another day everyone has a false sense of protection from the very few unethical lawyers out there, and the very many not fully competent lawyers; and another day clients pay too much for document review. Stop shooting yourselves in the foot, software vendors. And lawyers, stop using control sets in your methods. Do not just do what vendors tell you to do. Demand that your vendor change its software, or at least show you how to use it without secret control sets.
The method of predictive coding taught here has been purged of vendor hype and bad science and proven effective many times. We know that the secret control set almost never works, and it is high time that it be expressly abandoned. Here are the main reasons why: (1) relevance is never static; it changes over the course of the review; (2) the random selection size was typically too small for statistically meaningful calculations; (3) the random selection was typically too small in low prevalence collections (the vast majority in legal search) for complete training selections; and (4) it supposedly required a senior SME’s personal attention for days of document review work, a mission impossible for most e-discovery teams.
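To see why reasons (2) and (3) matter, here is a back-of-the-envelope sketch, with hypothetical numbers, of the binomial math behind a control-set recall estimate. The collection size, prevalence, control set size and Wilson interval below are my own illustrative assumptions, not any vendor’s actual procedure.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical low-prevalence matter: ~1,000,000 docs, about 1% relevant.
control_set_size = 1_500       # assumed size of a "secret" control set
relevant_in_control = 15       # ~1% prevalence leaves only about 15 relevant docs

# Suppose the final review finds 11 of those 15 control-set relevant docs.
found = 11
point_recall = found / relevant_in_control
low, high = wilson_interval(found, relevant_in_control)
print(f"Point estimate of recall: {point_recall:.0%}")
print(f"95% interval: {low:.0%} to {high:.0%}")   # roughly 48% to 89%

# The interval spans some forty percentage points because the estimate rests
# on only 15 relevant documents, which is why a control set this size cannot
# support a statistically meaningful recall calculation, even before
# relevance drift (reason 1) makes the initial control-set coding stale.
```

The point is not the particular numbers, which are assumed, but the shape of the problem: in low prevalence collections a fixed random control set contains so few relevant documents that any recall figure built on it is little more than a guess.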
The e-Discovery Team calls on all vendors of advanced AI software for document review to stop using secret control sets and to phase them out of their software.
New First Class Added to the TAR Course
We also added a new class on the historical background of the development of predictive coding. We felt that it was important to clarify the many conflicting claims and procedures still out there in the e-discovery marketplace. We made this historical discussion the First Class to the TAR Course. This class also includes a discussion of predictive coding patents. Yes. They are lots of fun.
This new First Class expands the number of classes from sixteen to seventeen (a prime number). We feel pretty good about this expansion. Sure, with more time we could have made the class writings a little shorter, but still we think it is an improvement over my prior writings on the subject of control sets. Hopefully repetition will help in your learning of this initial, difficult material in the TAR Course.
Ralph Losey is an Arbitrator, Special Master, Mediator of Computer Law Disputes and Practicing Attorney, a partner in LOSEY PLLC. Losey is a high tech law firm with three Loseys and a bunch of other cool lawyers. We handle projects, deals, and IP of all kinds all over the world, plus litigation all over the U.S. For more details of Ralph's background, Click Here.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children, Eva Losey Grossman, and Adam Losey, a lawyer with incredible cyber expertise (married to another cyber expert lawyer, Catherine Losey), and best of all, husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
1. Electronically stored information is generally subject to the same preservation and discovery requirements as other relevant information.
2. When balancing the cost, burden, and need for electronically stored information, courts and parties should apply the proportionality standard embodied in Fed. R. Civ. P. 26(b)(2)(C) and its state equivalents, which require consideration of importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.
3. As soon as practicable, parties should confer and seek to reach agreement regarding the preservation and production of electronically stored information.
4. Discovery requests for electronically stored information should be as specific as possible; responses and objections to discovery should disclose the scope and limits of the production.
5. The obligation to preserve electronically stored information requires reasonable and good faith efforts to retain information that is expected to be relevant to claims or defenses in reasonably anticipated or pending litigation. However, it is unreasonable to expect parties to take every conceivable step or disproportionate steps to preserve each instance of relevant electronically stored information.
6. Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.
7. The requesting party has the burden on a motion to compel to show that the responding party’s steps to preserve and produce relevant electronically stored information were inadequate.
8. The primary source of electronically stored information to be preserved and produced should be those readily accessible in the ordinary course. Only when electronically stored information is not available through such primary sources should parties move down a continuum of less accessible sources until the information requested to be preserved or produced is no longer proportional.
9. Absent a showing of special need and relevance, a responding party should not be required to preserve, review, or produce deleted, shadowed, fragmented, or residual electronically stored information.
10. Parties should take reasonable steps to safeguard electronically stored information, the disclosure or dissemination of which is subject to privileges, work product protections, privacy obligations, or other legally enforceable restrictions.
11. A responding party may satisfy its good faith obligation to preserve and produce relevant electronically stored information by using technology and processes, such as data sampling, searching, or the use of selection criteria.
12. The production of electronically stored information should be made in the form or forms in which it is ordinarily maintained or in a form that is reasonably usable given the nature of the electronically stored information and the proportional needs of the case.
13. The costs of preserving and producing relevant and proportionate electronically stored information ordinarily should be borne by the responding party.
14. The breach of a duty to preserve electronically stored information may be addressed by remedial measures, sanctions, or both: remedial measures are appropriate to cure prejudice; sanctions are appropriate only if a party acted with intent to deprive another party of the use of relevant electronically stored information.