Understanding the Legal Assessment of AI-Generated Content
Introduction
Copyright law is based on the idea that a work must be original to be protected. So, if AI-generated content is considered original, does that mean it is eligible for copyright protection?
As AI technology advances, it is becoming easier and easier for machines to create content that is indistinguishable from that created by humans. This has raised questions about the authenticity of AI-generated content, and whether it can truly be considered the original work of a human author.
There are a number of factors to consider when assessing the authenticity of AI-generated content. Below are some key considerations.
A Look at the Authenticity of AI-generated Content
When it comes to copyrighted material, determining who the author is can be a complex task.
In the past, copyright law has been used to protect the original creations of human authors. This means that, in order to qualify for copyright protection, the work must be created by a human being.
However, with the rise of AI-generated content, it is becoming increasingly difficult to determine who the author is. This is because machines are able to create content that is indistinguishable from that created by humans. As a result, there is a growing need for clarification about what constitutes original authorship under copyright law.
Considering AI-Generated Content and Copyright Law
It is important to consider how existing copyright law will be applied to AI-generated content.
When a human creates content, that content is automatically copyrighted. However, when a machine creates content, it is not automatically protected by copyright law. To receive copyright protection, the content must be deemed “original” by a court of law. This is a hard distinction to draw, as it is often unclear whether content was created by a human or a machine.
In many, if not most, cases, it is likely that AI-generated content will not be protected by copyright law. This is because machine-created content is not generally considered to be original work. However, if a human adds their own original thoughts to the AI-generated content, then copyright law may apply.
Naruto v. Slater: Case About Monkey Business Spoils It for AI
In the case of Naruto v. Slater, a crested macaque named Naruto took selfies with a camera that photographer David Slater had set up in the monkeys’ habitat. Slater claimed copyright in the photos and demanded that Wikimedia stop distributing one of them without permission; PETA then sued Slater on Naruto’s behalf, arguing that the monkey owned the copyright. Ultimately the appeals court held that animals cannot sue for infringement because only humans can hold copyrights under the Act. Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018).
The ruling against handsome Naruto likely means that any content created purely by AI is automatically in the public domain, and anyone can use it without permission.
The U.S. Copyright Office, for one, reads the case that way. In 2019, the USCO denied an application filed by Dr. Stephen Thaler to register a work of art called “A Recent Entrance to Paradise” that “was autonomously created by a computer algorithm running on a machine.” The USCO did so because the creation lacked “the human authorship necessary to support a copyright claim.”
An AI-Written Article Generated From a Human-Created Outline May Be Protected
So, if you create an outline and a machine creates the content based on that outline, is the resulting work truly an original work of authorship? The answer may depend on how much humans are involved in the process. If you simply provide an idea or topic for the machine to write about, it is unlikely that you would be considered the author of the resulting work. But maybe creating an outline and letting the robot do the rest is enough for legal protection? I would love to help make the law on that, as I did with predictive coding.
Conclusion
There is no doubt that AI technology is rapidly evolving and becoming more sophisticated. This raises questions about the authenticity of content generated by machines and its impact on copyright law.
At this point, it is still difficult to create AI-generated content that is indistinguishable from that created by humans. However, as AI technology continues to evolve, it is likely that this will become increasingly easy to do.
It is important to assess the impact of AI-generated content on copyright law, and to determine how best to protect the interests of both content creators and consumers. Here lawyers can be of help, preferably human ones, even if they do look like a Simpsons character.
Inadvertently Disclosed Warrant Application Against Apple in a Criminal Investigation Against Retired Marine General Reveals Latest DOJ Search Procedures, the Dangers of PACER and Too Much Court Record Transparency, and Much More – Part Two
This article is Part Two of the blog Examining a Leaked Criminal Warrant for Apple iCloud Data in a High Profile Case; see Part One for the background.
In Attachment B to the Application, entitled, Items To Be Seized, the government describes in Section I the Search Procedures they want Apple to follow. That’s where it gets really interesting for anyone in ediscovery. The fun continues in Section II, Information to be Disclosed by Provider, Section III, Information to be Seized by the Government, and Section IV Provider Procedures.
Section I starts off by directing Apple to make a forensic copy, i.e., bit by bit. The language for this is informative. Note how this intrusive request is characterized as a kind of courtesy to all of us other Apple iCloud users.
2. To minimize any disruption of service to third parties, the PROVIDER’s employees and/or law enforcement personnel trained in the operation of computers will create an exact duplicate of the information described in Section II below.
Skipping to paragraph four of the Search Procedures section, the government talks about the search tools it may use. One would hope it is not an exhaustive list; there are so many other good tools out there. Just browse around EDRM.net and you will see many of the best.
The search shall extract and seize only the specific items to be seized under this warrant (see Section III below). The search team may use forensic examination and searching tools, such as “Encase” and “FTK” (Forensic Tool Kit), which tools may use hashing and other sophisticated techniques. The review of the electronic data may be conducted by any government personnel assisting in the investigation, who may include, in addition to law enforcement officers and agents, attorneys for the government, attorney support staff, and technical experts.
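The hashing reference deserves a quick illustration. Verifying that a forensic duplicate is exact comes down to comparing cryptographic digests of the original and the copy. Here is a minimal sketch in Python using the standard hashlib library; the file names are hypothetical, and the commercial tools named above of course do far more, but the core verification step looks like this:

import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names for the provider's production and the working copy.
original = sha256_of("icloud_production.dmg")
duplicate = sha256_of("working_copy.dmg")

# Matching digests mean the copy is bit-for-bit identical to the original,
# which is the point of the "exact duplicate" language in paragraph two.
assert original == duplicate, "working copy does not match the original"
print("Verified exact duplicate, SHA-256:", original)

Matching digests are also what supports the “authenticity and chain of custody” rationale that appears later in the Search Procedures.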
In the next paragraph, five, you see a “this crime only” type of relevance limitation put on the search. That should keep it from becoming a general fishing expedition of the “oh, gee, look what I found, yet another new crime” variety.
The search team will not seize contraband or evidence relating to other crimes outside the scope of the items to be seized without first obtaining a further warrant to search for and seize such contraband or evidence.
In paragraph six, a time limit for the search is self-imposed by the government, but of course a back door is provided to ask the court for more time, which, I hear, is the rule, not the exception. In other words, this time limit is about as flexible as one of Dalí’s clocks.
The search team will complete its search of the content records as soon as is practicable but not to exceed 120 days from the date of receipt from the PROVIDER of the response to this warrant. The government will not search the content records beyond this 120-day period without first obtaining an extension of time order from the Court.
In paragraph seven, it is explained that after the search team completes its review of the data, the original production by Apple, the provider here, will then be “sealed and preserved” by the government, not returned and destroyed. The reason given for this procedure is what you would expect: “authenticity and chain of custody purposes.”
In paragraph nine of the Search Procedures, the Application asserts that “Pursuant to 18 U.S.C. 2703(g) the presence of an agent is not required for service or execution of this warrant.” I am sure the search team of ediscovery experts who will actually do the work here are relieved to know that they won’t have to have an FBI agent looking over their shoulders the whole time. But it does raise the question as to who watches the watchers, or in this case, the seekers. I assume they will do a better job with cybersecurity than the NSA did with Snowden, or than the Clerk here did with the sealed Application. Thumb drive cuff links anyone? Only $39.95 on Amazon.
Information to be Disclosed by Provider
Attachment B to the Application is entitled, Items To Be Seized. Section II of Attachment B describes the Information to be Disclosed by Provider, in this case Apple. This is paragraph ten of the Application. First of all, the Application makes clear that Apple must disclose the information no matter where in the world Apple may have the ESI stored. So much for international privacy laws. This is a criminal warrant by the DOJ, so you do what the government says, the U.S. government that is, or else. This is a real problem for countries with strong ESI privacy rights, such as those in the EU. For good background, see The Ultimate Guide to GDPR and Ediscovery by Zapproved (EDRM 5/19/22). Compare the order in a civil case forbidding the forensic examination of computers located in China as “out of proportion with the needs of this case,” citing Rule 26(b)(1), Federal Rules of Civil Procedure.
Now comes the typical “including without limitation” laundry list in paragraph 10.a.i-iv. It is quite an extensive list, including “buddy lists.” (I can’t believe anyone still uses that feature. I don’t even see it on my Apple devices.) I quote this part 10.a in full, except for subparagraph iii, which is provider specific, in case you want to use something obnoxiously long and complete like this yourself someday when subpoenaing a private party.
i. All e-mails, communications, or messages of any kind associated with the SUBJECT ACCOUNT, including stored or preserved copies of messages sent to and from the account, deleted messages, and messages maintained in trash or any other folders or tags or labels, as well as all header information associated with each e-mail or message, and any related documents or attachments.
ii. All records or other information stored by subscriber of the SUBJECT ACCOUNT including address books, contact and buddy lists, calendar data, pictures, videos, notes, texts, links, user profiles, account settings, access logs, and files. . . .
iv. All stored files and other records stored on iCloud for the SUBJECT ACCOUNT, including all device backups, all Apple and third-party app data (such as third-party provider emails and Whatsapp application chats backed up via iCloud), all files and other records related to iCloud Mail, iCloud Photo Sharing, My Photo Stream, iCloud Photo Library, iCloud Drive, iWork (including Pages, Numbers, and Keynote), iCloud Tabs, and iCloud Keychain, and all address books, contact and buddy lists, notes, reminders, calendar entries, images, videos, voicemails, device settings, and bookmarks;
Just in case that list is not exhaustive enough for you, the government goes on to make it even longer by adding a part b, specifically 10.b.i-iii, found at pages 7-9 of 77 of the Application. Most of this is information that a provider might have about the subscriber, the target of the investigation. I quote below the subsection iii on encryption and keybags, which is pretty interesting and could have other uses for practitioners.
b. iii. All files, keys, or other information necessary to decrypt any data produced in an encrypted form, when available to Apple (including, but not limited to, the keybag.txt and fileinfolist.txt files);
Apple’s platform security documentation explains what a keybag file should contain, basically the encryption keys, and how it is used. It gets very complicated. The fileinfolist.txt file is not explained by Apple, but appears to be a device file directory.
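Subsection iii is the whole ballgame for encrypted data: ciphertext produced without its keys is just noise. Here is a toy illustration in Python, assuming the third-party cryptography package is installed; it uses that library's simple Fernet recipe, not Apple's actual keybag scheme, which is far more elaborate:

from cryptography.fernet import Fernet

# A toy stand-in for Apple's keybag hierarchy: a single symmetric key,
# which is the only thing that makes the ciphertext readable.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"backed-up iMessage thread")

# Without the key, a production of ciphertext is useless; with it,
# decryption is trivial.
print(f.decrypt(ciphertext))  # b'backed-up iMessage thread'

That, in miniature, is why the warrant demands the keybag and related files along with the data itself.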
For background on the related issues of encryption in criminal wiretaps, and the problems this has been causing criminal investigations lately, see the excellent article by Zuckerman Spaeder LLP in JD Supra, 6/10/22, entitled Warranted wiretapping? What to look for in this year’s Wiretap Report. Zuckerman cites the government Wiretap Report finding that in 2020 encryption was encountered in 398 wiretaps, and the plain text of the messages could not be decrypted in 383 of those. Yikes, that’s a 96% failure rate! Moreover, the expense per wiretap reached an all-time high of $119,418 in 2020, up 183% from $42,216 in 2015. United States Courts, 2020 Wiretap Report. Also see the interesting article on a criminal ESI discovery case with bizarre facts to match the title, Despite Estimate of 37 Years to Crack iPhone, Government Doesn’t Have to Return it – Yet: eDiscovery Case Law (EDRM, CloudNine, 3/27/20). I wonder if people will still even use phones in 37 years? I kind of doubt it.
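Those percentages are easy to verify. A quick back-of-the-envelope check in Python, using only the figures reported above:

# Figures as reported in the 2020 Wiretap Report, cited above.
encrypted_wiretaps = 398  # wiretaps where encryption was encountered
undecipherable = 383      # of those, plain text could not be recovered
cost_2020 = 119_418       # average cost per wiretap, 2020
cost_2015 = 42_216        # average cost per wiretap, 2015

print(f"Decryption failure rate: {undecipherable / encrypted_wiretaps:.1%}")  # 96.2%
print(f"Cost increase 2015-2020: {(cost_2020 - cost_2015) / cost_2015:.0%}")  # 183%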
_________________
To be continued . . . Part Three of this blog will examine Section III of the Application, namely Information To Be Seized by the Government, and Section IV, Provider Procedures. The last part of the blog will focus on the dangers of too much information, the dangers of PACER, suggestions for its reform, the complex transparency of online court records, privacy rights, and speculation on how the leak to the AP in this case could have happened. In the meantime, please leave some comments below.
I want to start off Part 2 with a quote from Andrew Woods in the Introduction to his article, Robophobia, 93 U. Colo. L. Rev. 51 (Winter 2022). Footnotes omitted.
Deciding where to deploy machine decision-makers is one of the most important policy questions of our time. The crucial question is not whether an algorithm has any flaws, but whether it outperforms current methods used to accomplish a task. Yet this view runs counter to the prevailing reactions to the introduction of algorithms in public life and in legal scholarship. Rather than engage in a rational calculation of who performs a task better, we place unreasonably high demands on robots. This is robophobia – a bias against robots, algorithms, and other nonhuman deciders.
Robophobia is pervasive. In healthcare, patients prefer human diagnoses to computerized diagnoses, even when they are told that the computer is more effective. In litigation, lawyers are reluctant to rely on – and juries seem suspicious of – computer-generated discovery results, even when they have been proven to be more accurate than human discovery results. . . .
In many different domains, algorithms are simply better at performing a given task than people. Algorithms outperform humans at discrete tasks in clinical health, psychology, hiring and admissions, and much more. Yet in setting after setting, we regularly prefer worse-performing humans to a robot alternative, often at an extreme cost.
Woods, id. at 55-56.
Bias Against AI in Electronic Discovery
Electronic discovery is a good example of the regular preference of worse-performing humans over a robot alternative, often at an extreme cost. There can be no question now that any decent computer-assisted method will significantly outperform human review. We have made great progress in the law through the outstanding leadership of many lawyers and scientists in the field of ediscovery, but there is still a long way to go to convince non-specialists. Professor Woods understands this well and cites many of the leading legal experts on this topic at footnotes 137 to 148. Even though I am not included in his footnotes of authorities (what do you expect, the article was written by a mere human, not an AI), I reproduce them below in the order cited as a grateful shout-out to my esteemed colleagues. A toy code sketch of the kind of predictive coding at issue follows the list.
Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1 (2011).
Sam Skolnik, Lawyers Aren’t Taking Full Advantage of AI Tools, Survey Shows, Bloomberg L. (May 14, 2019) (reporting results of a survey of 487 lawyers finding that lawyers have not well utilized useful new tools).
Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 191 (S.D.N.Y. 2012) (Judge Andrew Peck) (“Computer-assisted review appears to be better than the available alternatives, and thus should be used in appropriate cases.”).
Bob Ambrogi, Latest ABA Technology Survey Provides Insights on E-Discovery Trends, Catalyst: E-Discovery Search Blog (Nov. 10, 2016) (noting that “firms are failing to use advanced e-discovery technologies or even any e-discovery technology”).
Doug Austin, Announcing the State of the Industry Report 2021, eDiscovery Today (Jan. 5, 2021).
David C. Blair & M. E. Maron, An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System, 28 Commc’ns ACM 289 (1985).
Thomas E. Stevens & Wayne C. Matus, Gaining a Comparative Advantage in the Process, Nat’l L.J. (Aug. 25, 2008) (describing a “general reluctance by counsel to rely on anything but what they perceive to be the most defensible positions in electronic discovery, even if those solutions do not hold up [under] any sort of honest analysis of cost or quality”).
Rio Tinto PLC v. Vale S.A., 306 F.R.D. 125, 127 (S.D.N.Y. 2015) (Judge Andrew Peck).
See The Sedona Conference, The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 15 Sedona Conf. J. 217, 235-36 (2014) (“Some litigators continue to primarily rely upon manual review of information as part of their review process. Principal rationales [include] . . . the perception that there is a lack of scientific validity of search technologies necessary to defend against a court challenge . . . .”).
Doug Austin, Learning to Trust TAR as Much as Keyword Search: eDiscovery Best Practices, eDiscovery Today (June 28, 2021).
Robert Ambrogi, Fear Not, Lawyers, AI Is Not Your Enemy, Above the Law (Oct. 30, 2017).
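And now the toy sketch promised above, for readers who have never looked under the hood of computer-assisted review: a deliberately minimal predictive coding loop, where a classifier trained on a handful of human relevance calls ranks the rest of the collection for review. This is an illustration only, not any vendor's actual method; the six documents and their labels are invented, and it assumes the open-source scikit-learn library is installed.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy collection; a real matter would have hundreds of thousands of documents.
docs = [
    "merger agreement draft with indemnification clause",
    "lunch order for the team meeting",
    "due diligence memo on merger liabilities",
    "fantasy football league standings",
    "board minutes discussing the merger vote",
    "office holiday party logistics",
]
# A human reviewer codes a small seed set: 1 = relevant, 0 = not relevant.
seed_idx = [0, 1, 2, 3]
seed_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)

model = LogisticRegression()
model.fit(X[seed_idx], seed_labels)

# Rank the unreviewed documents by predicted probability of relevance;
# the review team works down the list from the top.
unreviewed = [i for i in range(len(docs)) if i not in seed_idx]
scores = model.predict_proba(X[unreviewed])[:, 1]
for i, score in sorted(zip(unreviewed, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {docs[i]}")

In a real TAR workflow the loop repeats: the reviewers’ calls on the top-ranked documents are fed back into the training set, the model is re-fit, and the ranking improves, which is why the studies cited above find these methods both more effective and cheaper than exhaustive manual review.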
Robophobia Article Is A First
Robophobia is the first piece of legal scholarship to address our misjudgment of algorithms head-on. Professor Woods makes this assertion up front, and I believe it. The Article catalogs the different ways that we now misjudge poor algorithms. The evidence of our robophobia is overwhelming but, before Professor Woods’s work, it sat in silos and was not seriously considered. He is the first to bring it all together and consider the legal implications.
His article goes on to suggest several reforms, also a first. But before I get to that, a more detailed overview is in order. The Article is in six parts. Part I provides several examples of robophobia. Although a long list, he says it is far from exhaustive. Part II distinguishes different types of robophobia. Part III considers potential explanations for robophobia. Part IV makes a strong, balanced case for being wary of machine decision-makers, including our inclination, in some situations, to over-rely on machines. Part V outlines the components of his case against robophobia. The concluding Part VI offers “tentative policy prescriptions for encouraging rational thinking – and policy making – when it comes to nonhuman deciders.”
Part II of the Article – Types of Robophobia
Professor Woods identifies five different types of robophobia.
Elevated Performance Standards: we expect algorithms to greatly outperform the human alternatives and often demand perfection.
Elevated Process Standards: we demand algorithms explain their decision-making processes clearly and fully; the reasoning must be plain and understandable to human reviewers.
Harsher Judgments: algorithmic mistakes are routinely judged more severely than human errors. A corollary of elevated performance standards.
Distrust: our confidence in automated decisions is weak and fragile. Would you rather get into an empty AI Uber, or one driven by a scruffy-looking human?
Prioritizing Human Decisions: We must keep “humans in the loop” and give more weight to human input than algorithmic.
Part III – Explaining Robophobia
Professor Woods considers seven different explanations for robophobia.
Fear of the Unknown
Transparency Concerns
Loss of Control
Job Anxiety
Disgust
Gambling for Perfect Decisions
Overconfidence in Human Decisions
I’m limiting my review here, since the explanations for most of these should be obvious by now and I want to limit the length of my blog. But the disgust explanation was not one I expected, and a short quote from Andrew Woods might be helpful.
[T]he more that robots become humanlike, the more they can trigger feelings of disgust. In the 1970s, roboticist Masahiro Mori hypothesized that people would be more willing to accept robots as the machines became more humanlike, but only up to a point, and then human acceptance of nearly-human robots would decline.[227] This decline has been called the “uncanny valley,” and it has turned out to be a profound insight about how humans react to nonhuman agents. This means that as robots take the place of humans with increasing frequency—companion robots for the elderly, sex robots for the lonely, doctor robots for the sick—reports of robots’ uncanny features will likely increase.
For interesting background on the uncanny valley, see these YouTube videos and experience robot disgust for yourself: Uncanny Valley by Popular Science, 2008 (old, but pretty disgusting), and a more recent, detailed, and pretty good one by a popular twenty-something with pink hair, Why is this image creepy? by TUV, 2022.
Parts IV and V – The Cases For and Against Robophobia
Part IV lays out all the good reasons to be suspicious of delegating decisions to algorithms. Part V is the new counter-argument, one we have not heard before: why robophobia is bad for us. This is probably the heart of the article, and I suggest you read this part for sure.
Here is a good quote at the end of Part IV to put the pro versus anti-robot positions into perspective:
Pro-robot bias is no better than antirobot bias. If we are inclined both to over- and underrely on robots, then we need to correct both problems—the human fear of robots is one piece of the larger puzzle of how robots and humans should coexist. The regulatory challenge vis-à-vis human-robot interactions then is not merely minimizing one problem or the other but rather making a rational assessment of the risks and rewards offered by nonhuman decision-makers. This requires a clear sense of the key variables along which to evaluate decision-makers.
In the first two paragraphs of Part V of his article Professor Woods deftly summarizes the case against robophobia.
We are irrational in our embrace of technology, which is driven more by intuition than reasoned debate. Sensible policy will only come from a thoughtful and deliberate—and perhaps counterintuitive—approach to integrating robots into our society. This is a point about the policymaking process as much as it is about the policies themselves. And at the moment, we are getting it wrong—most especially with the important policy choice of where to transfer control from a human decider to a robot decider.
Specifically, in most domains, we should accept much more risk from algorithms than we currently do. We should assess their performance comparatively—usually by comparing robots to the human decider they would replace—and we should care about rates of improvement. This means we should embrace robot decision-makers whenever they are better than human decision-makers. We should even embrace robot decision-makers when they are less effective than humans, as long as we have a high level of confidence that they will soon become better than humans. Implicit in this framing is a rejection of deontological claims—some would say a “right”—to having humans do certain tasks instead of robots.[255] But, this is not to say that we should prefer robots to humans in general. Indeed, we must be just as vigilant about the risks of irrationally preferring robots over humans, which can be just as harmful.[256]
The concluding Part Three of my review of Robophobia is coming soon. In the meantime, take a break and think about Professor Woods’s policy-based perspective. That is something practicing lawyers like me do not do often enough. Also, it is of value to consider Andrew’s reference to “deontology,” not a word previously in my vocabulary. It is a good ethics term to pick up. Thank you, Immanuel Kant.
I’ve Escaped the e-Discovery Niche After 15 Years of Super-Specialization
Ralph Losey, January 25, 2022
After fifteen years of writing weekly blogs on e-discovery, I took three years off to focus on implementation of all those words. Now I’m back, back to where I once belonged. Writing again, but writing not just about my Big Law niche, the fun little AI corner that I had painted myself into, but about ALL of my interests in Law and Technology. That has been my home since I started legal practice in 1980 and, at the same time, started coding: mostly games, but also music software, MIDI creations, and law office technology. I am proud to recall that I was one of the first computer lawyers in the country. (Also one of the first to get in trouble with the Bar for my Internet website, FloridaLawFirm.com, which they thought was a television broadcast!)
Anyway, I put up with haggling from the Bar and with fellow attorneys who would tease me, the first nerd, and call me a “secretary” (ooh, how terrible) for having a keyboard on my desk. I kid you not! As a new associate, I used PCs in my law firm when they first came out. I have had them on my desk ever since, trying to work smarter. Not PCs necessarily, but computers of all kinds.
So I’m back to where I once belonged, in the great big world of technology law, making deals and giving advice. Oh yeah, I may still consult on e-discovery too, especially the AI parts that have so fascinated me ever since my Da Silva Moore breakthrough days. (Thank you, Judge Andrew Peck.) For my full story, some of which I had to hide in my Big Law role as a super-specialist, see: https://www.losey.law/our-people/25-uncategorized/108-ralph-losey Not many people know I was a qui tam lawyer too, and for both sides.
Wait, there is still more. I’ve left the best for last. I went back home, left Big Law for good, and am now practicing law with my son, Adam Losey, daughter-in-law, Cat Losey, and thirteen other crazy tech-lawyer types at Losey.law. Yes, that is the real domain name, and the name of the firm itself is Losey. So of course I had to go there. Check it out. Practicing law with my son is a dream come true for both of us. I’m loving it. It was lonely being the only tech whiz in a giant firm. Adam knows tech better than I do, is much faster in every respect (except maybe doc review with AI), and he and Cat are obviously a lot smarter.
To my long-time readers, thanks for your encouragement. I heard you and got back to my roots of general tech-law, back to blogging, and back home. To quote the Beatles’ funny “Get Back” song on their great LET IT BE album:
Rosetta (who are you talking about?) . . . Sweet Loretta Fart she thought she was a cleaner, but she was a frying pan . . . .
Stay tuned, because a new blog is coming at you soon. Feel free to drop me an email at Ralph at Losey dot Law. Humans only please. Robots not welcome (unless you’re from the future and don’t have weapons).
Ralph Losey is an Arbitrator, Special Master, Mediator of Computer Law Disputes, and Practicing Attorney, a partner in LOSEY PLLC. Losey is a high-tech law firm with three Loseys and a bunch of other cool lawyers. We handle projects, deals, and IP of all kinds all over the world, plus litigation all over the U.S. More details on Ralph's background are available on the firm's website.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this blog, and nothing here should be construed as such.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children, Eva Losey Grossman, and Adam Losey, a lawyer with incredible cyber expertise (married to another cyber expert lawyer, Catherine Losey), and best of all, husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
1. Electronically stored information is generally subject to the same preservation and discovery requirements as other relevant information.
2. When balancing the cost, burden, and need for electronically stored information, courts and parties should apply the proportionality standard embodied in Fed. R. Civ. P. 26(b)(2)(C) and its state equivalents, which requires consideration of the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit.
3. As soon as practicable, parties should confer and seek to reach agreement regarding the preservation and production of electronically stored information.
4. Discovery requests for electronically stored information should be as specific as possible; responses and objections to discovery should disclose the scope and limits of the production.
5. The obligation to preserve electronically stored information requires reasonable and good faith efforts to retain information that is expected to be relevant to claims or defenses in reasonably anticipated or pending litigation. However, it is unreasonable to expect parties to take every conceivable step or disproportionate steps to preserve each instance of relevant electronically stored information.
6. Responding parties are best situated to evaluate the procedures, methodologies, and technologies appropriate for preserving and producing their own electronically stored information.
7. The requesting party has the burden on a motion to compel to show that the responding party’s steps to preserve and produce relevant electronically stored information were inadequate.
8. The primary sources of electronically stored information to be preserved and produced should be those readily accessible in the ordinary course. Only when electronically stored information is not available through such primary sources should parties move down a continuum of less accessible sources until the information requested to be preserved or produced is no longer proportional.
9. Absent a showing of special need and relevance, a responding party should not be required to preserve, review, or produce deleted, shadowed, fragmented, or residual electronically stored information.
10. Parties should take reasonable steps to safeguard electronically stored information, the disclosure or dissemination of which is subject to privileges, work product protections, privacy obligations, or other legally enforceable restrictions.
11. A responding party may satisfy its good faith obligation to preserve and produce relevant electronically stored information by using technology and processes, such as data sampling, searching, or the use of selection criteria.
12. The production of electronically stored information should be made in the form or forms in which it is ordinarily maintained or in a form that is reasonably usable given the nature of the electronically stored information and the proportional needs of the case.
13. The costs of preserving and producing relevant and proportionate electronically stored information ordinarily should be borne by the responding party.
14. The breach of a duty to preserve electronically stored information may be addressed by remedial measures, sanctions, or both: remedial measures are appropriate to cure prejudice; sanctions are appropriate only if a party acted with intent to deprive another party of the use of relevant electronically stored information.