Sedona on Quality: a Must-Read Commentary

The Sedona Conference® Commentary on Achieving Quality in the E-Discovery Process is a must-read for anyone seeking to improve their skills in project management, especially in the core functions of search and review. One of its most important insights is that metrics and statistics are now indispensable tools of discovery. The importance of statistics to the law is actually an old insight that has taken a long time to materialize. The Sedona Commentary quotes the great jurist Oliver Wendell Holmes, Jr., who said in 1897:

For the rational study of the law the black letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.

The future has taken a lot longer to reach most lawyers than Justice Holmes expected. Statistics is still a stranger to most litigators, especially as a tool of discovery. But by the time you finish studying this new Commentary on Quality, you will see that statistics is a powerful tool of the here and now.

We can thank the hardworking Sedona Editors-in-Chief, Jason R. Baron and Macyl A. Burke, for this excellent new publication. They were assisted by Senior Contributing Editor Thomas Y. Allman and Executive Editors Richard G. Braman and Kenneth J. Withers, with input from Members of Working Group 1. This commentary is a project of The Sedona Conference® Working Group on Best Practices for Document Retention and Production.

Here is the opening paragraph of the Executive Summary, which should entice you, as it did me, to read more:

The legal profession is at a crossroads: the choice is between continuing to conduct discovery as it has “always been practiced” in a paper world — before the advent of computers, the Internet, and the exponential growth of electronically stored information (ESI) — or, alternatively, embracing new ways of thinking in today’s digital world. Cost-conscious clients and over-burdened judges are demanding that parties now undertake new approaches to solving litigation problems. The central aim of the present Commentary is to introduce and raise awareness about a variety of processes, tools, techniques, methods, and metrics that fall broadly under the umbrella term “quality measures,” and that may be of assistance in taming the ESI beast during the various phases of the discovery workflow process. These include greater use of project management, sampling, and other means to verify the accuracy of what constitutes the “output” of e-discovery. Such collective measures, drawn from a wide variety of scientific and management disciplines, are intended only as an entry-point for further discussion, rather than any type of all-inclusive checklist or cookie-cutter solution to all e-discovery issues.

The truth is, there can be no cookie-cutter solution or all-inclusive checklist for a subject as complex and dynamic as e-discovery. Yet, those of us who specialize in this area get asked for such easy-buttons all of the time. The best that can be hoped for is competency training and the gift of this writing: quality control procedures.

Critique of the Five Reasons Stated for Quality Control

Why is quality so important to the e-discovery process? The obvious answer is to avoid mistakes and the sanctions that can come with mistakes. The Sedona Commentary agrees that such risk management is the primary factor, but then, at page one, lists four other reasons why quality control is important:

  1. “Failure to employ a quality e-discovery process can result in failure to uncover or disclose relevant evidence which can affect the outcome of litigation.” I call this the “adequate recall factor” where you find enough of the truth for justice to be done in a case.
  2. “An inadequate e-discovery process may allow privileged or confidential information to be inadvertently produced.” I call this the “adequate confidentiality factor” where you protect enough of the confidential information to meet your client’s specifications. Some may be quite paranoid regarding disclosure of any privileged or confidential information, and willing to spend vast sums of money to avoid it. Others may not care as much if a few attorney-client emails, out of thousands, slip through the cracks and are more willing to rely upon the claw-back protections of Evidence Rule 502 to save money.
  3. “Procedures that measure the quality of an e-discovery process allow timely course corrections and provide greater assurance of accuracy, especially of innovative processes.” I consider this to be a secondary “adequate recall factor,” which allows for quality adjustment of search protocols and other processes as results are measured, new facts uncovered, issues evolve, and insights gained.
  4. “A poorly planned effort can also cost more money in the long run if the deficiencies ultimately require that e-discovery must be redone.” I call this the “do-over avoidance factor,” where if you do something right the first time, you do not have to pay to do it again. This, for me, is just one aspect of a larger “money-savings factor” that can result from quality processes.

I agree that all five factors are important, but I am inclined to think that the economic savings that can result from quality control are just as important to the e-discovery process as risk management, although perhaps not as obvious. As Justice Holmes said over 100 years ago, we lawyers of his future must not only be men and women of statistics, but also “masters of economics.” The savings can be realized not only by avoiding costly do-overs, as the Commentary points out, but also by increasing culling quantity and review speed. Quality controls make it possible to significantly reduce the amount of ESI to review, and to do so in a manner that is legally defensible. This culling process, as shown generally in the diagram below, is key to dramatic reduction of the costs of e-discovery.

[Diagram: funnel-style filtering showing the culling of ESI volume prior to review]

The process of intelligent reduction of data size prior to review is an essential component of what is now being called “early case assessment.” One of the Senior Editors of the Commentary on Quality, Jason R. Baron, wrote about this type of case assessment culling in another short paper he wrote with Ronni D. Solomon for use at a Sedona Conference Institute CLE entitled Bake Offs, Demos & Kicking the Tires. In spite of this title, it is well worth reading. See especially their Tip 5, where they talk about utilizing keyword “black lists” to reduce ESI size before full review. They also mention the critical need to see the results of keywords, not just hit totals, in order to make an intelligent choice of effective keywords. It is no longer an acceptable practice to choose keywords blindly. That results in weak culling and thus excessive review. We need to see the results of keyword filtering to be able to aggressively reduce the volume of ESI and still maintain quality.

This type of early case assessment culling is critical because the review stage is by far the most expensive step in e-discovery. Anything that cuts the amount of ESI to be reviewed has a direct, substantial impact on the bottom line. Of course, better, faster, and more efficient review of the ESI can also reduce costs. See Bake Offs, Tip 3, and the discussion of clustering tools. With proper quality controls, the cost savings from culling can be realized without sacrificing the other four goals stated in the Commentary on Quality.
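To see why culling dominates the bottom line, consider a back-of-the-envelope calculation. The review rate, hourly cost, and culling percentage below are hypothetical round numbers of my own choosing, not figures from the Commentary:

```python
# Illustration only: how pre-review culling changes estimated review cost.
# All figures are hypothetical; actual rates vary widely case by case.

def review_cost(docs, docs_per_hour=50, rate_per_hour=200):
    """Estimated attorney-review cost for a given number of documents."""
    return docs / docs_per_hour * rate_per_hour

total_docs = 1_000_000
culled_docs = int(total_docs * 0.10)  # assume culling leaves 10% for review

before = review_cost(total_docs)
after = review_cost(culled_docs)
print(f"Review cost without culling: ${before:,.0f}")
print(f"Review cost after culling:   ${after:,.0f}")
print(f"Estimated savings:           ${before - after:,.0f}")
```

Even with generous assumptions about review speed, the arithmetic is linear: every document removed before review is money not spent.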

In today’s economy, the money-saving aspects of quality control are just as important as risk management, and, in my view, more important than the secondary benefits of “adequate recall factors” and “confidentiality factors.” E-discovery costs must be significantly reined in for the civil justice system to avoid the danger of replacement by private arbitration, or worse, by self-help. For that reason, we must leave the old paradigm of total recall in favor of a more realistic, cost-controlled view. As I often say these days, how much of the truth can a particular dispute afford in view of the constraints of proportionality and Rule 26(b)(2)(C)?


This is the public comment version of the Sedona Commentary on Quality and it is my hope that the final version will place a greater emphasis on the economic savings possible with quality controls in an early case assessment environment. As Justice Holmes said: “It is the province of knowledge to speak and it is the privilege of wisdom to listen.”

Four Guiding Principles Behind the Commentary

The Executive Summary also contains an explanation of the four guiding principles behind the commentary:

Principle 1. In cases involving ESI of increasing scope and complexity, the attorney in charge should utilize project management and exercise leadership to ensure that a reasonable process has been followed by his or her legal team to locate responsive material.

Principle 2. Parties should employ reasonable forms or measures of quality at appropriate points in the e-discovery process, consistent with the needs of the case and their legal and ethical responsibilities.

Principle 3. Implementing a well thought out e-discovery “process” should seek to enhance the overall quality of the production in the form of: (a) reducing the time from request to response; (b) reducing cost; and (c) improving the accuracy and completeness of responses to requests.

Principle 4. Practicing cooperation and striving for greater transparency within the adversary paradigm are key ingredients to obtaining a better quality outcome in e-discovery. Parties should confer early in discovery, including, where appropriate, exchanging information on any quality measures which may be used.

I was pleased to see that cooperation was included as the fourth principle behind quality control. As I explained in my blog, There Can Be No Justice Unless Lawyers Maintain High Ethical Standards, cooperation is an ethical imperative that will necessarily result in substantial cost savings by the avoidance of litigation churning. The Commentary correctly notes that cooperation can also cause a significant increase in quality.

All four of these Principles behind Quality are, in turn, based on the fourteen Sedona Principles, but especially on Principle 11, which states:

A responding party may satisfy its good faith obligation to preserve and produce relevant electronically stored information by using electronic tools and processes, such as data sampling, searching, or the use of selection criteria, to identify data reasonably likely to contain relevant information.

For that reason, much of the Commentary on Achieving Quality in the E-Discovery Process is focused on electronic tools and processes, including sampling and other search methods.

The Commentary Is Not A Recipe Book

This Commentary does not purport to give specific advice on how to maintain quality. So, if you are looking for forms and recipes, you will be disappointed. Sedona was quite correct to so limit this project. Here is their explanation, with which I fully agree:

This Commentary is not intended to serve as a comprehensive roadmap covering all possible uses of quality measures and metrics throughout the e-discovery process. The creativity of the vendors and the bar will ensure that concrete applications of quality techniques will be advanced. Nor have we any bias towards particular methods, tools or technologies or a point of view that asserts that sampling or other types of quality measures are invariably required in every type of litigation. Indeed, the drafters believe that the solution to problems created by scale are not solved by technology per se, which is merely a tool, but by better use of team leader skills, project management, and quality measures.

Although the Commentary discusses many tools including my favorite — sampling — it does not set out a step-by-step process for handling e-discovery projects. There is too much variability in cases and facts for that, and frankly, it is still too early for any such best practices specifications. Electronic discovery project management is still in its early stages and many of us who have developed methods tend to keep them close to the vest for competitive purposes. Plus, it is not Sedona’s role to endorse one particular technique over another, but rather to address issues on a high level and facilitate further study and dialogue. If you read this Commentary with this expectation, you will not be disappointed. Indeed, you will find that it contains many valuable treasures and insights. As Oliver Wendell Holmes, Jr., said: “A moment’s insight is sometimes worth a lifetime’s experience.”

Oliver Wendell Holmes, Jr., was the class Poet at Harvard in 1861

Key Elements of Successful Project Management

The Commentary lists seven elements that Sedona considers critical for successful project management. They do not claim that the list is exhaustive, nor do they purport to rank them in importance. These elements are:

1. Leadership.
2. Tailoring.
3. Expertise.
4. Adaptability.
5. Measurement.
6. Documentation.
7. Transparency.

Although Sedona will not rank these seven, I will. I think that Leadership and Measurement are the two most important factors and the two most difficult. A close third is Documentation, which is key to protection, especially if any of the million decisions made in the course of a project prove to be a mistake. The courts do not expect perfection from attorneys, only expertise, reasonableness, and good faith. Good documentation of the process can make it easier to recreate what happened. It will help you to convince the court that you thought you were doing the right thing at the time, even though later events may suggest otherwise. Personally, I do not like the documentation process at all and find it to be as odious as filing. But I know from experience just how valuable it can be, not only to later show what you did, but also in real time to help you to keep track of what you are doing.

Leadership is obviously important. Someone has to be in charge of a project and it needs to be an attorney with special expertise. It is a mistake for the trial attorney to try to fill this role because they probably do not have the necessary skills or time. But it is also a mistake to delegate the job to a non-lawyer. I continue to believe that lawyers must remain masters of the discovery process and they abdicate their responsibilities when they over-delegate to vendors. The Sedona Commentary (at page 7) agrees that attorneys should retain the “Team Leader” position and should only look to outside vendors for competent assistance, which, it correctly states, is “often essential.”

Measurement

The importance of “Measurement” may not be as obvious as Leadership, but the Commentary does a good job of explaining just how indispensable it is to quality control. It lists five types of quality measurements that are especially useful in e-discovery:

1. Judgmental Sampling
2. Independent Testing
3. Reconciliation Techniques
4. Inspection to verify and report discrepancies
5. Statistical Sampling

Of these five measures of quality, the two types of sampling are, in my opinion, the most important. The two types of sampling are defined and explained in detail in Appendix A: Sampling 101 for the E-Discovery Lawyer:

Judgmental Sampling: Sampling performed on a sample set that was selected based on the judgment of the person doing the sampling. … A common example in the e-discovery context would be keyword searching itself, which is a more-or-less informed technique universally used by lawyers and legal professionals to produce a sample slice of a given ESI universe of data, based on the a priori judgment of those selecting the keyword terms.

Statistical Sampling: Probability sampling, or random sampling, is a sampling technique in which the probability of getting any particular sample may be calculated. … A random sample is one chosen by a method involving an unpredictable component.

In the body of the text at page eleven, the Commentary explains that “statistical sampling can serve as a check on the effectiveness of search terms and other automated tools in identifying responsive information.” You can, for instance, use random sampling to test small subsets of the data selected by judgmental sampling, such as keyword culling. Then, by quick reviews of random sample subsets, you can determine the effectiveness of the keywords to identify responsive information. You can then adjust your keywords accordingly and try the new search cull terms again on a new sample. Here is how the Commentary describes this process at page thirteen: “Trial or pilot runs of combinations of words may be tested in an iterative fashion to extrapolate the effectiveness of the chosen set.”
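As an illustration of how statistical sampling can check the effectiveness of keywords, here is a minimal sketch. The sample size, the normal-approximation confidence interval, and all of the numbers are my own assumptions, not anything prescribed by the Commentary:

```python
# Illustrative only: draw a random sample from a keyword-hit set, then
# estimate what fraction of the hits are actually responsive.
import math
import random

def sample_for_review(hit_ids, n=100, seed=42):
    """Draw a random sample of documents from a keyword-hit set for quick review."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(hit_ids, min(n, len(hit_ids)))

def estimate_precision(responsive, sampled):
    """Point estimate and ~95% margin of error (normal approximation)
    for the fraction of keyword hits that are actually responsive."""
    p = responsive / sampled
    moe = 1.96 * math.sqrt(p * (1 - p) / sampled)
    return p, moe

# Pretend the keyword cull returned 10,000 documents; sample 100 of them:
sample = sample_for_review(list(range(10_000)), n=100)

# Suppose reviewers found 30 responsive documents in that sample of 100:
p, moe = estimate_precision(30, 100)
print(f"Estimated keyword precision: {p:.0%} ± {moe:.0%}")
```

If the estimated precision is low, the keywords are adjusted and the cycle repeats on a fresh sample, which is the iterative testing the Commentary describes.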

You can use this iterative method to increase the culling rate of ESI to a size where cost projections of final review expenses finally come within the project budget. This use of both judgmental and random sampling methods, coupled with cost estimations, is close to the kind of early case assessment quality control procedure that I have developed to control e-discovery expenses in a legally defensible manner. I would tell you more, but it gets extremely complicated, varies from case to case, and frankly, leads into proprietary territory.

Applying Quality Measures

Part Three of the Commentary, entitled “Applying Quality Measures in E-Discovery,” is probably the section that will be of most interest to practitioners. It divides the analysis into two segments: the Data Collection Phase and the Review and Production Phase. The Data Collection Phase is examined in three parts: “Building on Traditional Approaches to Document Collection; Applying Measures of Quality to the Data Collection Process; and, Best Practice Guidelines.”

The Best Practice Guidelines at page fifteen begin with the following good advice:

The selection, organization and filtering of ESI through the use of a search protocol is a critical element in reducing the volume of information to be collected and thus the time and cost of collection. In addition, keyword search techniques are well known and may be used for this purpose. More advanced technologies have emerged that employ complex algorithms for ESI filtering and organization and may, in some cases, be useful at the collection stage. Regardless of the technology chosen, all filtering methods require a well-defined process. Without these basic steps, the use of any filtering technology will likely result in gross over- or under-inclusion of responsive ESI. The process includes several steps:
• Understanding the composition of source ESI;
• Defining the goals of the filtering;
• Applying the filter and testing the results.

On this last filtration/testing step the Commentary makes the basic, yet important point missed by most practitioners still using negotiated keyword searching, that:

The filtering process should be iterative and needs to be repeated until the desired goals are met. It is not sufficient to blindly run a filtering tool and trust that it is achieving the desired results. One must evaluate the outcome of the search, looking to identify errors in how the filter rules were set up or applied. Key metrics, such as the number of included or excluded documents by keyword or filtering criteria, can be used to evaluate the outcome. Examining the high and low number of search hits can uncover issues with how the search was constructed, the choice of terms, or even issues with the data.
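The per-keyword hit metrics the Commentary recommends can be sketched in a few lines. This is only an illustrative example with made-up documents and terms, not part of the Commentary itself:

```python
# Illustrative only: tally per-keyword hit counts and flag terms whose
# counts suggest a problem (here, zero hits).
from collections import Counter

def keyword_hit_counts(documents, keywords):
    """Count, per keyword, how many documents contain it (case-insensitive)."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for kw in keywords:
            if kw.lower() in text:
                counts[kw] += 1
    return counts

docs = [
    "Email re: quarterly budget and contract review",
    "Contract draft attached for your review",
    "Lunch on Friday?",
]
keywords = ["contract", "budget", "merger"]

counts = keyword_hit_counts(docs, keywords)
for kw in keywords:
    n = counts[kw]
    flag = "  <- zero hits: reconsider this term" if n == 0 else ""
    print(f"{kw}: {n} hit(s){flag}")
```

A term with zero hits may be misspelled or mismatched to the data, while a term that hits nearly everything is culling nothing; either way the metric, not intuition, exposes the problem.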

No one is good enough to pick good keywords off the top of their head, much less negotiate a good set of keywords. Words are, after all, so malleable, and differ tremendously from person to person. As Oliver Wendell Holmes said:

A word is not a crystal, transparent and unchanging, it is the skin of a living thought and may vary greatly in colour and content according to the circumstances and time in which it is used.

Yet keyword search is still the practice used by most lawyers today and is often ordered by the court. American Family Mutual Ins. Co. v. Gustafson, 2009 WL 641297 at *3 (D. Colo. March 10, 2009) (“the parties shall forthwith meet, confer, and agree upon the search terms”). This is a mistake. The “skin of living thoughts” is not so easily snared. Testing of proposed search terms should always be required. Otherwise, your review will be haphazard at best, and a complete waste of time and money at worst. As Oliver Wendell Holmes also said: “Lawyers spend a great deal of their time shoveling smoke.”

Quality controls in the review and production phase are also examined at length in the Commentary. The discussion includes: automated methods to reduce the initial burden of review; “clawback” agreements, Rule 502, and reliance on automated methods; quality control guidelines for responsiveness and privilege; and, final quality checking at production.

Conclusion

The Commentary conclusion begins with a quote I like a lot by William Gibson: “The future is already here – it’s just not evenly distributed yet.” I know many people in this field feel like that is the story of their life. Certainly there is a wide variation in the U.S. and around the world in how the discovery of written evidence is conducted. The Commentary ends with these fine words of wisdom:

In the end, cost-conscious firms, corporations, and institutions of all kinds intent on best practices, as well as over-burdened judges, will demand that parties undertake new ways of thinking about how to solve discovery problems — including employing better project management and better measures of quality to achieve optimum results, as outlined here. The technical and management-orientated quality processes discussed above need to be incorporated into every trial lawyer’s continuing education and daily practice. These processes also dovetail with, and support The Sedona Conference® Cooperation Proclamation — which calls for incorporation of the best thinking of “disciplines outside the law” to achieve the goal of the “just, speedy, and inexpensive” determination of every action. In the end, striving to attain a quality outcome in the conduct of litigation is consistent with the highest ethical calling of the legal profession.

I agree with these noble aspirations, but think it is unrealistic to expect that these processes will, or even should, “be incorporated into every trial lawyer’s continuing education and daily practice.” Not every trial lawyer will be interested in random sampling, iteration, linguistic analysis, the latest concept-search engines, ESI architecture, or leadership of complex e-discovery projects, not to mention the ever-changing technologies that create and store electronic information. I agree that all trial lawyers should have some exposure to this and to the idea of quality control, in the same way that all lawyers should have some exposure to antitrust law. But I doubt very much that the subjects in the Quality Commentary are going to be part of “every trial lawyer’s daily practice” anytime soon (if ever). Instead, they will be front and center in the practice of attorneys who specialize in e-discovery.

In the inconsistent future here-now that I see, e-discovery specialists will work closely with trial lawyer specialists. Teamwork will be common, even on small cases. Some trial lawyers may have the time and inclination to handle e-discovery themselves, especially in less complicated situations. But, for the foreseeable future, they are likely to be few and far between. As William Gibson says: “Time moves in one direction, memory in another.” Instead, most trial lawyers will work with, not replace, the e-discovery lawyers. This could be a very small team of just two persons, like Perry Mason and Paul Drake, with a clear division of labor and skills, or, in the largest cases, it could be a team of many lawyers, paralegals, technicians, engineers, and information scientists.

The new age of information is too complicated to continue the old practices and traditions where both trial and discovery skills were combined and held by all trial lawyers. It worked when the documents were paper and few in number. But those days are nearly gone. Now we have ephemeral electronic paper that throws itself away when you are not looking. We have needles of relevant evidence hidden in vast electronic haystacks that are larger and more complicated than you can imagine; haystacks that daily change and grow. As our best experts tell us, search is hard. To use Gibson’s words:

I don’t have to write about the future. For most people, the present is enough like the future to be pretty scary.

As a consequence, document discovery is far more complicated than it was before and requires special skills to be done right. It is time for the profession to change. As Justice Holmes said:

I find the great thing in this world is not so much where we stand, as in what direction we are moving — we must sail sometimes with the wind and sometimes against it — but we must sail, and not drift, nor lie at anchor.


The basics of e-discovery can and should be taught to all trial lawyers. Since they are generally a very smart group, they can learn the basics, if they will take the time and effort needed to do so. (So far, not many have been inclined to make this effort. Most seem to hate e-discovery, but this will change soon.) These basic skills, once learned, can suffice for many small cases, with just an occasional assist from a 21st Century version of Paul Drake. But the larger, more important cases will need the skills of a specialist; skills such as those outlined in The Sedona Conference® Commentary on Achieving Quality in the E-Discovery Process.

In today’s world of dispute resolution, the client with a sophisticated matter is better served by specialized services with a division of labor. The e-discovery lawyers will possess the skills and quality control techniques discussed in this Commentary, as well as the many other skills discussed in the many other Sedona Commentaries and other books and articles on the subject. These skills take time to learn and time to practice and maintain. There is not enough time to also learn the many, very different skills needed by a trial lawyer. As a result, in complex cases the discovery lawyers will go through the electronic maze to find the facts and their trial lawyer partners will present them to the court and argue their significance.

Litigation is already a team effort in most law firms. This trend will continue to grow and the clients will be better served because of it. Far from being more expensive, as you might think because more people are involved, the discovery and trial lawyer team will save money. The Fannie Mae-type cases of outrageous e-discovery expenses only happen when trial lawyers dabble in e-discovery and make huge mistakes. A true specialist will not only do things right the first time, with quality, but do them quicker and less expensively. In this way, the e-discovery teams of the future will help preserve our system of justice by making discovery affordable again.

11 Responses to Sedona on Quality: a Must-Read Commentary

  1. Another super analysis on a fantastic and welcome commentary. I’m so pleased to see The Sedona Conference® and EDRM both recently recognize the necessity of project management techniques for consistent, defensible discovery processes. There seems to have been a growing consensus in this direction by practitioners, or maybe I was just noticing. Having quickly recognized the need to establish workflows and metrics to contain costs, I’ve been working to incorporate project management into my firm’s discovery processes for the last few months and often felt out in the wilderness when looking for discovery-specific project management information and resources. I suspect those will grow now that The Sedona Conference®, EDRM, and you have pointed everyone in that direction. Kudos!

  2. Thank you SO much for this article. It’s excellent.

    “One of its most important insights is that metrics and statistics are now indispensable tools of discovery.”

    This is interesting, because I fear that the tendency will be to focus on metrics at the expense of quality. Sure, numbers are shiny and sparkly, but if at the end, you’ve turned over a client’s trade secret or smoking gun, that horse ain’t going back in the barn no matter what sort of protective order the parties have devised. Not sure we disagree here, but I firmly believe we have to be careful when lawyers try to tell clients that the truth is in numbers they really don’t understand.

    Wot’s a Bayesian filter? Why do I want to know about matrix factorization? Wot’s a bigram? What on EARTH does all of this have to do with Twitter?

    How easy is it to convince people you’re right when they have no idea what you’re talking about? Pretty darned easy! So, I think the emphasis needs to be on something lawyers can be taught, but don’t yet understand: good old-fashioned project management.

    E-Discovery is NOT the practice of law and ought to be professionalized as a discipline. The tale litigators need to tell on either side will not be served by “barely good enough” practices that get past a less-than-savvy judge. This means taking the model apart, and not too many people are willing to do that. But, I believe the ones who ARE will be well ahead of the pack.

    That, in part, is why I continue to quibble about the draconian culling advocated above. If what these numbers are telling people is that only 10% of their ESI is helpful, that’s a great sales tool, but I think it’s misleading. To a computer, a TB of data isn’t a big deal and cloud computing is already creating an environment where vendors can scale with *relative* ease. Besides, no one reviews a TB at a time anyway: the bottleneck is the human, not the computer.

    Sure, it looks great on paper, but if the goal is to tell the client’s story, massive pre-review culling is like telling the story of Hansel & Gretel without mentioning that the birds ate the crumbs. It’s like leaving out the balloons in Up.

    “Electronic discovery project management is still in its early stages and many of us who have developed methods tend to keep them close to the vest for competitive purposes.”

    Yup! But, then again, the software industry has long published all of this stuff, thank goodness! :o))

    Better discovery solutions and best practices are not long in coming.

    And with that, I’m back!

  3. Gabe Acevedo says:

    Great post as always Ralph. Look forward to seeing you in NYC at the EMC conference tomorrow and Fri.

    Regina, I disagree with your statement about e-discovery not being the practice of law. The project management aspect may not be considered practicing law, but the review process is still discovery. It does not matter whether an attorney is reviewing online or on paper, that is still part of the practice of law and would require a high level of supervision if not performed by an attorney.

    Thankfully though, project management is becoming more and more professionalized every day, just not as fast as some of us would like.

  4. John Turner says:

    First I must declare that I come from a consulting and technology background, not a legal background, and hence my perspective may be a little different. Project and Program Management (there is a difference) has been a key part of all my activities for as long as I can remember. It is interesting to see this focus now coming into eDiscovery.

    From a process perspective I have always found this project management model useful:

    Evaluate
    Understand
    Document
    Communicate
    Execute
    Verify

    The seven key elements of project management success listed by Sedona would not feature high in a formal project management education, although many of them could be mapped to elements of formal project management. Of more value is a comparison to the most common reasons why projects fail. A key analysis was carried out many years ago by the Standish Group. They studied failure in many projects and came up with the following weighted recommendations for project success:

    SUCCESS CRITERIA (weighted points, totaling 100)
    1. User Involvement – 19
    2. Executive Management Support – 16
    3. Clear Statement of Requirements – 15
    4. Proper Planning – 11
    5. Realistic Expectations – 10
    6. Smaller Project Milestones – 9
    7. Competent Staff – 8
    8. Ownership – 6
    9. Clear Vision & Objectives – 3
    10. Hard-Working, Focused Staff – 3
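    For illustration only (the per-criterion ratings below are hypothetical, not from the Standish study), the weighted criteria lend themselves to a simple scoring sketch:

```python
# Standish-style weighted success criteria (weights from the list above).
WEIGHTS = {
    "User Involvement": 19,
    "Executive Management Support": 16,
    "Clear Statement of Requirements": 15,
    "Proper Planning": 11,
    "Realistic Expectations": 10,
    "Smaller Project Milestones": 9,
    "Competent Staff": 8,
    "Ownership": 6,
    "Clear Vision & Objectives": 3,
    "Hard-Working, Focused Staff": 3,
}

def success_score(ratings):
    """Weighted score out of 100, given per-criterion ratings in [0, 1]."""
    return sum(WEIGHTS[name] * ratings.get(name, 0.0) for name in WEIGHTS)

# Example: a project strong on involvement and support, unrated elsewhere.
ratings = {"User Involvement": 1.0, "Executive Management Support": 1.0}
print(success_score(ratings))  # → 35.0
```

    A project rated perfectly on every criterion scores 100, so the score reads directly as a percentage of the weighted success factors in place.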

    It is gratifying to see that this aligns somewhat with the Sedona recommendations.

    I would also recommend anyone with an interest to follow up with the Project Management Institute (PMI), who have been at the forefront of project management education for many years.

    I will be following this trend with great interest.

  5. John Turner says:

    Ralph asked me to add to my previous comments and to go into a little more detail. I hope you will not find I have gone too deep. Please remember that my comments derive from my Program and Project management perspective first, and my eDiscovery perspective second.

    A Program and Project Management Perspective

    There are many different approaches to Program and Project management, but they have much in common with regard to content. It is with regard to process that they differ. The reason for this is that Construction Projects differ from IT Projects, differ from eDiscovery Projects, differ from Technology Driven Transformational Programs differ from Marketing Programs. The same things need to be done in each in order to produce the desired result, just in different ways.

    First I want to differentiate, in a very simple manner, between Programs and Projects. This is important because they are managed in different ways, and using the incorrect methodology could result in issues. The commonly held view is that a Program is just a collection of Projects – projects which may have dependencies such that the deliverables from one project may not be particularly useful, or even usable, without the deliverables from other projects. However, that is more the What. I will focus on the Why, and at the same time try to align with what is happening in eDiscovery. Very simply, Programs focus on Outcomes, while Projects focus on Outputs. To align with the eDiscovery discussion, this means that the Lead Attorney is a Program Manager rather than a Project Manager. The Lead Attorney focuses on a predictable outcome for his client, and then puts in place a collection of activities (which could be managed as Projects), each of which has defined deliverables (outputs) to deliver that outcome.

    Further confirmation of this also comes from an understanding of the Governance Models of Programs and Projects (I don’t even know if Law Firms Govern their Projects?). While Project Managers are normally accountable to a Sponsor, who could be a Program Manager, Program Managers are normally accountable to a cross functional Steering Committee. In terms of eDiscovery this could be a mix of Managing Partner or Partner Committee, as well as GC, CEO and Board of the Corporation.

    If the Program/Project model seems to fit the eDiscovery process, then the methodology should also fit and so the Legal World could learn a great deal overnight from fellow professionals who are key to Corporate success.

    Let me deal with Project Management first, as there is greater consensus around the commonly accepted Project Management Institute (PMI) model. This has eight main focuses:

    • Scope
    • Time (schedule)
    • Cost
    • Quality
    • Information/Communications
    • Contracts/Procurement
    • People
    • Risk

    The first four are particularly important because they are interrelated in what is often termed the Project Kite. Change one and one or all of the others also have to be changed. If the schedule needs to be reduced, then scope can be reduced, resource can be added (hence increasing cost), or quality can be reduced. If quality is a constant, then there are only three variables that come into play: Scope, Time, and Cost. However, if any compromises are made with these three, then Quality will be adversely affected.

    Hidden within these eight functional areas are such things as Issues Management and Task Management which I will now break out within the wider context of Program Management.

    A very useful model of Program Management focuses on six streams, which are as follows:
    • Leadership and Communication
    • Governance
    • Business Benefits (including metrics)
    • Resource (human and other)
    • Contracts/Procurement
    • Integration

    In a small program it is possible for one person to manage all streams. In a mid-sized program a single resource may manage more than one stream, but in a large program it is normal to have a manager for each stream of activity, sometimes with multiple staff in support.

    Each of these streams of activity can be broken down further; for example, Governance comprises:
    • Financial Management
    • Impact Management – the combination of issues, risks and change management
    • Status Reporting (as differentiated from Progress Reporting, which is handled by the Integration Stream so that checks and balances are integrated within the model to identify mis-communication, deliberate or otherwise)
    • Quality Management
    • Methods, Tools and Techniques (including infrastructure)
    • Project Audit functions.

    Each of these six streams goes through three main phases:
    • Design
    • Establish
    • Manage

    In addition Manage divides into three iterative phases:
    • Achieve
    • Assess
    • Transition

    I do not mean to take you through Program and Project Management 101, but hopefully you can see that others before you have made the mistakes, learned from them, and put together a model that works, which I believe can be readily adapted for eDiscovery.

    (Note: I can go into as much or as little detail on each stream as anyone requires, but I have kept it at a minimal level for now.)

    Let me return to the Commentary.

    I do have an area of disagreement, but it is small. It deals with the difference between Quality Assurance and Quality Control. For me Quality Assurance means building quality into the process so that, ultimately, Quality Control is not required. This was always Deming’s focus especially when he talks about the four “pillars” of his System of Profound Knowledge:

    • An Appreciation for a System
    • Some Knowledge of the Theory of Variation (statistical theory) – and in particular the difference between special and common cause variance
    • Theory of Knowledge (which I often refer to as the ability of Management to predict and measure the outcome of their actions)
    • Knowledge of Psychology (or how management will stop people doing their best)

    Deming would have looked at eDiscovery as a system, much in the same way as the human body is a system, with multiple processes supporting, interacting with, and impacting the success of the whole (note the similarity with a Program). To Deming, quality control was a waste. Crosby reiterated this in his book “Quality is Free”. Why implement a process to filter out what is bad (quality control) if you can implement a process where everything is good?

    This also aligns with the four Principles, and my comments on them:

    Principle 1 – I am in broad agreement with the exception that I think that the active leader is a Program and not a Project manager and should use the appropriate model.

    Principle 2 – Again, I am in broad agreement, but would emphasize the use of a Quality Assurance approach rather than a Quality Control approach.

    Principle 3 – In agreement. This speaks to Quality Assurance in building a robust process and to learning. A good technique to apply, and one that Deming and others have advocated, is the Plan, Do, Study, Act Cycle. A good reference for this and other Deming related techniques focused on application is “Fourth Generation Management” by Joiner. I would also advocate obtaining an understanding of special cause variance and common cause variance in the context of statistical process control which can aid the interpretation of metrics and so make sure that follow up action is valid. Look up Deming’s “funnel experiment” to understand this.
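    As an illustrative sketch of the special/common cause distinction mentioned above (the daily review rates are made up, not from the Commentary), a Shewhart-style control chart puts three-sigma limits around the process mean; only points outside those limits warrant a hunt for a special cause:

```python
import statistics

# Hypothetical daily documents-reviewed-per-reviewer rates for a stable process.
rates = [480, 510, 495, 505, 470, 520, 500, 490, 515, 485]

mean = statistics.mean(rates)        # process center line
sigma = statistics.stdev(rates)      # sample standard deviation
ucl = mean + 3 * sigma               # upper control limit
lcl = mean - 3 * sigma               # lower control limit

def cause(rate):
    """Outside the limits suggests a special cause worth investigating;
    inside, the variation is common cause, and only changing the process
    itself (not blaming the reviewer) can improve it."""
    return "special" if rate > ucl or rate < lcl else "common"

print(cause(700), cause(500))
```

    This is the lesson of Deming’s funnel experiment: reacting to common cause variation as if it were special cause (tampering) makes the process worse, not better.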

    Principle 4 – Now I get excited when meet and confer is referred to as a “Process”.

    As an aside, rather than read Deming’s books I suggest you start with Mary Walton’s “The Deming Management Method”, and also Brian Joiner’s book referenced above. Deming can be quite heavy as a first-time read. Another book that addresses various quality techniques, as well as sampling techniques, is “Quality Improvement Tools and Techniques” by Mears.

    Let’s talk a little about metrics and why they are needed. The collection of metrics does nothing but increase costs unless they are analyzed to improve the process.

    First of all I want to differentiate metrics from scalars. Scalars are measures, but generally are just a count. They keep on increasing, maybe not at the same rate. Number of documents reviewed is a scalar. Number of documents reviewed per reviewer is a metric. This is an important differentiation, as different conclusions can be derived from Number of Documents Reviewed when we start to analyze it in terms of other factors (which is what makes it a metric) like number of reviewers, number of pages, per hour, privilege or other type of review, etc. An increase in Number of Documents Reviewed by itself tells us very little unless the context is explained. So don’t lose the context for why metrics are important:
    • Data with context leads to Information
    • Information with Understanding leads to Knowledge
    • Knowledge with application leads to Competitive Advantage
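    The scalar-versus-metric distinction above can be shown with a toy sketch (all names and numbers are hypothetical):

```python
# Scalar: a raw count that only ever grows, and says little on its own.
docs_reviewed = 12_000

# Metrics: the same count placed in context, so it supports conclusions.
reviewers = 10
hours = 40
docs_per_reviewer = docs_reviewed / reviewers       # → 1200.0
docs_per_reviewer_hour = docs_per_reviewer / hours  # → 30.0
```

    The division is trivial; the point is that the denominator supplies the context that turns data into information.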

    Second, it is generally accepted that there are three different types of metrics, of which the first two are the most common:
    Results Metrics – the number of documents reviewed at the end of each day
    Process Metrics – the current rate at which documents are being reviewed
    Predictive Metrics – the effect on document review of a non-linear review process.

    It is hence important to collect the correct metric at the correct time in the process.

    There is a great deal published on the establishment of a system of metrics. This is normally done through a Balanced Scorecard type of approach, together with a presentation of process logic such as a Benefits Logic Analysis, which will make sure that metrics do not conflict (Number of Documents Reviewed drops because page count is increasing – a page count metric could be more appropriate), and that process metrics, which are available sooner, complement results metrics. The best book on the subject, in the context of a corporation, is “The Balanced Scorecard” by Kaplan and Norton. Additional background reading is “Intellectual Capital” by Stewart and “Working Knowledge – How Organizations Manage What They Know” by Davenport and Prusak.

    Metrics are a huge area in themselves. They can be very expensive to collect. Ideally they should be built into the process so that they are collected automatically. Once the correct metrics are in place and reports are flowing, the two main benefits will be the time saved by not asking what is happening (and hence disrupting productive work), and the ability to see what can be improved (or not) by incremental changes to the process.

    The changes in eDiscovery require changes in approach, which in turn demand organizational changes. A very good model for organizational change first of all focuses on why organizational change fails. (“Leading Change: Why Transformation Efforts Fail;” John P. Kotter; Harvard Business Review; March-April, 1995). This has subsequently been published in two books by Kotter, “Leading Change” and “The Heart of Change”. The model, which is the central theme of the books as well as the paper, suggests that the main reasons for failure are:
    • Not Establishing a Great Enough Sense of Urgency.
    • Not Creating a Powerful Enough Guiding Coalition.
    • Lack of a Vision.
    • Under-communicating the Vision by a Factor of Ten.
    • Not Removing Obstacles to the New Vision.
    • Not Systematically Planning for and Creating Short-Term Wins.
    • Declaring Victory Too Soon.
    • Not Anchoring Changes to the Corporation’s Culture.

    This brings us back to Quality in eDiscovery and the need to deploy a process that can produce the desired quality in a reproducible manner. All “Quality Approaches”, whether from Deming, Juran, Lean or Six Sigma, have elements in common. The key elements are to be able to evaluate the deliverables required, understand how to achieve them, document the process, communicate it to those who are working in it, execute it and then verify that it is reproducible. There is no sense in collecting metrics until the process is stable and reproducible. Flow charting is where to start in order to make this happen.

    Flow charting makes us define how we carry out our activities. In defining them we stabilize them. In stabilizing them we make them reproducible. In looking at them we collect data on them and in analyzing the data we see how to improve them.

    I always like to build a flow chart from the end result and then work backwards to the start. This focuses on defining what is necessary as a deliverable from the previous step in order to get over the finish line. It will ensure that the process is built without any waste or needless activities. Each step should document:
    • The objective of that step described in action terms that are easily understandable and start with a verb. “Document Review” is not an acceptable description. “Review a document to designate Privilege”, is acceptable.
    • The input or inputs necessary to start the step.
    • The output or outputs necessary for the following next stage or stages.
    • The processes that will be applied to turn the inputs into outputs.
    • The resource required
    • How observations, including metrics, will be collected for use in the improvement process for the stage
    • How the metrics will be analyzed, normally in consultation with those involved in the preceding and following steps as well as the manager of the overall system and others, in order to improve it.
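    As an illustrative sketch only (the field names and the example step are mine, not from the Commentary), the step documentation above might be captured in a simple record, so every step of the flow chart carries the same checklist:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    """One step in a backwards-built e-discovery flow chart."""
    objective: str               # verb-first, e.g. "Review a document to ..."
    inputs: list[str]            # what must exist before the step can start
    outputs: list[str]           # deliverables required by the following stage
    process: str                 # how inputs are turned into outputs
    resources: list[str]         # people and tools required
    metrics: list[str] = field(default_factory=list)  # observations collected

# Hypothetical example step, following the verb-first objective rule above.
priv_review = ProcessStep(
    objective="Review a document to designate Privilege",
    inputs=["Responsive document set"],
    outputs=["Privilege designations", "Privilege log entries"],
    process="Attorney review against privilege criteria",
    resources=["Licensed attorneys", "Review platform"],
    metrics=["Documents reviewed per reviewer-hour"],
)
```

    Building the list of such records from the final deliverable backwards, as suggested, makes any step whose outputs feed nothing stand out as waste.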

    A good and very light read is “What is Six Sigma process management” by Hayler and Nichols.

    Much has been written lately about the merging of Lean Manufacturing techniques with Six Sigma and other quality initiatives. Lean Manufacturing focuses on improvement in manufacturing performance, not just quality. Its focus is on the elimination of batches which, in manufacturing terms, are inventory that cannot be sold. Reduction of this “work in process” by speeding it through the process in a transactional manner can result in dramatic cost savings and the elimination of errors. A good read is “Lean Six Sigma” by George.

  6. I must respectfully disagree. eDiscovery is absolutely the practice of law. As Gabe says, the review must be done by trained attorneys who understand the legal significance of factual statements contained in the documents. But it doesn’t end there. As everyone here knows, there is a growing body of case law, from the retention/lit hold and collection stages, to the production stage. You can find case law outlining in detail what data must be preserved, how to and who should conduct the science and art of keyword searches (as long as the practice continues), and even in what format documents must be produced. A body of jurisprudence which you must analyze, understand, and work to ensure your client is in compliance with during litigation equals the practice of law. Not that non-lawyer experts are not essential; as Mr. Losey says, lawyers must still oversee the process.

  7. […] in Atlanta, and I did up for The Sedona Institute earlier this year. Ralph has been kind to cite it previously in this space even if he didn’t care for our cute […]

  8. […] that is looking for the attorney in charge to take on the role of project leader so as to enforce quality control and avoid sanctions. How does this trend help us to understand and define clear team […]

  9. […] Commentary on Achieving Quality in the E-Discovery Process (2009).  (See my prior blog: Sedona on Quality: a Must-Read Commentary). These are the three things that we need to do in e‑discovery (and I’ll be coming out with a […]

  10. […] Achieving Quality in the E-Discovery Process (2009). See my prior article on this  important work: Sedona on Quality: a Must-Read Commentary. Again, let me stress that Jason was just arguing a position here, and the mock argument obviously […]

  11. […] elements in that evolving standard. See for instance the important Sedona Commentary that I have written about before entitled The Sedona Conference® Commentary on Achieving Quality in the E-Discovery Process. The […]
