Announcing the e-Discovery Team’s TAR Training Program: 16 Classes, All Online, All Free – The TAR Course

March 19, 2017

We launch today a sixteen-class online training program on Predictive Coding: the e-Discovery Team TAR Course. This is a “how to” course on predictive coding. We have a long descriptive name for our method, Hybrid Multimodal IST Predictive Coding 4.0. By the end of the course you will know exactly what that means. You will also understand the seventeen key things you need to know to do predictive coding properly, shown in this diagram.


Hands-on hacking of predictive coding document reviews has been my obsession since Da Silva went viral. Da Silva Moore v. Publicis Groupe & MSL Group, 287 F.R.D. 182 (S.D.N.Y. 2012). That is the case where I threw Judge Peck the softball opportunity to approve predictive coding for the first time. See: Judge Peck Calls Upon Lawyers to Use Artificial Intelligence and Jason Baron Warns of a Dark Future of Information Burn-Out If We Don’t.

Alas, because of my involvement in Da Silva I could never write about it, but I can tell you that none of the thousands of commentaries on the case have told the whole nasty story, including the outrageous “alternate fact” attacks by plaintiff’s counsel on Judge Andrew Peck and me. I guess I should just take the failed attempts to knock me and the Judge out of the case as flattery, but it still leaves a bad taste in my mouth. A good judge like Andy Peck did not deserve that kind of treatment. 

At the time of Da Silva, 2012, my knowledge of predictive coding was mostly theoretical, informational. But now, after “stepping-in” for five years to actually make the new software work, it is practical. For what “stepping-in” means see the excellent book on artificial intelligence and future employment by Professor Thomas Davenport and Julia Kirby, titled Only Humans Need Apply (HarperBusiness, 2016). Also see: Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and, Gonsowski, A Changing World: Ralph Losey on “Stepping In” for e-Discovery (Relativity Blog). 

If you are looking to craft a specialty in the law that rides the new wave of AI innovations, then electronic document review with TAR is a good place to start. See Part Two of my January 22, 2017 blog, Lawyers’ Job Security in a Near Future World of AI. This is where the money will be.

 

Our TAR Course is designed to teach this practical, stepping-in based knowledge. The link to the course will always be shown on this blog at the top of the page. The TAR page next to it has related information.

Since Da Silva we have learned a lot about the actual methods of predictive coding. This is hands-on learning through actual cases and experiments, including sixty-four test runs at TREC in 2015 and 2016.

We have come to understand very well the technical details, the ins and outs of legal document review enhanced by artificial intelligence, AI-enhanced review. That is what TAR and predictive coding really mean, the use of active machine learning, a type of specialized artificial intelligence, to find the key documents needed in an investigation. In the process I have written over sixty articles on the subject of TAR, predictive coding and document review, most of them focused on what we have learned about methods.

The TAR Course is the first time we have put all of this information together in a systematic training program. In sixteen classes we cover all seventeen topics, and much more. The result is an online instruction program that can be completed in one long weekend. After that it can serve as a reference manual. The goal is to help you to step-in and improve your document review projects.

The TAR Course has sixteen classes, listed below. Click on some and check them out. All free. We do not even require registration. No tests either, but someday soon that may change. Stay tuned to the e-Discovery Team. This, dear readers, is just the first step of my latest hack of the profession. Change we must, and not just gradual change, but radical. That is the only way the Law can keep up with the accelerating advances in technology. Taking the TAR Course is a minimum requirement and will get you ready for the next stage.

  1. First Class: Introduction
  2. Second Class: TREC Total Recall Track
  3. Third Class: Introduction to the Nine Insights Concerning the Use of Predictive Coding in Legal Document Review
  4. Fourth Class: 1st of the Nine Insights – Active Machine Learning
  5. Fifth Class: Balanced Hybrid and Intelligently Spaced Training
  6. Sixth Class: Concept and Similarity Searches
  7. Seventh Class: Keyword and Linear Review
  8. Eighth Class: GIGO, QC, SME, Method, Software
  9. Ninth Class: Introduction to the Eight-Step Work Flow
  10. Tenth Class: Step One – ESI Communications
  11. Eleventh Class: Step Two – Multimodal ECA
  12. Twelfth Class: Step Three – Random Prevalence
  13. Thirteenth Class: Steps Four, Five and Six – Iterate
  14. Fourteenth Class: Step Seven – ZEN Quality Assurance Tests
  15. Fifteenth Class: Step Eight – Phased Production
  16. Sixteenth Class: Conclusion

This course is not about the theory or law of predictive coding. You can easily get that elsewhere. It is about learning the latest methods to do predictive coding. It is about learning how to train an AI to find the ESI evidence you want. The future looks bright for attorneys with both legal knowledge and skills and software knowledge and skills. The best and brightest will also be able to work with various kinds of specialized AI to do a variety of tasks, including AI-enhanced document review. If that is your interest, then jump onto the TAR Course and start your training today. Who knows where it may take you?

________



Judge Peck Orders All Lawyers in NY to Follow the Rules when Objecting to Requests for Production, or Else …

March 5, 2017

Judge Peck has issued a second wake-up call type of opinion in Fischer v. Forrest, _ F. Supp. 3d _, 2017 WL 773694 (S.D.N.Y. Feb. 28, 2017). Judge Peck’s first wake-up call, in 2009, had to do with the basics of keyword search – William A. Gross Construction Associates, Inc. v. American Manufacturers Mutual Insurance Co., 256 F.R.D. 134 (S.D.N.Y. 2009), which is still one of my favorite all-time e-discovery opinions. His second wake-up call has to do with the basics of Rule 34, specifically subsections (b)(2)(B) and (b)(2)(C):

It is time, once again, to issue a discovery wake-up call to the Bar in this District:1/ the Federal Rules of Civil Procedure were amended effective December 1, 2015, and one change that affects the daily work of every litigator is to Rule 34. Specifically (and I use that term advisedly), responses to discovery requests must:

* State grounds for objections with specificity;

* An objection must state whether any responsive materials are being withheld on the basis of that objection; and

* Specify the time for production and, if a rolling production, when production will begin and when it will be concluded.

Most lawyers who have not changed their “form file” violate one or more (and often all three) of these changes.

Judge Peck is right about that. But there are so many technical rules that lawyers do not exactly follow. Nothing new here. Boring. Right? Wrong. Why? Because Judge Peck has added teeth to his observation.

It is well known that most lawyers will continue to use their old forms unless they are pried out of their dying hands. Mere changes in the rules and resulting technical violations are not about to interfere with the basic lethargy inherent in legal practice. The law changes so slowly, even big money-saving improvements like predictive coding are met with mere lip-service praise followed by general neglect (hey, it requires learning and change).

So how to get lawyers’ attention? Judge Peck knows a way: it involves threats. Here is his conclusion in Fischer.

Conclusion
The December 1, 2015 amendments to the Federal Rules of Civil Procedure are now 15 months old. It is time for all counsel to learn the now-current Rules and update their “form” files. From now on in cases before this Court, any discovery response that does not comply with Rule 34’s requirement to state objections with specificity (and to clearly indicate whether responsive material is being withheld on the basis of objection) will be deemed a waiver of all objections (except as to privilege).

Yes. Judge Peck used the “w” word that lawyers all fear – WAIVER. And waiver of all objections, no less. Lawyers love their objections and do not want anyone taking them away. They may even follow the rules to protect them. At least, that is Judge Peck’s thinking.

Rule 34(b)(2)

What does that rule say that so many are thoughtlessly violating? What has made dear Judge Peck so hot under the collar? You can read his opinion to get the answer, and I strongly recommend that you do, but here are the two subsections of Rule 34(b)(2) that no one seems to be following. You might want to read them through carefully a few times.

Rule 34. Producing Documents, Electronically Stored Information, and Tangible Things, or Entering onto Land, for Inspection and Other Purposes

(b) Procedure.

(2) Responses and Objections.

(B) Responding to Each Item. For each item or category, the response must either state that inspection and related activities will be permitted as requested or state with specificity the grounds for objecting to the request, including the reasons. The responding party may state that it will produce copies of documents or of electronically stored information instead of permitting inspection. The production must then be completed no later than the time for inspection specified in the request or another reasonable time specified in the response.

(C) Objections. An objection must state whether any responsive materials are being withheld on the basis of that objection. An objection to part of a request must specify the part and permit inspection of the rest.

As to what that means you have no higher authority than the official Comments of the Rules Committee itself, which Judge Peck also quotes in full and adds some underlines for emphasis:

Rule 34(b)(2)(B) is amended to require that objections to Rule 34 requests be stated with specificity. This provision adopts the language of Rule 33(b)(4), eliminating any doubt that less specific objections might be suitable under Rule 34. The specificity of the objection ties to the new provision in Rule 34(b)(2)(C) directing that an objection must state whether any responsive materials are being withheld on the basis of that objection. An objection may state that a request is overbroad, but if the objection recognizes that some part of the request is appropriate the objection should state the scope that is not overbroad. Examples would be a statement that the responding party will limit the search to documents or electronically stored information created within a given period of time prior to the events in suit, or to specified sources. When there is such an objection, the statement of what has been withheld can properly identify as matters “withheld” anything beyond the scope of the search specified in the objection.

Rule 34(b)(2)(B) is further amended to reflect the common practice of producing copies of documents or electronically stored information rather than simply permitting inspection. The response to the request must state that copies will be produced. The production must be completed either by the time for inspection specified in the request or by another reasonable time specifically identified in the response. When it is necessary to make the production in stages the response should specify the beginning and end dates of the production.

Rule 34(b)(2)(C) is amended to provide that an objection to a Rule 34 request must state whether anything is being withheld on the basis of the objection. This amendment should end the confusion that frequently arises when a producing party states several objections and still produces information, leaving the requesting party uncertain whether any relevant and responsive information has been withheld on the basis of the objections. The producing party does not need to provide a detailed description or log of all documents withheld, but does need to alert other parties to the fact that documents have been withheld and thereby facilitate an informed discussion of the objection. An objection that states the limits that have controlled the search for responsive and relevant materials qualifies as a statement that the materials have been “withheld.”

2015 Adv. Comm. Notes to Rule 34 (emphasis added by Judge Peck).

Going back to Judge Peck’s analysis of the objections made in Fischer v. Forrest:

Let us count the ways defendants have violated the Rules:

First, incorporating all of the General Objections into each response violates Rule 34(b)(2)(B)’s specificity requirement as well as Rule 34(b)(2)(C)’s requirement to indicate whether any responsive materials are withheld on the basis of an objection. General objections should rarely be used after December 1, 2015 unless each such objection applies to each document request (e.g., objecting to produce privileged material).

Second, General Objection I objected on the basis of non-relevance to the “subject matter of this litigation.” (See page 3 above.) The December 1, 2015 amendment to Rule 26(b)(1) limits discovery to material “relevant to any party’s claim or defense . . . .” Discovery about “subject matter” no longer is permitted. General Objection I also objects that the discovery is not “likely to lead to the discovery of relevant, admissible evidence.” The 2015 amendments deleted that language from Rule 26(b)(1), and lawyers need to remove it from their jargon. See In re Bard IVC Filters Prod. Liab. Litig., 317 F.R.D. 562, 564 (D. Ariz. 2016) (Campbell, D.J.) (“The 2015 amendments thus eliminated the ‘reasonably calculated’ phrase as a definition for the scope of permissible discovery. Despite this clear change, many courts [and lawyers] continue to use the phrase. Old habits die hard. . . . The test going forward is whether evidence is ‘relevant to any party’s claim or defense,’ not whether it is ‘reasonably calculated to lead to admissible evidence.”‘).

Third, the responses to requests 1-2 stating that the requests are “overly broad and unduly burdensome” is meaningless boilerplate. Why is it burdensome? How is it overly broad? This language tells the Court nothing. Indeed, even before the December 1, 2015 rules amendments, judicial decisions criticized such boilerplate objections. See, e.g., Mancia v. Mayflower Textile Servs. Co., 253 F.R.D. 354, 358 (D. Md. 2008) (Grimm, M.J.) (“[B]oilerplate objections that a request for discovery is ‘over[broad] and unduly burdensome, and not reasonably calculated to lead to the discovery of material admissible in evidence,’ persist despite a litany of decisions from courts, including this one, that such objections are improper unless based on particularized facts.” (record cite omitted)).

Finally, the responses do not indicate when documents and ESI that defendants are producing will be produced.

________________


POSTSCRIPT

Attorneys everywhere, not just in Judge Peck’s Court in New York, would be well advised to follow his wake-up call. Other courts around the country have already begun to follow in his footsteps. District Court Judge Mark Bennett has bench-slapped all counsel in Liguria Foods, Inc. v. Griffith Laboratories, Inc., 2017 BL 78800 (N.D. Iowa, No. C 14-3041, 3/13/17), and, like Judge Peck, Judge Bennett warned all attorneys of future sanctions. The opinion can be found on Google Scholar at: https://scholar.google.com/scholar_case?case=13539597862614970677&hl=en&as_sdt=40006

The quick take-aways from the lengthy Liguria Foods opinion are:

  • “Obstructionist discovery responses” in civil cases are a “menacing scourge” that must be met in the future with “substantial sanctions.”
  • Attorneys are addicted to “repetitive discovery objections” that are “devoid of individualized factual analysis.”
  • “Judges need to push back, get our judicial heads out of the sand, stop turning a blind eye to the ‘boilerplate’ discovery culture and do our part to solve this cultural discovery ‘boilerplate’ plague.” 
  • Only by “imposing increasingly severe sanctions” will judges begin to change the culture of discovery abuse.
  • Instead of sanctions, the court accepted the “sincere representations” from the lead attorneys that they will be “ambassadors for changing the ‘boilerplate’ discovery objection culture” in both of their law firms.
  • NO MORE WARNINGS. IN THE FUTURE, USING ‘BOILERPLATE’ OBJECTIONS TO DISCOVERY IN ANY CASE BEFORE ME PLACES COUNSEL AND THEIR CLIENTS AT RISK FOR SUBSTANTIAL SANCTIONS,” the court said in ALL CAPS as the closing sentence.

This opinion is full of colorful language, not only about bad forms, but also over-contentious litigation. Judges everywhere are tired of the form objections most attorneys still use, especially if they don’t conform to new rules (or any rules). You should consider becoming the ambassador for changing the ‘boilerplate’ discovery objection culture in your firm.

Judge Bennett’s analysis points to many form objections, including interrogatory objections, not just those saying “overbroad” that were discussed by Judge Peck under Rule 34 in Fischer. Here are a few of the general boilerplate objections that he points to, all of which look familiar to me:

  • Objection “to the extent they seek to impose obligations on it beyond those imposed by the Federal Rules of Civil Procedure or any other applicable rules or laws.”
  • Objection “to the extent they call for documents protected by the attorney-client privilege, the work product rule, or any other applicable privilege.”
  • Objection “to the extent they request the production of documents that are not relevant, are not reasonably calculated to lead to the discovery of admissible evidence or are not within their possession, custody and control.”
  • Objection “overbroad, unduly burdensome”
  • Objection “insofar as they seek information that is confidential or proprietary.”
  • “subject to [and without waiving] its general and specific objections”
  • Objection “as the term(s) [X and Y] are not defined.”

Here are a few quotes to give you the flavor of how many judges feel about form objections:

This case squarely presents the issue of why excellent, thoughtful, highly professional, and exceptionally civil and courteous lawyers are addicted to “boilerplate” discovery objections.[1] More importantly, why does this widespread addiction continue to plague the litigation industry when counsel were unable to cite a single reported or non-reported judicial decision or rule of civil procedure from any jurisdiction in the United States, state or federal, that authorizes, condones, or approves of this practice? What should judges and lawyers do to substantially reduce or, more hopefully and optimistically, eliminate this menacing scourge on the legal profession? Perhaps surprisingly to some, I place more blame for the addiction, and more promise for a cure, on the judiciary than on the bar.

Indeed, obstructionist discovery practice is a firmly entrenched “culture” in some parts of the country, notwithstanding that it involves practices that are contrary to the rulings of every federal and state court to address them. As I remarked at an earlier hearing in this matter, “So what is it going to take to get . . . law firms to change and practice according to the rules and the cases interpreting the rules? What’s it going to take?”

On January 27, 2017, I entered an Order To Show Cause Why Counsel For Both Parties Should Not Be Sanctioned For Discovery Abuses And Directions For Further Briefing. In the Order To Show Cause, I directed that every attorney for the parties who signed a response to interrogatories or a response to a request for documents in this case, with the exception of local counsel, appear and show cause, at a hearing previously scheduled for March 7, 2017, why he should not be sanctioned for discovery abuses.

Judge Bennett held a long evidentiary hearing before issuing this Order where he questioned many attorneys for both sides under oath. (Amazing, huh?) This led to the following comments in the Opinion:

As to the question of why counsel for both sides had resorted to “boilerplate” objections, counsel admitted that it had a lot to do with the way they were trained, the kinds of responses that they had received from opposing parties, and the “culture” that routinely involved the use of such “standardized” responses. Indeed, one of the attorneys indicated that some clients—although not the clients in this case—expect such responses to be made on their behalf. I believe that one of the attorneys hit the nail squarely on the head when he asserted that such responses arise, at least in part, out of “lawyer paranoia” not to waive inadvertently any objections that might protect the parties they represent. Even so, counsel for both parties admitted that they now understood that such “boilerplate” objections do not, in fact, preserve any objections. Counsel also agreed that part of the problem was a fear of “unilateral disarmament.” This is where neither party’s attorneys wanted to eschew the standard, but impermissible, “boilerplate” practices that they had all come to use because they knew that the other side would engage in “boilerplate” objections. Thus, many lawyers have become fearful to comply with federal discovery rules because their experience teaches them that the other side would abuse the rules. Complying with the discovery rules might place them at a competitive disadvantage.

Heed these calls and be your law firm’s ambassador for changing the ‘boilerplate’ discovery objection culture. Stop lawyers from making form objections, especially general objections, or risk the wrath of your local judge. Or, to put it another way, using other quaint boilerplate: Please Be Governed Accordingly.

 


e-Discovery Team’s 2016 TREC Report: Once Again Proving the Effectiveness of Our Standard Method of Predictive Coding

February 24, 2017

Our Team’s Final Report of its participation in the 2016 TREC ESI search Conference has now been published online by NIST and can be found here. TREC stands for Text Retrieval Conference. It is co-sponsored by a group within the National Institute of Standards and Technology (NIST), which in turn is an agency of the U.S. Commerce Department. The stated purpose of the annual TREC conference is to encourage research in information retrieval from large text collections.

The other co-sponsor of TREC is the United States Department of Defense. That’s right, the DOD is the official co-sponsor of this event, although TREC almost never mentions that. Can you guess why the DOD is interested? No one talks about it at TREC, but I have some purely speculative ideas. Recall that the NSA is part of the DOD.

We participated in one of several TREC programs in both 2015 and 2016, the one closest to legal search, called the Total Recall Track. The leaders and administrators of this Track were Professors Gordon Cormack and Maura Grossman. They also participated each year in their own track.

One of the core purposes of all of the Tracks is to demonstrate the robustness of core retrieval technology. Moreover, one of the primary goals of TREC is:

[T]o speed the transfer of technology from research labs into commercial products by demonstrating substantial improvements in retrieval methodologies on real-world problems.

Our participation in TREC in 2015 and 2016 has demonstrated substantial improvements in retrieval methodologies. That is what we set out to do. That is the whole point of the collaboration between the Department of Commerce and Department of Defense to establish TREC.

The e-Discovery Team has a commercial interest in participation in TREC, not a defense or police interest. Although from what we saw with the FBI’s struggles to search email last year, the federal government needs help. We were very unimpressed by the FBI’s prolonged efforts to review the Clinton email collection. I was one of the few e-discovery lawyers to correctly call the whole Clinton email server “scandal” a political tempest in a teapot. I still do, and I am still outraged by how her email review was handled by the FBI, especially with the last-minute “revelations.”

The executive agencies of the federal government have been conspicuously absent from TREC. They seem incapable of effective search, which may well be a good thing. Still, we have to believe that the NSA and other defense agencies are able to do a far better job at large-scale search than the FBI. Consider their ongoing large-scale metadata and text interception efforts, including the once Top Secret PRISM operation. Maybe it is a good thing the NSA does not share its abilities with the FBI, especially these days. Who knows? We certainly will not.

The e-Discovery Team’s commercial interest is to transfer Predictive Coding technology from our research labs into commercial products, namely our Predictive Coding 4.0 Method using Kroll Discovery EDR software. In our case at the present time “commercial products” means our search methods, time and consultations. But who knows, it may be reduced to a robot product someday like our Mr. EDR.

The e-Discovery Team method can be used on other document review platforms as well, not just Kroll’s, but only if they have strong active machine learning features. Active machine learning is what everyone at TREC was testing, although we appear to have been the only participant to focus on a particular method of operation. And we were the only team led by a practicing attorney, not an academic or software company. (Catalyst also fielded a team in 2015 and 2016, headed by Information Science Ph.D. Jeremy Pickens.)

The e-Discovery Team wanted to test the hybrid multimodal software methods we use in legal search to demonstrate substantial improvements in retrieval methodologies on real-world problems. We have now done so twice, participating in both the 2015 and 2016 Total Recall Tracks. The results in 2016 were even better than 2015. We obtained remarkable results in document review speed, recall and precision; although, as we admit, the search challenges presented at TREC 2016 were easier than most projects we see in legal discovery. Still, to use the quaint language of TREC, we have demonstrated the robustness of our methods and software.

These demonstrations, and all of the reporting and analysis involved, have taken hundreds of hours of our time, but there was no other venue around to test our retrieval methodologies on real-world problems. The demonstrations are now over. We have proven our case. Our standard Predictive Coding method has been tested and its effectiveness demonstrated. No one else has tested and proven their predictive coding methods as we have done. We have proven that our hybrid multimodal method of AI-Enhanced document review is the gold standard. We will continue to make improvements in our method and software, but we are done with participation in federal government programs to prove our standard, even one run by the National Institute of Standards and Technology.

[Diagram: Predictive Coding 4.0 workflow]

To prove our point that we have now demonstrated substantial improvements in retrieval methodologies, we quote below Section 5.1 of our official TREC report, but we urge you to read the whole thing. It is 164 pages. This section of our report covers our primary research question only. We investigated three additional research questions not included below.

__________

Section 5.1 First and Primary Research Question

What Recall, Precision and Effort levels will the e-Discovery Team attain in TREC test conditions over all thirty-four topics using the Team’s Predictive Coding 4.0 hybrid multimodal search methods and Kroll Ontrack’s software, eDiscovery.com Review (EDR).

Again, as in the 2015 Total Recall Track, the Team attained very good results with high levels of Recall and Precision in all topics, including perfect or near perfect results in several topics using the corrected gold standard. The Team did so even though it only used five of the eight steps in its usual methodology, intentionally severely constrained the amount of human effort expended on each topic and worked on a dataset stripped of metadata. The Team’s enthusiasm for the record-setting results, which were significantly better than its 2015 effort, is tempered by the fact that the search challenges presented in most of the topics in 2016 were not difficult and the TREC relevance judgments had to be corrected in most topics.  …

This next chart uses the corrected standard. It is the primary reference chart we use to measure our results. Unfortunately, it is not possible to make any comparisons with BMI standards because we do not know the order in which the BMI documents were submitted.

[Chart: Revised results across all thirty-four topics, corrected standard, TREC 2016]

The average results obtained across all thirty-four topics at the time of reasonable call using the corrected standard are shown below in bold. The average scores using the uncorrected standard are shown for comparison in parentheses.

  • 88.17% Recall (75.46%)
  • 64.94% Precision (57.12%)
  • 69.15% F1 (57.69%)
  • 124 Docs Reviewed Effort (124)
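For readers who want to check the relationship among these metrics: F1 is the harmonic mean of precision and recall, computed per topic and then averaged across the thirty-four topics. The sketch below (Python; the variable names and per-topic example figures are mine for illustration, only the 64.94%/88.17% averages come from the report) shows why the average of per-topic F1 scores (69.15%) can be lower than the harmonic mean of the average recall and precision:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (both as fractions 0-1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-topic (precision, recall) pairs, for illustration only
topics = [(0.95, 0.99), (0.60, 0.85), (0.40, 0.80)]

avg_p = sum(p for p, r in topics) / len(topics)
avg_r = sum(r for p, r in topics) / len(topics)
avg_f1 = sum(f1(p, r) for p, r in topics) / len(topics)

# Averaging per-topic F1 generally gives a lower number than taking
# F1 of the averaged precision and recall, which is why the Team's
# reported average F1 (69.15%) sits below f1(0.6494, 0.8817) ≈ 0.748.
print(round(avg_f1, 4), round(f1(avg_p, avg_r), 4))
```

The same function applied to the report's averaged figures, `f1(0.6494, 0.8817)`, yields about 0.748, above the 69.15% average of the per-topic scores, as expected.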

At the time of reasonable call the Team had recall scores greater than 90% in twenty-two of the thirty-four topics and greater than 80% in five more topics. Recall of greater than 95% was attained in fourteen topics. These Recall scores under the corrected standard are shown in the below chart. The results are far better than we anticipated, including six topics with total recall – 100%, and two topics with both total recall and perfect precision, topic 417 Movie Gallery and topic 434 Bacardi Trademark.

[Chart: Recall scores by topic, corrected standard, 2016]

At the time of reasonable call the Team had precision scores greater than 90% in thirteen of the thirty-four topics and greater than 75% in three more topics. Precision of greater than 95% was attained in nine topics. These Precision scores under the corrected standard are shown in the below chart. Again, the results were, in our experience, incredibly good, including three topics with perfect precision at the time of the reasonable call.

[Chart: Precision scores by topic, corrected standard, 2016]

At the time of reasonable call the Team had F1 scores greater than 90% in twelve of the thirty-four topics and greater than 75% in two more. F1 of greater than 95% was attained in eight topics. These F1 scores under the corrected standard are shown in the below chart. Note there were two topics with a perfect score, Movie Gallery (100%) and Bacardi Trademark (100%) and three more that were near perfect: Felon Disenfranchisement (98.5%), James V. Crosby (97.57%), and Elian Gonzalez (97.1%).

[Chart: F1 scores by topic, corrected standard, 2016]

We were lucky to attain two perfect scores in 2016 (we attained one in 2015), in topic 417 Movie Gallery and topic 434 Bacardi Trademark. The perfect score of 100% F1 was obtained in topic 417 by locating all 5,945 documents relevant under the corrected standard after reviewing only 66 documents. This topic was filled with form letters and was a fairly simple search.

The perfect score of 100% F1 was obtained in topic 434 Bacardi Trademark by locating all 38 documents relevant under the corrected standard after reviewing only 83 documents. This topic had some legal issues involved that required analysis, but the reviewing attorney, Ralph Losey, is an SME in trademark law so this did not pose any problems. The issues were easy and not critical to understand relevance. This was a simple search involving distinct language and players. All but one of the 38 relevant documents were found by tested, refined keyword search. One additional relevant document was found by a similarity search. Predictive coding searches were run after the keywords searches and nothing new was uncovered. Here machine learning merely performed a quality assurance role to verify that all relevant documents had indeed been found.

The Team proved once again, as it did in 2015, that perfect recall and perfect precision are possible, albeit rare, using the Team's methods in fairly simple search projects.

The Team's top ten projects attained remarkably high scores, with an average Recall of 95.66%, average Precision of 97.28% and average F-Measure of 96.42%. The top ten are shown in the chart below.

top-10_results

In addition to Recall, Precision and F1, the Team per TREC requirements also measured the effort involved in each topic search. We measured effort by the number of documents that were actually human-reviewed prior to submission and coded relevant or irrelevant. We also measured effort by the total human time expended for each topic. Overall, the Team human-reviewed only 6,957 documents to find all 34,723 relevant documents within the overall corpus of 9,863,366 documents. The total time spent by the Team to review the 6,957 documents, and do all the search, analysis and other work using our Hybrid Multimodal Predictive Coding 4.0 method, was 234.25 hours.

reviewed_data_pie_chart_2016

It is typical in legal search to measure the efficiency of a document review by the number of documents an attorney classifies in an hour. For instance, a typical contract review attorney can read and classify an average of 50 documents per hour. The Team classified all 9,863,366 documents by human review of only 6,957 of them, taking a total of 234.25 hours. The Team's overall review rate for the entire corpus was thus 42,106 files per hour (9,863,366/234.25).
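The arithmetic behind that rate is easy to verify; a quick sketch in Python using only the figures reported above:

```python
corpus_size = 9_863_366   # total documents in the 2016 Total Recall corpus
docs_reviewed = 6_957     # documents actually read by a human reviewer
hours_spent = 234.25      # total attorney hours across all 34 topics

# Effective rate measured against the entire corpus, the figure quoted above:
print(round(corpus_size / hours_spent))    # 42106 files per hour

# Rate measured only against documents a human actually read:
print(round(docs_reviewed / hours_spent))  # 30 documents per hour
```

The striking 42,106 figure comes from crediting the process with every document it classified, not just the small fraction a lawyer actually read.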

In legal search it is also typical, indeed mandatory, to measure the costs of review and bill clients accordingly. If we assume a high attorney hourly rate of $500 per hour, then the total cost of the review of all 34 topics would be $117,125. That is a cost of just over $0.01 per document. In a traditional legal review, where a lawyer reviews one document at a time, the cost would be far higher. Even if you assume a low attorney rate of $50 per hour and a review speed of 50 files per hour, the total cost to review every document for every issue would be $9,863,366. That is a cost of $1.00 per document, which is actually low by legal search standards.
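The cost figures above reduce to simple multiplication; here is the calculation sketched out, with the hourly rates being the assumptions stated in the text:

```python
corpus = 9_863_366

# Predictive Coding 4.0 review: 234.25 hours at an assumed $500/hour.
tar_cost = 234.25 * 500
print(tar_cost)                     # 117125.0 total
print(round(tar_cost / corpus, 4))  # 0.0119 per document, "just over $0.01"

# Traditional linear review: assumed $50/hour at 50 documents per hour.
linear_cost = (corpus / 50) * 50
print(linear_cost)                  # 9863366.0, i.e. exactly $1.00 per document
```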

Analysis of project duration is also very important in legal search. Instead of the 234.25 hours expended by our Team using Predictive Coding 4.0, traditional linear review would have taken 197,267 hours (9,863,366/50). In other words, the review of thirty-four projects, which we did part-time, after work, over one summer, would have taken a team of two lawyers using traditional methods, working 8 hours a day, every day, over 33 years! These kinds of comparisons are common in legal search.

Detailed descriptions of the searches run in all thirty-four topics are included in the Appendix.

___________

We also reproduce below Section 1.1, Summary of Team Efforts, from our 2016 TREC Report. For more information on what we learned in the 2016 TREC, see also Complete Description in 30,114 Words and 10 Videos of the e-Discovery Team's "Predictive Coding 4.0" Method of Electronic Document Review. Nine new insights that we learned in the 2016 research are summarized by the diagram below and more specifically described in the article.

predictive_coding_six-three-2

_________

Excerpt From Team’s 2016 Report

1.1 Summary of Team’s Efforts. The e-Discovery Team’s 2016 Total Recall Track Athome project started June 3, 2016, and concluded on August 31, 2016. Using a single expert reviewer in each topic the Team classified 9,863,366 documents in thirty-four review projects.

The topics searched in 2016 and their issue names are shown in the chart below. Also included are the first names of the e-Discovery Team members who did the review for each topic, the total time spent by each reviewer, and the number of documents manually reviewed to find all of the relevant documents in that topic. The total time of all reviewers on all projects was 234.25 hours. All relevant documents, totaling 34,723 by Team count, were found by manual review of 6,957 documents. The thirteen topics in red were considered mandatory by TREC and the remaining twenty-one were optional. The e-Discovery Team did all topics.

trec-2016-topics

The reviews were all one-person, solo efforts, although there was coordination and communication between Team members on the Subject Matter Expert (SME) type issues encountered. These pertained to questions of true relevance and to errors found in the gold standard for many of these topics. A detailed description of the search for each topic is contained in the Appendix.

In each topic the assigned Team attorney personally read and evaluated for true relevance every email that TREC returned as relevant, and every email that TREC unexpectedly returned as irrelevant. Some of these were read and studied multiple times before we made our final calls on true relevance, determinations that took into consideration and gave some deference to the TREC assessor adjudications, but were not bound by them. Many other emails that the Team members considered irrelevant, and TREC agreed, were also personally reviewed as part of their search efforts. As mentioned, there were sometimes consultations and discussions between Team members about the unexpected TREC opinions on relevance.

This contrasts sharply with participants in the Sandbox division. They never made any effort to determine where their software erred in predicting relevance, or for any other reason. They accepted on faith the correctness of all of TREC's prior assessments of relevance. To these participants, who were all academic institutions, the ground truth itself as to relevance was of no relevance. Apparently, it did not matter to their research.

All thirty-four topics presented search challenges to the Team that were easier, some far easier, than those the Team typically faces when leading legal document review projects as attorneys. (If the Bush email had not been altered by omission of metadata, the searches would have been even easier.) The details of the searches performed in each of the thirty-four topics are included in the Appendix. The search challenges presented by these topics were roughly equivalent to the most simplistic challenges that the e-Discovery Team might face in projects involving relatively simple legal disputes. A few of the search topics in 2016 included quasi-legal issues, more than were found in the 2015 Total Recall Track. This is a revision that the Team requested and appreciated because it allowed some, albeit very limited, testing of legal judgment and analysis in the determination of true relevance in these topics. In legal search, legal analysis skills are obviously very important to relevancy determinations. In most of the 2016 Total Recall topics, however, no special legal training or analysis was required for a determination of true relevance.

At Home participants were asked to track and report their manual efforts. The e-Discovery Team did this by recording the number of documents that were human-reviewed and classified prior to submission. More were reviewed after submission as part of the Team's TREC relevance checking. Virtually all documents human-reviewed were also classified, although not all classified documents were used for active training of the software classifier. The Team also tracked effort by number of attorney hours worked, as is traditional in legal services. Although the amount of time varied somewhat by topic, the average time spent per topic was only 6.89 hours. The average review and classification speed across the projects was 42,106 files per hour (9,863,366/234.25).

Again, for the full picture and complete details of our work, please see the complete 164-page report to TREC of the e-Discovery Team's participation in the 2016 Total Recall Track.

Lawyers’ Job Security in a Near Future World of AI, the Law’s “Reasonable Man Myth” and “Bagley Two” – Part Two

January 22, 2017

This is the second and concluding section to the two-part blog, Lawyers’ Job Security in a Near Future World of AI, the Law’s “Reasonable Man Myth” and “Bagley Two.” Click here to read Part One.

Robot_handshake

___________

Next consider Judge Haight's closing words to the opinion dated December 22, 2016, Ruling On Plaintiff's Motion To Compel, Bagley v. Yale, Civil Action No. 3:13-CV-1890 (CSH):

However, requiring this additional production, or a further deposition in case of need, is in keeping with a governing objective of the Federal Rules of Civil Procedure: “By requiring disclosure of all relevant information, the discovery rules allow ultimate resolution of disputed issues to be based on full and accurate understanding of true facts.” 6 Moore’s Federal Practice § 26.02 (Matthew Bender 3d ed.). 6

__________

6 While Yale may not welcome the measurement of its obligations in the case at bar by these principles, it is worth recalling that the treatise’s principal initial author, James Wm. Moore, was a towering figure on the faculty of Yale Law School. In his preface to the first edition (1938), Professor Moore referred to his effort “at all times to accord to the Rules the interpretation which is most likely to attain the general objective of the new practice: the settlement of litigation on the merits.” That is the interpretation this Ruling attempts to adopt.

william_moore_prof_yale

Prof. Moore (1905-1994)

Poor Yale. Moore’s Federal Practice is one of the most cited treatises in the law. James W. Moore was the author of the 34-volume Moore’s Federal Practice (2d ed., 1948) and the three-volume Moore’s Manual: Federal Practice & Procedure (1962). He was also the Sterling Professor Emeritus of Law at Yale University, where he taught for 37 years. Who else but Yale can have anything in Moore’s thirty-four volume treatise held against them personally? Seems kind of funny, but I am sure Yale’s attorneys were not laughing.

Getting back to the case and Judge Haight’s decision. Aside from showing the malleability and limits of reason, Bagley Two provides some important new precedent for e-discovery, namely his rulings on privilege and the discoverability of a party’s preservation efforts. Judge Haight starts by repeating what is now established law, that a party’s preservation efforts are not satisfied by mere issuance of a notice, that a whole process is involved and the process must be reasonable. He then goes on to provide a pretty good list of the facts and circumstances that should be considered to determine reasonability.

[A] party’s issuance of a litigation hold notice does not put an end to the party’s obligation to preserve evidence; it is, rather, the first in a series of related steps necessary to ensure that preservation. As Magistrate Judge Francis aptly observed in Mastr Adjustable Rate Mortgages Trust 2006 v. UBS Real Estate Securities Inc., 295 F.R.D. 77, 85 (S.D.N.Y. 2013): “A litigation hold is not, alone, sufficient; instead compliance must be monitored.”

In spoliation cases involving litigation hold notices, one can discern from Second Circuit and district court opinions a number of decisive questions:

1. When did a party’s duty to preserve evidence arise?
2. Did the party issue a litigation hold notice in order to preserve evidence?
3. When did the party issue a litigation hold notice, in relation to the date its duty to preserve the evidence arose?
4. What did the litigation hold notice say?
5. What did recipients of the litigation hold notice do or say, in response to or as result of, the notice?
6. After receiving recipients’ responses to the litigation hold notice, what further action, if any, did the party giving the notice take to preserve the evidence?

Questions 2 through 6 are entirely fact-specific to a given case. Question 1 is a mixed question of law and fact, whose legal element the Second Circuit defined in Fujitsu Ltd. v. Federal Express Corp., 247 F.3d 423, 436 (2d Cir. 2001): “The obligation to preserve evidence arises when the party has notice that the evidence is relevant to litigation or when a party should have known that the evidence may be relevant to future litigation.”

In the case at bar, I am unable to accept Yale’s argument that the litigation hold notices it issued about Bagley and the recipients’ responses to the notices are immune from discovery because (in the absence of proof that spoliation had in fact occurred) such documents “are subject to the attorney-client and to work product privileges,” Defendants’ Brief [Doc. 192], at 3. That contention is something of a stretch. … . Assuming that all of Clune’s litigation hold notices were sent to employees of Yale, Clune was in effect communicating with his client. However, the predominant purpose of that communication was to give recipients forceful instructions about what they must do, rather than advice about what they might do. 3

I like the list of six key facts to consider in weighing the reasonability of preservation efforts, especially the last one. But my primary point here is the malleability of reason in classifying the notice as unprotected. A letter from in-house counsel telling employees that the law requires them to preserve is not advice entitled to privilege protection? Its predominant purpose was instead unprotected instructions? The language of the litigation hold notices was quoted earlier in the opinion. It included the following:

[A]ll members of the Yale faculty and staff who have information in their possession or control relating or referring in any way to Professor Bagley, her employment and teaching at SOM, or the circumstances relating to the non-renewal of her faculty appointment (collectively “this Matter”) have a legal obligation to preserve that information. The law imposes this obligation to prevent the loss of potential evidence during litigation. You must preserve and retain, and not alter, delete, remove, discard or destroy, directly or indirectly, any information concerning this Matter. Failure to preserve information could seriously undermine Yale’s legal position and lead to legal sanctions.

The lawyer’s letter tells employees that they “have a legal obligation to preserve,” and the legal consequences if they do not. Yet this letter is not advice because the predominant purpose is just an unprotected instruction? That is the holding.

mental_impressions

Judge Haight gets rid of work product protection too.

As for the work product doctrine, it “is not actually a privilege, but rather a qualified immunity from discovery,” codified in Fed. R. Civ. P. Rule 26(b)(3), whose purpose “is to protect an attorney’s mental processes so that the attorney can analyze and prepare for the client’s case without interference from an opponent.” 6 Moore’s Federal Practice, § 26.70[1] (Matthew Bender 3d ed.). 4 That purpose is not implicated by the present exercise.

__________

4 Fed. R. Civ. P. 26(b)(3) protects from disclosure those materials which reveal “the mental impressions, conclusions, opinions, or legal theories of a party’s attorney.” See also In re Steinhardt Partners, L.P., 9 F.3d 230, 234 (2d Cir. 1993) (“At its core, the work product doctrine shelters the mental processes of the attorney, providing a privileged area within which he can analyze and prepare his client’s case.”) (quoting United States v. Nobles, 422 U.S. 225, 238 (1975)) (emphasis added).

I do not agree with Judge Haight on this aspect of his ruling. I think both work product and attorney-client protections apply to these particular notices and his “reasoning” on this issue is wrong. I do, however, agree with his final ruling requiring production. I think the protections had been waived by the circumstances and actions of defense counsel, which, by the way, they were correct in doing; the waiver on their part was necessary. Judge Haight also mentioned waiver, but as dicta, alternative grounds, in footnote three:

3 The Court also notes that to the extent that Yale’s litigation hold notices included the text of the exemplar provided to Plaintiff as “document preservation notices,” that text has already been revealed publicly in this case, so that secrecy or privilege relating to that language was destroyed or waived. See Doc. 191-1, Ex. F.

trigger

Judge Haight then looks at the question of when Yale’s duty to preserve commenced. Recall that Yale kept adding custodians in eight stages. The first notices were pre-litigation. They were made, I note, after Yale’s lawyer’s mental processes told him that litigation was reasonably likely. The last were made after suit was filed, again based on the lawyer’s mental processes causing him to believe that these additional witnesses might have relevant evidence. The mental processes of Plaintiff’s attorneys led them to believe that all of the notices, including the pre-litigation notices, were sent too late and thus spoliation was likely. Here is Judge Haight’s analysis of the trigger issue:

When, during the course of this melancholy chain of events, should Yale have known that evidence pertinent to Bagley’s reappointment might be relevant to future litigation? That is a crucial question in spoliation analysis. A state of reasonable anticipation clearly antedates the actual filing of a complaint; in Fujitsu, 247 F.3d at 436, the Second Circuit was careful to couple actual present and possible future litigation as catalysts of equal strength for the preservation of evidence.

Bagley has not yet formally moved for spoliation sanctions, and so the question is not yet before me for decision, but some preliminary, non-binding observations may be made. The record previously made in the case shows that Bagley’s personal distress and institutional disapproval and distrust grew throughout the winter and spring of 2012 (the last year of her five-year appointment), so that when on May 24, 2012, Dean Snyder told Bagley that she would not be reappointed, it would not be irrational to suppose that Bagley might soon transform herself from disheartened academic to vengeful litigant. In fact, Bagley filed an internal discrimination complaint against Yale during the following month of June 2012 (which had the effect of bringing Provost Salovey out of the wings and onto the stage).

Predictable_Irrational

Note the Judge’s use of the phrase not be irrational to suppose. What is the impact of hindsight bias on this supposedly objective, rational analysis? Bagley’s later actions made it obvious that she would sue. She did sue. The lawsuit has been very contentious. But was it really all that obvious back in 2012 that Yale would end up in the federal courthouse? I personally doubt it, but admit it is a close judgment call. We lawyers say that a lot. All that phrase really means is that reason is not objective. It is in the eye of the beholder.

Judge Haight then wraps up his analysis in Bagley Two.

What happened in this case is that Yale identified 65 individuals who might have evidence relevant to Bagley’s denial of reappointment, and issued them litigation hold notices in eight separate batches, a process that took a considerable amount of time. The first nine notices were sent nine months after Snyder told Bagley she would not be reappointed. The last was sent eight months after Bagley filed this action. To characterize the pace of this notification process as culpable or even negligent would be premature on the present record, but it is fair to say that it was leisurely, to an extent making it impossible to dismiss as frivolous Bagley’s suggestion that she might move for a spoliation sanction. The six questions outlined supra arise in this case, and the factors pertinent to resolving them include an unreasonable delay in issuing the notices and a subsequent failure to implement and monitor the recipients’ responses. Judge Sweet said in Stimson that the Second Circuit has left open “the question of whether a sufficiently indefensible failure to issue a litigation hold could justify an adverse inference on its own,” and an additional factor would be “the failure to properly implement the litigation hold even after it was issued.” 2016 WL 54684, at *6. These are legitimate questions in the case at bar. Bagley is entitled to discovery with respect to them. 5 (footnote citations omitted)

I certainly agree with Judge Haight on all of those points and law. Those factual circumstances do justify the modest amount of discovery requested by the plaintiff in this motion.

gavel

Now we get to the actual Order on the pending motion to compel:

Therefore I conclude that in the circumstances of this case, Bagley’s “Motion to Compel” [Doc. 190] is GRANTED. Bagley is entitled to examine the litigation hold notices issued by Yale, and the responsive survey forms that notice recipients returned to Yale. These documents bear directly upon the questions courts identify as dispositive in spoliation cases. Bagley is entitled to discovery in these areas, in order to discern the merit or lack of merit of a formal claim for spoliation. To the extent that Yale objects to production of these documents on the grounds of privilege or the work product doctrine, the objections are OVERRULED.

For the same reasons, Bagley is also entitled to an affidavit from a Yale officer or employee (not a notice recipient or recipients) which describes what non-ESI documents Yale received from notice recipients and what was done with them. On a spoliation claim, Bagley will ultimately bear the burden of showing that pertinent evidence was destroyed or rendered unavailable. This discovery may cast light on that disputed issue. Yale may prefer not to have to produce that information; Yale’s counsel miss no opportunity to remind the Court how much discovery effort the case has previously required.

Judge Haight then ended his opinion with the previously quoted zinger regarding Yale’s famous law professor Moore. This zinger, and the comments about Yale’s leisurely efforts and Yale counsel’s missing no opportunities to remind the court, tell a story of their own. They show the emotional undertone. So too does his earlier noted comment about “spoliation” being a cardinal litigation vice, well known to practicing attorneys and judges, but “perhaps unfamiliar” to academics. I suspect this goes beyond humor.

Artificial Intelligence and the Future of Employment

robot_whisperer

I am sure legal reason will improve in the future and become less subjective, less subject to hidden irrationalities and prejudices. By using artificial intelligence our legal doctrines and decision making can be improved, but only if the human judges remain in charge. The same comment goes for all attorneys. In fact, it applies to all current employment.

The doom-and-gloom futurists disagree. They think AI will replace humans at their jobs, not empower them. They envision a future of cold automation, not man-machine augmentation. They predict widespread unemployment, with a loss of half of our current jobs. A University of Oxford study predicted that almost half of all U.S. jobs could be lost to automation in the next twenty years. Even the influential World Economic Forum predicts that five million jobs could be lost by 2020. Five Million Jobs by 2020: the Real Challenge of the Fourth Industrial Revolution. Also see The Future of Jobs: Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution (World Economic Forum, Jan. 2016).

A contrary, “augmentation” oriented group predicts the opposite: that at least as many new jobs will be created as lost. This is a subject of hot debate. See, e.g., Artificial intelligence will save jobs, not destroy them (World Economic Forum, 1/19/17). Readers know I am in the half-full camp.

James Bessen: Law Prophet of the Future of Employment

james_besson

Many are like me and have an overall positive outlook, including James Bessen, an economist and Lecturer in Law at the Boston University School of Law. Jim Bessen, who was a good hacker with an entrepreneurial background (he created the first WYSIWYG desktop publishing software), has researched the history of computer use and employment since 1980. Jim’s research has shown that, for those who can keep up with technology, there will be new jobs to replace the ones lost. Bessen, How Computer Automation Affects Occupations: Technology, Jobs & Economics, Boston University School of Law, Law & Economics Working Paper No. 15-49 (1/16/16). He also found that wages in occupations that use computers grow faster, not slower:

[B]ecause higher wage occupations use computers more, computer use tends to increase well-paid jobs and to decrease low-paid jobs. Generally, computer use is associated with a substantial reallocation of jobs, requiring workers to learn new skills to shift occupations.

Also see the article in The Atlantic magazine by Bessen, The Automation Paradox: When computers start doing the work of people, the need for people often increases (The Atlantic, 1/19/2016), where he said:

…workers will have greater employment opportunities if their occupation undergoes some degree of computer automation. As long as they can learn to use the new tools, automation will be their friend.

This is certainly consistent with what I have seen in the legal profession since I started practice in 1980.

james_bessen

James Bessen has also written a book on this, Learning by Doing: The Real Connection Between Innovation, Wages, and Wealth (Yale U. Press 2015). In this book Bessen, in his words:

… looks at both economic history and the current economy to understand how new technology affects ordinary workers and how society can best meet the challenges it poses.

He notes that major new technologies always require new human work skills and knowledge, and that today, as before, they are slow and difficult to develop. He also makes the observation, which is again consistent with my own experience as a tech-lawyer, that relevant technical knowledge “develops slowly because it is learned through experience, not in the classroom.” In his analysis that is because the new knowledge is not yet standardized. I agree. This is one reason my work has been focused on the standardization of the use of active machine learning in the search for electronic evidence; see for example Predictive Coding 4.0 and my experiments at the TREC conference on predictive coding methods sponsored by the National Institute of Standards and Technology. Also see: Electronic Discovery Best Practices. In spite of my efforts on standards and best practices for e-discovery, we are still in the early, rapidly changing, non-standardized stage of new technology. Bessen argues that employer policies and government policies should encourage such on-the-job learning and perfection of new methods.

Jim Bessen’s findings are starting to be discussed by many who are now concerned with the impact of AI on employment. See, for instance, Andrea Willige’s article in the World Economic Forum concerning Davos 2017, Two reasons computers won’t destroy all the jobs (“jobs don’t disappear, they simply move up the skills and wage ladder. For workers to move up the ranks, they must acquire the necessary skillset.”).

Standardization v. On-the-Job Training

Moving on up requires new employment skills. It requires workers who can step-in, step-up, step-aside, step-narrowly, or step-forward. Only Humans Need Apply; Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry; and Gonsowski, A Changing World: Ralph Losey on “Stepping In” for e-Discovery (Relativity Blog) (interview with references to the 5-steps described in Only Humans Need Apply). Unless and until standardization emerges, and it is taught in a classroom, the new skills will be acquired by on-the-job learning only, sometimes with experienced trainers, but more often self-taught by trial and error.

Borg_Ralph_head

I have been working on creating the perfect, standard method for electronic document review using predictive coding since Da Silva Moore. I have used trial and error and on-the-job learning, buttressed by spending a month a year over the last five years on scientific research and experiments with my own team (remember my Borg experiments and videos?) and with TREC, EDI and Kroll Ontrack. See Borg Challenge: Report of my experimental review of 699,082 Enron documents using a semi-automated monomodal methodology (a five-part written and video series comparing two different kinds of predictive coding search methods); Predictive Coding Narrative: Searching for Relevance in the Ashes of Enron; EDI-Oracle Study: Humans Are Still Essential in E-Discovery (LTN Nov. 2013); e-Discovery Team at TREC 2015 Total Recall Track, Final Report; TREC 2016 Total Recall Track NOTEBOOK.

predictive_coding_4-0_simple

After years we have finally perfected and standardized a highly effective method for document review using predictive coding. We call it Predictive Coding 4.0. This method is complete, well-tested, proven and standardized for my team, but not yet accepted by the industry. Unfortunately, industry acceptance of one lawyer’s method is very difficult (impossible?) in the highly competitive, still young and emerging field of electronic document review. I create a standard because I have to in my work, not because I unrealistically expect the industry to adopt it. The industry is still too young for that. I will continue with my on-the-job training, content with that, just as Bessen, Davenport and Kirby observe is the norm for all new technologies. Someday a standard will be generally accepted and taught in classrooms, but we are far from it.

Conclusion

There is more going on in Bagley Two than objective reason, even assuming such a thing exists. Experienced attorneys can easily read between the lines. Reasoned analysis is just the tip of the iceberg, or top of the pyramid, as I envisioned in the new model for Holistic Law outlined in my prior article, Scientific Proof.

There is far more to Senior District Judge Charles S. Haight, Jr., than his ability to be logical and apply reason to the facts. He is not just a “thinking machine.” He has wisdom from decades on the bench. He is perceptive, has feelings and emotions, good intuitions and, we can see, a sense of humor. The same holds true for most judges and lawyers, perhaps even law professors. We are all human and have many other capacities beyond what robots can be trained to do.

Jason_Ralph_Robot

Reason is just one of the things that we humans do, and, as the work of Professor Ariely has shown, it is typically full of holes and clouded by hidden bias. We need the help of computers to get reason done right, to augment our logic and reasoning skills. Do not try to compete with, or exclude, robots from tasks involving reason. You will ultimately lose that battle. Instead, work with the robots. Invite them in, but remain in control of the processes; use the AI’s abilities to enhance and enlarge your own.

I am sure legal reason will improve in the future and become less subjective. This will happen when more lawyers Step-In, as discussed in Davenport and Kirby, Only Humans Need Apply, and Dean Gonsowski, A Clear View or a Short Distance? AI and the Legal Industry, and A Changing World: Ralph Losey on “Stepping In” for e-Discovery.

alex_hafez

Many of us have stepped-in, to use Davenport and Kirby’s language, to manage the use of TAR and AI in document review, not just me. Consider, for instance, attorney Alexander Hafez, currently a “Solutions Engineer” for FTI. He was the only other attorney featured in Only Humans Need Apply. Alex bootstrapped his way from minimum-wage contract document reviewer to his current large-vendor consultant “step-in” job by, in the book’s words, “educational bricolage” composed of on-the-job learning and “a specialized course or two and some autodidactic reading.” Id. pg. 144. There are thousands of lawyers in e-Discovery doing quite well in today’s economy. The use of AI and other advanced technologies is now starting to appear in other areas of the law too, including contract review, analysis and construction. See, e.g., Kira Systems, Inc.

As the other areas of the Law become as enhanced and augmented as e-discovery, we will see new jobs open up for the steppers. Old mechanistic law jobs will be replaced; that is for sure. There will be jobs lost in the legal economy. But if Davenport, Kirby and Bessen are correct, and I for one think they are, new, better-paying jobs will be created to replace them. Still, for most luddite lawyers, young and old, who are unable to adapt and learn new technologies, the impact of AI on the Law could be devastating.

Only the tech-savvy will be able to move up the skill and wage ladder by stepping-in to make the technology work right. I attained the necessary skill set to do this with legal technology by teaching myself, by “hacking around” with computers. Yes, it was difficult, but I enjoyed this kind of learning. My story of on-the-job self-learning is very common. Thus the name of Bessen's book, Learning by Doing. Others might do better in a more structured learning environment, such as a school, except that none currently exists for this sort of thing, at least in the Law. It falls between the cracks of law school and computer science. For now the self-motivated, self-learners will continue to lead the way.

Not only do we need to improve our thinking with machines, we need to contribute our other talents and efforts. We need to engage and expand upon the qualities of our job that are most satisfying to us, that meet our human nature. This uniquely human work requires what is sometimes called “soft skills.” This primarily includes the ability for good interpersonal communication, but also such things as the ability to work collaboratively, to adapt to a new set of demands, and to solve problems on the fly. Legal counseling is a prime example according to the general counsel of Microsoft, Brad Smith. Microsoft’s Top Lawyer Toasts Legal Secretaries (Bloomberg Law, 1/18/17). The top lawyer, now President of Microsoft, also opined:

Individuals need to learn new skills to keep pace, and this isn’t always easy.  Over the next decade this could become more daunting still, as technology continues to change rapidly.  There is a broadening need for new technical skills and stronger soft skills.  The ability – and opportunity – to continue learning has itself become more important.

Brad Smith, Constructing a Future that Enables all Americans to Succeed, (Dept. of Commerce guest blog, 11/30/16).

The Wikipedia article on “soft skills” lists ten basic skills as compiled by Heckman and Kautz, Hard Evidence on Soft Skills, Labour Econ. 2012 Aug 1; 19(4): 451–464.

  • Communication – oral, speaking capability, written, presenting, listening.
  • Courtesy – manners, etiquette, business etiquette, gracious, says please and thank you, respectful.
  • Flexibility – adaptability, willing to change, lifelong learner, accepts new things, adjusts, teachable.
  • Integrity – honest, ethical, high morals, has personal values, does what’s right.
  • Interpersonal skills – nice, personable, sense of humor, friendly, nurturing, empathetic, has self-control, patient, sociability, warmth, social skills.
  • Positive attitude – optimistic, enthusiastic, encouraging, happy, confident.
  • Professionalism – businesslike, well-dressed, appearance, poised.
  • Responsibility – accountable, reliable, gets the job done, resourceful, self-disciplined, wants to do well, conscientious, common sense.
  • Teamwork – cooperative, gets along with others, agreeable, supportive, helpful, collaborative.
  • Work ethic – hard working, willing to work, loyal, initiative, self-motivated, on time, good attendance.


As Brad Smith correctly observed, the skills and tasks needed to keep pace with technology include these kinds of soft skills as well as new technological know-how, things like the best methods to implement new predictive coding software. The tasks, both soft and technical, are generally not overly repetitive and typically require some creativity, imagination, flexibility and inventiveness and, in my view, the initiative to exceed original parameters.

A concerned lawyer with real empathy who counsels fellow humans is not likely to be replaced anytime soon by a robot, no matter how cute. There is no substitute for caring, human relationships, for comforting warmth, wit and wisdom. The calm, knowledgeable, confident presence of a lawyer who has been through a problem many times before, and assures you that they can help, is priceless. It brings peace of mind, relaxation and trust far beyond the abilities of any machine.

Stepping-in is one solution for those of us who like working with new technology, but for the rest of humanity, soft skills are now even more important. Even we tech-types need to learn and improve upon our soft skills. The team approach to e-discovery, which is the basic premise of this e-Discovery Team blog, does not work well without them.

Brad Smith's comment on the need for continued learning is key for everyone who wants to keep working in the future. It is the same thing that Bessen, Davenport and Kirby say. Continued learning is one reason I keep writing. It helps me to learn and may help others to learn too, as part of their “autodidactic reading” and “educational bricolage.” (How else would I learn those words?) According to the research of Bessen, Davenport and Kirby, most of the key skills needed to keep pace can only be learned on the job and are usually self-taught. That is one reason online education is so important. It makes it easier than ever for otherwise isolated people to have access to specialized knowledge and trainers.
