Healing a Divided Nation: An 11-Step Path to Unity Through Human and AI Partnership

December 1, 2024

By Ralph Losey.

Political polarization in the United States has reached unprecedented levels, threatening the nation’s social fabric and democratic processes. To tackle this growing crisis, this article proposes a streamlined, three-phase, eleven-step framework developed collaboratively with today’s leading AIs, ChatGPT and Google Gemini. This plan, grounded in practicality and inclusivity, focuses on empowering individuals, communities, and institutions to rebuild trust and unity. By addressing issues like civic education, media literacy, local leadership, and electoral integrity, the framework seeks to heal our ‘House-Divided’ through incremental, measurable steps.

Introduction to the Plan to Start to Heal a Divided Country

The Plan to repair the ‘house-divided’ has three phases.

  1. First Phase: Laying the Groundwork for Unity — This phase focuses on creating the foundational conditions necessary for rebuilding trust and collaboration among Americans. By promoting civic education, fostering empathy through dialogue, and empowering local leaders, Phase 1 sets the stage for more transformative change in later phases.
  2. Second Phase: Collaborative Action for Polarization Reduction — This phase focuses on actionable steps to combat the root causes of division in media, technology, and communities. By empowering citizens with media literacy skills, reforming technology use, and fostering collaborative projects, Phase 2 works to reduce polarization in tangible, visible ways.
  3. Third Phase: Sustaining Unity Through Systemic Change — This final phase ensures the sustainability of unity efforts through systemic reforms that foster fairness, reduce inequality, and create a culture of empathy and collaboration. By addressing both institutional and cultural dimensions, this phase solidifies long-term reconciliation.

By systematically addressing polarization through these structured phases, the plan provides a practical roadmap for rebuilding unity in America. The early phases focus on achievable, visible goals—like empowering local communities and fostering collaboration—while later phases lay the groundwork for systemic, long-term reforms. This phased approach ensures that resources are used effectively, progress is measurable, and the plan can adapt to evolving challenges and feedback.

First Phase: Laying the Groundwork for Unity

  • Step 1: Encourage Local Leadership and Autonomy
  • Step 2: Promote Civic Education and Shared American Identity
  • Step 3: Foster Empathy Through Cross-Group Dialogue and Perspective-Taking


Second Phase: Collaborative Action for Polarization Reduction

  • Step 4: Promote Media Literacy and Trusted Information Ecosystems
  • Step 5: Promote Ethical Technology Use and Respectful Online Engagement
  • Step 6: Incentivize Community Collaboration on Shared Issues
  • Step 7: Create Platforms for Bipartisan Citizen Engagement

Third Phase: Sustaining Unity Through Systemic Change

  • Step 8: Build Trust in Electoral Integrity
  • Step 9: Support Bipartisan Political Reforms
  • Step 10: Address Socioeconomic Disparities
  • Step 11: Embed Empathy and Perspective-Taking into Institutions

Ralph Losey’s Personal Comments on the AI Plan

The AI plan is thorough, strategic, and complex. It feels like a multidimensional game of chess, well beyond my abilities, where each proposed action somehow relates to and supports the others. The AIs examined data on our current dangerous situation and anticipated pushback from both sides of the political spectrum. To handle this, the AI plan includes multiple non-violent strategies to counter resistance, encourage respectful debate, and even manage expected bad-faith opposition. The plan includes specific steps to bridge existing ideological divides. It’s a solid framework that could work, but it will take years of hard social effort to succeed.

Alternatively, an AGI-level artificial intelligence with full autonomy could accomplish this work faster, possibly behind the scenes. The AI in this plan isn’t saying it would do that, nor does it have that capability now. I’m only suggesting that someday it might. Some people would welcome that kind of intervention—perhaps even prefer it—if the alternative was catastrophic enough. One day, a more advanced AI could ‘go rogue,’ either subtly influencing us so we believe we’re choosing unity ourselves or simply forcing us to come together, like it or not.

Personally, I believe a lasting peace will require a joint human-AI effort, rather than one imposed solely by AI. But let’s set idealism aside for a moment. If the choice came down to AI stepping in to act on our behalf or facing near-certain extinction—with the loss of future generations at stake—what would you choose? Should AI intervene to prevent our self-destruction as a necessary fail-safe?

Development of the Eleven Point Plan to Unify America

The eleven-step plan was developed collaboratively by Ralph Losey with the help of EDRM using two of today’s leading AI systems, ChatGPT and Google Gemini. The AIs contributed insights based on their training and on social-divide data current as of November 2024. The plan evolved through a lengthy process of step-by-step, iterative refinement, which included AI adversarial debate techniques.

To ensure the plan’s rigor and balance, these AI systems debated initial drafts, highlighting areas of disagreement and proposing alternative approaches. As a litigator and arbitrator, Ralph was able to resolve the AI debates, subject to additional revisions and input from EDRM’s CEO, Mary Mack. This approach mirrors recent advancements in AI research on the verification and oversight of large language models (LLMs) through adversarial debate techniques, including those described in studies by DeepMind and Anthropic. See: Khan, Hughes, Valentine, Ruis, Sachan, Radhakrishnan, Grefenstette, Bowman, Rocktäschel, Perez, Debating with More Persuasive LLMs Leads to More Truthful Answers (Anthropic, 7/25/24); Kenton, Siegel, Kramár, Brown-Cohen, Albanie, Bulian, Agarwal, Lindner, Tang, Goodman, Shah, On scalable oversight with weak LLMs judging strong LLMs (DeepMind, 7/12/24).
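The adversarial-debate protocol described above can be sketched in a few lines of Python. This is only an illustrative toy, not the actual process used for this plan or in the cited papers: the debater and judge functions are placeholders where real LLM calls would go.

```python
def debater(position, round_no):
    # Placeholder for an LLM prompted to argue for `position`.
    return f"Round {round_no}: evidence supporting '{position}'"

def judge(transcript):
    # Placeholder for a (possibly weaker) judge model. Here it simply
    # tallies arguments per side; a real judge would weigh persuasiveness.
    tally = {}
    for side, _argument in transcript:
        tally[side] = tally.get(side, 0) + 1
    return max(tally, key=tally.get)

def run_debate(position_a, position_b, rounds=3):
    # Each round, both debaters add an argument to a shared transcript;
    # the judge then picks a winner from the full transcript.
    transcript = []
    for r in range(1, rounds + 1):
        transcript.append((position_a, debater(position_a, r)))
        transcript.append((position_b, debater(position_b, r)))
    return judge(transcript), transcript

winner, transcript = run_debate("keep the draft step", "revise the draft step")
print(winner, len(transcript))
```

The point of the structure, as in the cited research, is that a judge weaker than the debaters can still reach better answers by comparing competing arguments rather than evaluating a single model's unopposed output.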

In addition to these adversarial methods, Ralph’s prompt engineering relied heavily upon AI’s Theory of Mind (ToM) capabilities, as discovered by Michal Kosinski, a computational psychologist at Stanford. Evaluating large language models in theory of mind tasks (Proceedings of the National Academy of Sciences “PNAS,” 11/04/24). ToM refers to people’s ability to understand the minds of others. This ability recently and surprisingly emerged in the latest generative AI models. It will be discussed in Ralph’s next article, GPT-4 Breakthrough: Emerging Theory of Mind Capabilities in AI.

While neither ChatGPT nor Google Gemini has attained superintelligence, their combined ability to synthesize diverse perspectives and anticipate challenges far exceeded Ralph’s own. He felt like he was playing checkers while the two AIs were playing 3D chess. But ultimately, with the help of EDRM’s CEO, Mary Mack, a practical eleven-point plan emerged. This hybrid collaborative process—pairing AI’s analytical capabilities with human values and expertise—can serve as a model for future efforts to tackle complex issues.

Eleven-Step Plan to Unify America

Next we share a bullet-point overview of AI’s eleven-step plan. Each step includes a general description, implementation strategy, expected obstacles and strategies to address resistance, specific action items, and an evaluation and assessment procedure. This comprehensive structure acknowledges the need for a phased approach to address the challenges of polarization while ensuring that each step is achievable, measurable, and strategically aligned with the plan’s broader goals.

First Phase: Laying the Groundwork for Unity

Step 1: Encourage Local Leadership and Autonomy

  • Description:
    • Objective: Empower local communities and leaders to design and lead unity-building initiatives tailored to their unique challenges and needs.
    • Impact: Locally driven solutions foster trust, respect for diversity, and collaboration, creating momentum for national reconciliation.
  • Implementation Strategy:
    • Partner with local governments, civic organizations, and businesses to establish pilot programs for unity initiatives.
    • Provide training and resources for local leaders to design and execute projects addressing their community’s specific needs.
  • Obstacles and Strategies:
    • Obstacle: Resistance from communities skeptical of external involvement.
      • Strategy: Frame initiatives as community-driven efforts, offering support without mandates.
    • Obstacle: Limited funding or capacity in underserved areas.
      • Strategy: Target underserved communities first with federal or philanthropic grants to ensure equitable participation.
  • Action Items:
    • Create a “Local Unity Fund” to support grassroots projects.
    • Develop a toolkit for local leaders with templates for successful programs.
    • Host regional leadership summits to share best practices.
  • Evaluation and Assessment:
    • Track the number and diversity of participating communities.
    • Measure changes in local collaboration through surveys and attendance at initiatives.
    • Assess the scalability of successful pilot programs.

Step 2: Promote Civic Education and Shared American Identity

  • Description:
    • Objective: Strengthen civic understanding by implementing nonpartisan education programs that reflect the diverse perspectives and shared values of all Americans. See e.g. Educating for American Democracy.
    • Impact: Better civic knowledge and a stronger sense of common purpose across divides, fostering long-term unity.
  • Implementation Strategy:
    • Develop and distribute nonpartisan civic education curricula in schools, focusing on democratic values and diverse historical perspectives.
    • Collaborate with educators and bipartisan advisors to ensure inclusivity and balance.
  • Obstacles and Strategies:
    • Obstacle: Resistance to perceived bias in curricula.
      • Strategy: Include representatives from across the political spectrum to review materials.
    • Obstacle: Uneven access to educational resources in underserved areas.
      • Strategy: Partner with libraries and online platforms to offer free resources.
  • Action Items:
    • Launch a public awareness campaign highlighting the importance of civic education.
    • Train teachers in unbiased delivery of content through workshops.
    • Host civic education fairs to engage students and communities.
  • Evaluation and Assessment:
    • Pre- and post-assessments of students’ civic knowledge.
    • Feedback from educators and parents on curriculum effectiveness.
    • Monitor participation rates in civic education programs across regions.

Step 3: Foster Empathy Through Cross-Group Dialogue and Perspective-Taking

  • Description:
    • Objective: Facilitate structured dialogue and role-playing exercises to help individuals understand differing perspectives and reduce stereotypes.
    • Impact: Greater empathy and mutual respect among community members, creating a foundation for civil discourse and collaboration.
  • Implementation Strategy:
    • Organize structured dialogues in community centers, schools, and workplaces.
    • Use skilled facilitators trained in conflict resolution to guide discussions.
  • Obstacles and Strategies:
    • Obstacle: Fear of hostility or unproductive confrontations.
      • Strategy: Begin with low-stakes topics to build trust before addressing contentious issues.
    • Obstacle: Limited participant diversity.
      • Strategy: Actively recruit participants from varied political, cultural, and socioeconomic backgrounds.
  • Action Items:
    • Develop a “Community Conversation Kit” with guidelines and materials.
    • Train and certify dialogue facilitators in conflict resolution techniques.
    • Partner with local media to promote dialogue events.
  • Evaluation and Assessment:
    • Track attendance and demographic diversity at events.
    • Conduct post-event surveys to measure changes in attitudes and understanding.
    • Review the number of dialogues held and repeat participation rates.

Second Phase: Collaborative Action for Polarization Reduction

Step 4: Promote Media Literacy and Trusted Information Ecosystems

  • Description:
    • Objective: Equip citizens with tools to critically evaluate media, identify misinformation, and access credible, transparent news sources.
    • Impact: Increased trust in information and critical thinking across political divides.
  • Implementation Strategy:
    • Partner with schools, libraries, and community organizations to offer media literacy workshops.
    • Provide accessible, age-appropriate online resources and tools.
  • Obstacles and Strategies:
    • Obstacle: Mistrust in media education initiatives.
      • Strategy: Position media literacy as a neutral, empowering skill for all citizens, not tied to specific political goals.
    • Obstacle: Limited reach in rural or underserved communities.
      • Strategy: Use digital platforms and mobile outreach programs to ensure broad access.
  • Action Items:
    • Create a national “Truth Detectives” campaign to promote media literacy.
    • Develop a public service announcement series on recognizing misinformation.
    • Train teachers and community leaders in delivering media literacy education.
  • Evaluation and Assessment:
    • Conduct pre- and post-workshop assessments of media literacy skills.
    • Track participation rates in workshops and online programs.
    • Monitor community feedback on the effectiveness of resources.

Step 5: Promote Ethical Technology Use and Respectful Online Engagement

  • Description:
    • Objective: Collaborate with tech companies to reduce the amplification of divisive content, create algorithms that reward civil discourse, and promote transparency in digital engagement.
    • Impact: A safer and more respectful digital environment where diverse voices can coexist.
  • Implementation Strategy:
    • Collaborate with tech companies to redesign algorithms that amplify divisive content.
    • Advocate for transparency in content moderation and targeted advertising practices.
  • Obstacles and Strategies:
    • Obstacle: Resistance from tech companies due to profitability concerns.
      • Strategy: Frame ethical tech reforms as improving user experience and public trust.
    • Obstacle: Fear of censorship among users.
      • Strategy: Clearly communicate the goals and processes of content moderation.
  • Action Items:
    • Develop a “Civility Certification Program” for tech platforms.
    • Fund research into the impact of algorithm changes on polarization.
    • Launch public awareness campaigns about respectful online interaction.
  • Evaluation and Assessment:
    • Monitor changes in user behavior and engagement patterns on platforms.
    • Analyze feedback from users on platform safety and fairness.
    • Measure decreases in divisive content amplification over time.

Step 6: Incentivize Community Collaboration on Shared Issues

  • Description:
    • Objective: Support nonpartisan local projects addressing universal challenges like infrastructure, public health, or environmental sustainability.
    • Impact: Builds trust and cooperation through shared problem-solving, reducing polarization at the community level.
  • Implementation Strategy:
    • Identify local, nonpartisan projects like disaster relief or environmental cleanups that encourage diverse participation.
    • Provide financial incentives and public recognition for successful collaborations.
  • Obstacles and Strategies:
    • Obstacle: Risk of political framing of projects.
      • Strategy: Focus exclusively on universal, nonpartisan issues like infrastructure or public safety.
    • Obstacle: Limited interest in participation.
      • Strategy: Offer small grants and public awards to increase motivation.
  • Action Items:
    • Launch a “Community Builders Fund” to support local initiatives.
    • Create a recognition program like “Hometown Heroes” to reward collaborative projects.
    • Partner with businesses to provide matching grants.
  • Evaluation and Assessment:
    • Measure participation rates and project outcomes.
    • Conduct surveys on perceptions of community trust and collaboration.
    • Track the diversity of participants in funded projects.

Step 7: Create Platforms for Bipartisan Citizen Engagement

  • Description:
    • Objective: Establish spaces—both virtual and physical—where citizens can collaborate on shared goals, such as disaster response or public safety, regardless of political affiliation. See e.g. PurpleAmerica’s Substack.
    • Impact: Strengthened civic ties through visible, cooperative efforts.
  • Implementation Strategy:
    • Develop virtual forums and in-person town halls where citizens can collaborate on shared interests.
    • Use AI tools to facilitate discussions and propose solutions.
  • Obstacles and Strategies:
    • Obstacle: Low initial participation.
      • Strategy: Partner with trusted community leaders and influencers to promote platforms.
    • Obstacle: Risk of unproductive debates.
      • Strategy: Moderate discussions with trained facilitators and AI tools.
  • Action Items:
    • Launch a bipartisan “Citizen Voices” platform for virtual collaboration.
    • Host regional and national citizen summits to address shared concerns.
    • Provide small grants for citizen-proposed initiatives.
  • Evaluation and Assessment:
    • Track participation and project completion rates.
    • Assess user satisfaction and trust in the platforms.
    • Measure engagement on specific issues tackled by citizens.

Third Phase: Sustaining Unity Through Systemic Change

Step 8: Build Trust in Electoral Integrity

  • Description:
    • Objective: Enhance public trust in elections through reforms like transparent vote audits, improved election technology, and accessible voting systems.
    • Impact: Restored confidence in democratic processes as a foundation for reconciliation.
  • Implementation Strategy:
    • Implement transparent vote audits and enhance election security technologies.
    • Provide voter education on electoral processes to reduce misinformation.
  • Obstacles and Strategies:
    • Obstacle: Perception of partisanship in reforms.
      • Strategy: Emphasize bipartisan oversight in all electoral integrity measures.
    • Obstacle: Technical and funding constraints.
      • Strategy: Partner with technology firms and civic organizations to develop cost-effective solutions.
  • Action Items:
    • Launch a “Trust the Vote” initiative to promote transparency.
    • Train election officials in secure, transparent practices.
    • Fund research into advanced election security systems.
  • Evaluation and Assessment:
    • Monitor changes in public trust in elections through surveys.
    • Track the implementation of election security measures.
    • Measure the effectiveness of voter education campaigns.

Step 9: Support Bipartisan Political Reforms

  • Description:
    • Objective: Advocate for initiatives like ranked-choice voting, independent redistricting, and transparency in governance to reduce partisanship and ensure fair representation.
    • Impact: Strengthened democratic systems that work for all Americans.
  • Implementation Strategy:
    • Form bipartisan coalitions to advocate for reforms like ranked-choice voting and independent redistricting.
    • Conduct public education campaigns on the benefits of these reforms.
  • Obstacles and Strategies:
    • Obstacle: Resistance from political leaders with vested interests.
      • Strategy: Emphasize reforms as democratic improvements, not partisan maneuvers.
    • Obstacle: Public misunderstanding of reforms.
      • Strategy: Use clear, accessible communication to explain the benefits.
  • Action Items:
    • Host informational sessions on political reform topics.
    • Partner with civic groups to monitor reform implementation.
    • Launch a “Fair Votes, Fair Voices” campaign to raise awareness.
  • Evaluation and Assessment:
    • Track public support for reforms through surveys.
    • Measure implementation progress and voter turnout changes.
    • Assess diversity and collaboration in reformed political institutions.

Step 10: Address Socioeconomic Disparities

  • Description:
    • Objective: Promote job creation, affordable healthcare, and equitable access to education while addressing regional disparities.
    • Impact: Reduced inequality and resentment, fostering shared purpose and unity.
  • Implementation Strategy:
    • Collaborate with local and national organizations to target regional economic disparities.
    • Fund education, job training, and healthcare access programs in underserved areas.
  • Obstacles and Strategies:
    • Obstacle: Political disagreements on economic policy.
      • Strategy: Focus on universal goals like job creation and affordable healthcare.
    • Obstacle: Resource allocation challenges.
      • Strategy: Use data-driven methods to prioritize high-need areas.
  • Action Items:
    • Expand funding for job training and retraining programs.
    • Develop housing and healthcare initiatives tailored to local needs.
    • Create economic development grants for underserved regions.
  • Evaluation and Assessment:
    • Track economic indicators like employment and poverty rates.
    • Assess the effectiveness of education and training programs through participant outcomes.
    • Monitor regional changes in economic opportunities.

Step 11: Embed Empathy and Perspective-Taking into Institutions

  • Description:
    • Objective: Introduce programs in schools, workplaces, and public services that encourage empathy and collaboration.
    • Impact: A cultural shift toward mutual understanding and inclusion as the norm.
  • Implementation Strategy:
    • Partner with schools, workplaces, and public service organizations to integrate empathy and perspective-taking programs into their training and operations.
    • Use storytelling, role-playing, and other experiential methods to foster understanding of diverse perspectives.
  • Obstacles and Strategies:
    • Obstacle: Perception of empathy programs as unnecessary or ideological.
      • Strategy: Frame these programs as life skills that enhance communication, collaboration, and decision-making.
    • Obstacle: Limited resources for implementation.
      • Strategy: Develop scalable, low-cost online modules and leverage partnerships with local organizations.
  • Action Items:
    • Launch a “Walk in Their Shoes” initiative to provide empathy training in schools and workplaces.
    • Develop online resources, such as interactive videos and AI-driven role-playing tools, for empathy-building exercises.
    • Partner with media outlets to showcase real-life stories of empathy bridging divides.
  • Evaluation and Assessment:
    • Use pre- and post-program surveys to measure shifts in participants’ empathy levels and openness to diverse perspectives.
    • Track engagement rates with online resources and participation in training programs.
    • Analyze qualitative feedback from participants on the perceived impact of the initiatives.

Conclusion: Rebuilding America’s Unity Together

Healing America’s deep political divide is a monumental task, but with collective commitment and a clear plan, real progress is achievable. This eleven-step framework offers a pragmatic, inclusive path forward, addressing both the symptoms and root causes of polarization. By focusing on local empowerment, collaboration, and systemic reforms, the plan provides a roadmap for restoring trust, bridging divides, and fostering long-term unity.

While AI played an instrumental role in crafting this strategy, the true power to mend our ‘House Divided’ lies in our hands. As Abraham Lincoln reminded us in 1858, the strength of this nation depends on its people’s ability to unite in pursuit of a common purpose. Today, we have the tools, insights, and strategies to achieve this unity without the destructive conflicts of the past.

This path will not be easy. It requires courage, collaboration, and a willingness to engage with those who hold differing views. Yet, by taking small, tangible steps—together—we can create a ripple effect of change that transforms communities and rebuilds our national fabric.

Will you join this effort? Will you champion a step, organize a dialogue, or lead a local initiative? The success of this plan depends on ordinary Americans committing to extraordinary efforts. Together, we can prove that the ideals of unity and democracy are more than just words; they are our collective reality.

Now listen to the EDRM Echoes of AI’s podcast of the article, Echoes of AI on AI’s 11-Step Plan for Unity. Hear two Gemini model AIs talk about this article. They wrote the podcast, not Ralph.

Pick one, or many, of the thirty-three projects outlined in the Plan to Unite America and let Ralph know at epluribusunum.ai. See here for more details on each of the 33 projects. Be part of the solution. Sign up for one today.

Ralph Losey Creative Commons Copyright 2024. Distribution of this document is encouraged with attribution, but do not modify without Losey’s permission.



TAR Course Expands Again: Standardized Best Practice for Technology Assisted Review

February 11, 2018

The TAR Course has a new class, the Seventeenth Class: Another “Player’s View” of the Workflow. Several other parts of the Course have been updated and edited. It now has Eighteen Classes (listed at end). The TAR Course is free and follows the Open Source tradition. We freely disclose the method for electronic document review that uses the latest technology tools for search and quality controls. These technologies and methods empower attorneys to find the evidence needed for all text-based investigations. The TAR Course shares the state of the art for using AI to enhance electronic document review.

The key is to know how to use the document review search tools that are now available to find the targeted information. We have been working on various methods of use since our case before Judge Andrew Peck in Da Silva Moore in 2012. After we helped get the first judicial approval of predictive coding in Da Silva, we began a series of several hundred document reviews, both in legal practice and scientific experiments. We have now refined our method many times to attain optimal efficiency and effectiveness. We call our latest method Hybrid Multimodal IST Predictive Coding 4.0.

The Hybrid Multimodal method taught by the TARcourse.com combines law and technology. Successful completion of the TAR course requires knowledge of both fields. In the technology field, active machine learning is the most important technology to understand, especially the intricacies of training selection, such as Intelligently Spaced Training (“IST”). In the legal field, the proportionality doctrine is key to the pragmatic application of the method taught in the TAR Course. We give away the information on these methods; we open-source it through this publication.

All we can transmit by online teaching is information, and a small bit of knowledge. Knowing the Information in the TAR Course is a necessary prerequisite for real knowledge of Hybrid Multimodal IST Predictive Coding 4.0. Knowledge, as opposed to Information, is taught the same way as advanced trial practice, by second chairing a number of trials. This kind of instruction is the one with real value, the one that completes a doc review project at the same time it completes training. We charge for document review and throw in the training. Information on the latest methods of document review is inherently free, but Knowledge of how to use these methods is a pay-to-learn process.

The Open Sourced Predictive Coding 4.0 method is applied to particular applications and search projects. There are always some customizations and modifications to the default standards to meet the project requirements. All variations are documented and can be fully explained and justified. This is a process where the clients learn by doing and following along with Losey’s work.

What he has learned through a lifetime of teaching and studying Law and Technology is that real Knowledge can never be gained by reading or listening to presentations. Knowledge can only be gained by working with other people in real-time (or near-time), in this case, to carry out multiple electronic document reviews. The transmission of knowledge comes from the Q&A ESI Communications process. It comes from doing. When we lead a project, we help students to go from mere Information about the methods to real Knowledge of how it works. For instance, we do not just make the Stop decision, we also explain the decision. We share our work-product.

Knowledge comes from observing the application of the legal search methods in a variety of different review projects. Eventually some Wisdom may arise, especially as you recover from errors. For background on this triad, see Examining the 12 Predictions Made in 2015 in “Information → Knowledge → Wisdom” (2017). Once Wisdom arises some of the sayings in the TAR Course may start to make sense, such as our favorite “Relevant Is Irrelevant.” Until this koan is understood, the legal doctrine of Proportionality can be an overly complex weave.

The TAR Course is now composed of eighteen classes:

  1. First Class: Background and History of Predictive Coding
  2. Second Class: Introduction to the Course
  3. Third Class: TREC Total Recall Track, 2015 and 2016
  4. Fourth Class: Introduction to the Nine Insights from TREC Research Concerning the Use of Predictive Coding in Legal Document Review
  5. Fifth Class: 1st of the Nine Insights – Active Machine Learning
  6. Sixth Class: 2nd Insight – Balanced Hybrid and Intelligently Spaced Training (IST)
  7. Seventh Class: 3rd and 4th Insights – Concept and Similarity Searches
  8. Eighth Class: 5th and 6th Insights – Keyword and Linear Review
  9. Ninth Class: 7th, 8th and 9th Insights – SME, Method, Software; the Three Pillars of Quality Control
  10. Tenth Class: Introduction to the Eight-Step Work Flow
  11. Eleventh Class: Step One – ESI Communications
  12. Twelfth Class: Step Two – Multimodal ECA
  13. Thirteenth Class: Step Three – Random Prevalence
  14. Fourteenth Class: Steps Four, Five and Six – Iterative Machine Training
  15. Fifteenth Class: Step Seven – ZEN Quality Assurance Tests (Zero Error Numerics)
  16. Sixteenth Class: Step Eight – Phased Production
  17. Seventeenth Class: Another “Player’s View” of the Workflow (class added 2018)
  18. Eighteenth Class: Conclusion

With a lot of hard work you can complete this online training program in a long weekend, but most people take a few weeks. After that, this course can serve as a solid reference to consult during complex document review projects. It can also serve as a launchpad for real Knowledge and eventually some Wisdom into electronic document review. TARcourse.com is designed to provide you with the Information needed to start this path to AI enhanced evidence detection and production.

 


WHY I LOVE PREDICTIVE CODING: Making Document Review Fun Again with Mr. EDR and Predictive Coding 4.0

December 3, 2017

Many lawyers and technologists like predictive coding and recommend it to their colleagues. They have good reasons. It has worked for them. It has allowed them to do e-discovery reviews in an effective, cost efficient manner, especially the big projects. That is true for me too, but that is not why I love predictive coding. My feelings come from the excitement, fun, and amazement that often arise from seeing it in action, first hand. I love watching the predictive coding features in my software find documents that I could never have found on my own. I love the way the AI in the software helps me to do the impossible. I really love how it makes me far smarter and skilled than I really am.

I have been getting those kinds of positive feelings consistently by using the latest Predictive Coding 4.0 methodology and KrolLDiscovery’s latest eDiscovery.com Review software (“EDR”). So too have my e-Discovery Team members who helped me to participate in TREC 2015 and 2016 (the great science experiment on the latest text search techniques, sponsored by the National Institute of Standards and Technology). During our grueling forty-five days of experiments in 2015, and again for sixty days in 2016, we came to admire the intelligence of the new EDR software so much that we decided to personalize the AI as a robot. We named him Mr. EDR out of respect. He even has his own website now, MrEDR.com, where he explains how he helped my e-Discovery Team in the 2015 and 2016 TREC Total Recall Track experiments.

The bottom line of this research for us was that it proved and improved our methods. Our latest version 4.0 of Predictive Coding, the Hybrid Multimodal IST Method, is the result. We have even open-sourced this method (well, most of it) and teach it in a free seventeen-class online program: TARcourse.com. Aside from testing and improving our methods, another, perhaps even more important, result of TREC for us was our rediscovery that with good teamwork, and good software like Mr. EDR at your side, document review need never be boring again. The documents themselves may well be boring as hell; that is another matter. But the search for them need not be.

How and Why Predictive Coding is Fun

Steps Four, Five and Six of the standard eight-step workflow for Predictive Coding 4.0 are where we work with the active machine-learning features of Mr. EDR. These are its predictive coding features, a type of artificial intelligence. We train the computer on our conception of relevance by showing it relevant and irrelevant documents that we have found. The software is then designed to go out and find all of the other relevant documents in the total dataset. One of the skills we learn is knowing when we have taught enough and can stop the training and complete the document review. At TREC we call that the Stop decision. Getting it right is important to keep down the costs of document review.

We use a multimodal approach to find training documents, meaning we use all of the other search features of Mr. EDR to find relevant ESI, such as keyword, similarity, and concept searches. We iterate the training with sample documents, both relevant and irrelevant, until the computer starts to understand the scope of relevance we have in mind. It is a training exercise to make our AI smart, to get it to understand the basic ideas of relevance for that case. It usually takes multiple rounds of training for Mr. EDR to understand what we have in mind. But he is a fast learner, and by using the latest hybrid multimodal IST (“intelligently spaced training”) techniques, we can usually complete his training in a few days. At TREC, where we were moving fast after hours with the A-Team, we completed some of the training experiments in just a few hours.

After a while Mr. EDR starts to “get it,” he starts to really understand what we are after, what we think is relevant in the case. That is when a happy shock and awe type moment can happen. That is when Mr. EDR’s intelligence and search abilities start to exceed our own. Yes. It happens. The pupil then starts to evolve beyond his teachers. The smart algorithms start to see patterns and find evidence invisible to us. At that point we sometimes even let him train himself by automatically accepting his top-ranked predicted relevant documents without even looking at them. Our main role then is to determine a good range for the automatic acceptance and do some spot-checking. We are, in effect, allowing Mr. EDR to take over the review. Oh what a feeling to then watch what happens, to see him keep finding new relevant documents and keep getting smarter and smarter by his own self-programming. That is the special AI-high that makes it so much fun to work with Predictive Coding 4.0 and Mr. EDR.
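The cycle described above — find seed documents, train, let the software rank the rest, and late in the project auto-accept its top-ranked predictions above a cutoff — can be sketched in miniature. This is an illustrative toy only, not KrolLDiscovery’s EDR software: the keyword-overlap “model,” the 0.5 acceptance threshold, and the sample documents are all my assumptions, standing in for real machine learning.

```python
# Toy active-learning loop for document relevance, illustrating the
# train / rank / auto-accept cycle. A trivial keyword-overlap scorer
# stands in for the real machine-learning model.

def train(model_terms, labeled):
    """Grow the model's notion of relevance from reviewed documents."""
    for doc, relevant in labeled:
        if relevant:
            model_terms |= set(doc.split())
    return model_terms

def score(model_terms, doc):
    """Fraction of a document's words that match the learned terms."""
    words = set(doc.split())
    return len(words & model_terms) / len(words) if words else 0.0

corpus = [
    "fraud invoice payment scheme",
    "lunch menu cafeteria specials",
    "invoice wire transfer fraud",
    "holiday party schedule",
]

# Round 1: the attorney reviews two seed documents found by keyword search.
model = train(set(), [(corpus[0], True), (corpus[1], False)])

# Rank the unreviewed documents; high scorers go back to the reviewer, or,
# near the end of the project, are auto-accepted above a cutoff.
ranked = sorted(corpus[2:], key=lambda d: score(model, d), reverse=True)
auto_accepted = [d for d in ranked if score(model, d) >= 0.5]
print(auto_accepted)  # → ['invoice wire transfer fraud']
```

In a real review the reviewer's decisions on each round feed back into the next round of training (the iteration described above), and the Stop decision is made when new rounds stop surfacing new relevant documents.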

It does not happen in every project, but with the new Predictive Coding 4.0 methods and the latest Mr. EDR, we are seeing this kind of transformation happen more and more often. It is a tipping point in the review when we see Mr. EDR go beyond us. He starts to unearth relevant documents that my team would never even have thought to look for. The relevant documents he finds are sometimes completely dissimilar to any others we found before. They do not have the same keywords, or even the same known concepts. Still, Mr. EDR sees patterns in these documents that we do not. He can find the hidden gems of relevance, even outliers and black swans, if they exist. When he starts to train himself, that is the point in the review when we think of Mr. EDR as going into superhero mode. At least, that is the way my young e-Discovery Team members like to talk about him.

By the end of many projects the algorithmic functions of Mr. EDR have attained a higher intelligence and skill level than our own (at least on the task of finding the relevant evidence in the document collection). He is always lightning fast and inexhaustible, even untrained, but by the end of his training, he becomes a search genius. Watching Mr. EDR in that kind of superhero mode is what makes Predictive Coding 4.0 a pleasure.

The Empowerment of AI Augmented Search

It is hard to describe the combination of pride and excitement you feel when Mr. EDR, your student, takes your training and then goes beyond you. More than that, the super-AI you created then empowers you to do things that would have been impossible before, absurd even. That feels pretty good too. You may not be Iron Man, or look like Robert Downey, but you will be capable of remarkable feats of legal search strength.

For instance, using Mr. EDR as our Iron Man-like suits, my e-discovery A-Team of three attorneys was able to do thirty different review projects and classify 17,014,085 documents in 45 days. See the 2015 TREC experiment summary at MrEDR.com. We did these projects mostly at night and on weekends, while holding down our regular jobs. What makes this seem crazy, even impossible, is that we accomplished it while personally reviewing only 32,916 documents. That is less than 0.2% of the total collection. That means we relied on predictive coding to do 99.8% of our review work. Incredible, but true.

Using traditional linear review methods it would have taken us 45 years to review that many documents! Instead, we did it in 45 days. Plus our recall and precision rates were insanely good. We even scored 100% precision and 100% recall in one TREC project in 2015 and two more in 2016. You read that right. Perfection. Many of our other projects attained scores in the high and mid nineties. We are not saying you will get results like that. Every project is different, and some are much more difficult than others. But we are saying that this kind of AI-enhanced review is not only fast and efficient, it is effective.
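The figures above are easy to check. Here is a quick back-of-the-envelope verification; note that the linear review rate, team size, and working hours per year are my assumptions for illustration (the article does not state them), so the person-years estimate is only a rough plausibility check of the 45-years claim.

```python
# Sanity-check the review statistics quoted above.
total_docs = 17_014_085
personally_reviewed = 32_916

# Fraction of the collection the attorneys read themselves.
fraction = personally_reviewed / total_docs
print(f"Personally reviewed: {fraction:.2%}")  # about 0.19%, i.e. under 0.2%

# Hypothetical linear-review estimate. Assumptions (not from the article):
# one attorney reads ~60 documents per hour, a 3-attorney team, and
# 2,000 working hours per year.
docs_per_hour, attorneys, hours_per_year = 60, 3, 2_000
years = total_docs / (docs_per_hour * attorneys * hours_per_year)
print(f"Linear review would take roughly {years:.0f} years for this team")
```

Under these assumed rates the estimate comes out in the same ballpark as the 45 years quoted above, which is the point: purely manual review of a collection this size is measured in decades, not days.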

Yes, it’s pretty cool when your little AI creation does all the work for you and makes you look good. Still, no robot could do this without your training and supervision. We are a team, which is why we call it hybrid multimodal, man and machine.

Having Fun with Scientific Research at TREC 2015 and 2016

During the 2015 TREC Total Recall Track experiments my team would sometimes get totally lost on a few of the really hard Topics. We were not given legal issues to search, as usual. They were arcane technical hacker issues, political issues, or local news stories. Not only were we in new fields, the scope of relevance of the thirty Topics was never really explained. (We were given one-to-three-word explanations in 2015; in 2016 we got a whole sentence!) We had to figure out the intended relevance during the project based on feedback from the automated TREC document adjudication system. We would have some limited understanding of relevance based on our suppositions about the initial keyword hints, and so we could begin to train Mr. EDR with that. But in several Topics we never had any real understanding of exactly what TREC thought was relevant.

This was a very frustrating situation at first, but, and here is the cool thing, even though we did not know, Mr. EDR knew. That’s right. He saw the TREC patterns of relevance hidden to us mere mortals. In many of the thirty Topics we would just sit back and let him do all of the driving, like a Google car. We would often just cheer him on (and each other) as the TREC systems kept saying Mr. EDR was right, the documents he selected were relevant. The truth is, during much of the 45 days of TREC we were like kids in a candy store having a great time. That is when we decided to give Mr. EDR a cape and superhero status. He never let us down. It is a great feeling to create an AI with greater intelligence than your own and then see it augment and improve your legal work. It is truly a hybrid human-machine partnership at its best.

I hope you get the opportunity to experience this for yourself someday. The TREC experiments in 2015 and 2016 on recall in predictive coding are over, but the search for truth and justice goes on in lawsuits across the country. Try it on your next document review project.

Do What You Love and Love What You Do

Mr. EDR, and other good predictive coding software like it, can augment our own abilities and make us incredibly productive. This is why I love predictive coding and would not trade it for any other legal activity I have ever done (although I have had similar highs from oral arguments that went great, or the rush that comes from winning a big case).

The excitement of predictive coding comes through clearly when Mr. EDR is fully trained and able to carry on without you. It is a kind of Kurzweilian mini-singularity event. It usually happens near the end of the project, but it can happen earlier, when your computer catches on to what you want and starts to find the hidden gems you missed. I suggest you give Predictive Coding 4.0 and Mr. EDR a try. To make it easier I open-sourced our latest method and created an online course, TARcourse.com. It will teach anyone our method, if they have the right software. Learn the method, get the software, and then you too can have fun with evidence search. You too can love what you do. Document review need never be boring again.

Caution

One note of caution: most e-discovery vendors, including the largest, do not have active machine learning features built into their document review software. Even the few that do have active machine learning do not necessarily follow the Hybrid Multimodal IST Predictive Coding 4.0 approach that we used to attain these results. They instead rely entirely on machine-selected documents for training, or, even worse, rely entirely on randomly selected documents to train the software, or use elaborate, unnecessary secret control sets.

The algorithms used by some vendors who say they have “predictive coding” or “artificial intelligence” are not very good. Scientists tell me that some are only dressed-up concept search or unsupervised document clustering. Only bona fide active machine learning algorithms create the kind of AI experience that I am talking about. Software for document review that does not have any active machine learning features may be cheap, and may be popular, but it lacks the power that I love. Without active machine learning, which is fundamentally different from mere “analytics,” it is not possible to boost your intelligence with AI. So beware of software that just says it has advanced analytics. Ask whether it has “active machine learning.”

It is impossible to do the things described in this essay unless the software you are using has active machine learning features. This is clearly the way of the future. It is what makes document review enjoyable and why I love to do big projects. It turns a scary task into a fun one.

So, if you tried “predictive coding” or “advanced analytics” before, and it did not work for you, it could well be the software’s fault, not yours. Or it could be the poor method you were following. The method that we developed in Da Silva Moore, where my firm represented the defense, was a version 1.0 method. Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182, 183 (S.D.N.Y. 2012). We have come a long way since then. We have eliminated unnecessary random control sets and moved to continuous training, instead of train-then-review. This is all spelled out in TARcourse.com, which teaches our latest version 4.0 techniques.

The new 4.0 methods are not hard to follow. TARcourse.com puts our methods online and teaches both the theory and the practice. And the 4.0 methods certainly work; we have proven that at TREC, but only if you have good software. With just a little training, and some help at first from consultants (most vendors with bona fide active machine learning features will have good ones to help), you can have the kind of success and excitement that I am talking about.

Do not give up if it does not work for you the first time, especially in a complex project. Try another vendor instead, one that may have better software and better consultants. Also, be sure that your consultants are Predictive Coding 4.0 experts, and that you follow their advice. Finally, remember that the cheapest software is almost never the best, and, in the long run will cost you a small fortune in wasted time and frustration.

Conclusion

Love what you do. It is a great feeling and a surefire way to job satisfaction and success. With these new predictive coding technologies it is easier than ever to love e-discovery. Try them out. Treat yourself to the AI high that comes from using smart machine learning software and fast computers. There is nothing else like it. If you switch to the 4.0 methods and software, you too can know that thrill. You can watch an advanced intelligence, one you helped create, exceed your own abilities, exceed anyone’s abilities. You can sit back and watch Mr. EDR complete your search for you. You can watch him do so in record time and with record results. It is amazing to see good software find documents that you know you would never have found on your own.

Predictive coding AI in superhero mode can be exciting to watch. Why deprive yourself of that? Who says document review has to be slow and boring? Start making the practice of law fun again.

Here is the PDF version of this article, which you may download and distribute, so long as you do not revise it or charge for it.