Information → Knowledge → Wisdom: Progression of Society in the Age of Computers

April 5, 2015

The personal computer revolution started by the hacker elite in the 1970s has completely transformed the world. From a historical perspective our current computer-based culture is a relative newborn. Yet it is already dominant. The first generation of hackers, born in the fifties and epitomized by Steve Jobs and Steve Wozniak, has succeeded beyond everyone’s wildest dreams. They have quickly changed our world into an information-based society. The dark days of ignorance, misinformation, dogma, and blind belief are receding. Some of the power elite of pre-information, pre-technology societies still try to block free information. But this is a futile, desperate attempt to maintain social control. Eventually even the Great Firewall of China will come down.


The insanely great success of our computational age will continue. Its social impact will evolve and grow even larger. Society will continue to change quickly as the power and sophistication of our computers continues to grow, and as our abilities to use these new computational skills continue to improve.

The old social norms based on superstition, lies, and half-truths are dying. The logical progression I see is from the Information-based world spawned by computers – which is where we are now – to the next step of a Knowledge-based society, and finally, someday, to a Wisdom-based culture.

The rapid rise of personal computers and the World Wide Web of connected computers created an unexpected flood of electronic information. So much so that many (including me, before I thought this through) often refer to our times as the Information Age. But that is wrong.

A more correct description is that we live in the Age of Personal Computing. This is an age where hackers and technology rule. The first large impact of personal computers was an exponential spike in the amount of information, plus the democratization and globalization of information distribution and communication. The first changes of the computer revolution allowed everyone, everywhere, to be informed. Computers not only dramatically increased the amount of information we have, but equalized its distribution on a global scale. The result is a whole new world.

The spike and distribution of online information is just a first major consequence of the New Age of Computation. It will not be the last. The focus on information alone will soon change, indeed, must soon change. The information explosion is nowhere near the final goal. Information alone is dangerous and superficial. Our very survival as a society depends on our quick transition to the next stage of a computer culture, one where Knowledge is the focus, not Information.

We must now quickly evolve from shallow, merely informed people with short attention spans, and superficial, easily manipulated insights, to thoughtful, knowledgeable people. Then ultimately, some day, we must evolve to become truly wise people.

Wisdom is the Ultimate Goal of the Personal Computer Revolution – Not Information, Not Knowledge

The first step – Information – is just a stepping stone to a more mature, Knowledge-based culture. I predict society will make this transition in the next five to twenty years. As great an achievement as this will be, Knowledge alone is also just a dangerous stepping stone. It reminds me of the all-too-true joke of what the PhD acronym really means – piled higher and deeper. We must not just be a society of knowledgeable people, like some sort of world academia.

An academic knowledge-world is certainly not what the original hackers, the Steves – Jobs and Wozniak – had in mind when they first envisioned a New Age of Personal Computers and created Apple to help make it happen. The Steves were among the original hackers who started this new age, but they were not alone. There were thousands of other computer enthusiasts like them, with similar backgrounds and a similar intent to change the world with computers.

The many trips Steve Jobs took, whether LSD-based, meditative, or physical, such as to India or Kyoto, allowed him to see a higher potential purpose for personal computers. That is what made him such a magnetic personality. That was part of his leadership aura. Steve Jobs’s intensity and vision were beyond what the trip-free Bill Gates could ever imagine. Steve was right at home among the Whole Earth computer tribes of high-tech hippies. They were the first generation of hackers, the ones who triggered the New Age of Personal Computing. I remember it all very well.

Steve Jobs knew the importance of wisdom first hand. He had probed his inner depths and come to terms with death and the meaning of life. He had channeled his fears of death into action and hard work. His intention, and that of Steve Wozniak, and many others all over the world, including me in my own little way (photo right, in 1975), was to promote technology, including personal computer use. We looked at it as a tool, what the Steves called a bicycle for the mind. In my case I used this new tool to create simple games and education programs for my kids and myself, and also for music, and eventually in the practice of law. As a young lawyer I should have been preparing for trial at night, but instead I stayed up late coding. We were all using computers for personal reasons, especially games and art. Only later did the business components begin to predominate.

The idea behind personal computers in the seventies and eighties was individual empowerment, individual creativity, so that people could be happy, and not just become rich, smart, efficient, and knowledgeable. Knowledge and efficiency alone were never the end-game of the majority Apple branch of the early computer users, the crazy ones, the misfits.

Quoting Steve Jobs’s well-known 2005 Stanford commencement speech:

Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life. Because almost everything – all external expectations, all pride, all fear of embarrassment or failure – these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.

The goal of any advanced civilization is Wisdom, not Knowledge. Living your life with awareness and understanding of your own mortality. Living your life with joy, with flow. Just ask Socrates, who boasted of knowing nothing (having transcended mere knowledge and passed into Wisdom), and the ancient Greeks. They still exemplify our understanding of high culture in the West. In the East, just ask Buddha or Lao Tzu. In fact, consult any of the great wisdom traditions, the great religions. It is not enough in any wisdom tradition to know; we must use this knowledge for both personal happiness and the advancement of all Mankind, indeed, for the benefit of all life on Earth.

That is Wisdom – knowledge converted to beneficial action. This is essentially the Zen philosophy of Steve Jobs and many others like him. It is not a vision of amassing knowledge, which is often just the dogma generated by another person. It is the wisdom to follow your own inner voice. Again to quote Jobs’s commencement speech at Stanford in 2005:

Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of other’s opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.


Steve Wozniak, the inspired engineer who helped Jobs make things happen, understood these wisdom lessons too. In fact, Woz reminds me of the laughing Buddha of Chinese tradition. As he stated in an NPR interview in 2006:

I’ve been having a lot of fun everyday. You know, pranks, jokes. But it actually started with a lifetime philosophy. When you’re about 20 years old, you kind of think out – I figured out that it was better – less good to be successful and better to have a laughing life, laugh more than you frown all through your life. Because on the day you die, which one would you have said had the happier life, the better life? And so I put a lot of humor in my life.


Wisdom in all true cultures, including legal culture, means the freedom to live your own life, to have a laughing life, to be happy, or at least try. As our American founders put it in the Declaration of Independence, it means to live in accord with certain self-evident truths:

[T]hat all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

Hope for the Future

Although these inalienable rights have been enshrined in our Constitution and other basic laws for over two hundred years, we are still far from attaining these goals. We are still far from the wisdom state our forefathers dreamed about. Still, in the U.S. and other civilized societies, we at least all agree on these goals. We agree on the principle of the common good, equal justice for all, and the desirability of happiness. We believe these are inalienable rights for all, even if we still lack the collective wisdom to live that way.

Our ideals are not our realities. Freedom, liberty, equality, and justice for all are still just goals. But we should not despair. We should keep the faith of the Founding Fathers of America and other wisdom ancestors everywhere. They were certainly the misfits, the rebels of their day. They were the ones crazy enough to think that individual liberties for all people were possible. The Founders of America saw things very differently from the British establishment of their day. They were also brave enough to take action, to be revolutionary.

Although we are still far from attaining their goals in the U.S., much less the rest of the world, we should not give up hope. Personal computers, and the incredibly fast changes we have already seen them bring, give us all hope.

The information society in which we now live came to pass in just twenty years after the launch of the first Apple computer. I recall having one of the first home computers in town (alas, not an Apple, but a TI-99/4A), then having one of the first computers on my desk at work (a first-issue IBM PC). Now every home and office has more computers than anyone can keep track of. We now carry around, and some even wear, far more computing power than anyone ever dreamed was possible in the seventies and eighties.

We have already transitioned into a global information society, such that all information is now just a Google away. Think how incredible that is. This fast change shows the power of personal computers. If the crazy ones, those who dare to try to put a dent in the Universe, just keep on working hard, then all things are possible. We can keep on changing the world. To quote the famous Apple “Think Different” ad, which Jobs helped create:

Here’s to the crazy ones, the misfits, the rebels, the troublemakers, the round pegs in the square holes…the ones who see things differently – they’re not fond of rules…You can quote them, disagree with them, glorify them or vilify them, but the only thing you can’t do is ignore them because they change things…they push the human race forward, and while some may see them as the crazy ones, we see genius, because the ones who are crazy enough to think that they can change the world are the ones that do.

Click here to see the whole Stanford speech.

Jobs and Wozniak saw technology as a way to wisdom, to happiness. They were hardly alone. There have been many, many true believers in the computer revolution and its impact on Art, Science, and Culture. They have been at the side of the Steves all along, both literally and figuratively. There were thousands, even in the early days. Then tens of thousands, then millions, and now most of the world are Apple consumers and at least wannabe different-thinkers.

Our generation, every one alive today, must continue to seize the moment. We must use the new computer technologies to escape the first stage information deluge. We must use the latest personal computer systems to create a new Knowledge Society, with the aim of launching a Wisdom World after that.

From Information to Knowledge

Personal computers transformed our society into an information culture in just a few years. The dominance of information as the key catalyst of our society was not a surprise, although the extent of the deluge was unexpected. American writer and futurist Alvin Toffler coined the phrase Information Overload in 1970 in his book Future Shock. He could see the problem coming. Toffler also predicted the need for lifelong learning, for knowledge, in order to cope with information overload.

The problem of too much information and not enough knowledge is a perennial one that humankind has faced for thousands of years. See, e.g., If You Can Know It All, How Come You Don’t?: Why the Internet provides a mountain of knowledge, but people only take a molehill (NY Times, March 20, 2015), where J. Peder Zane wrote:

At least since the heyday of ancient Greece and Rome, each generation has confronted the overwhelming struggle to search, sift and sort growing piles of information to make what is known useful. “Papyrus, print or petabyte — the history of feeling overwhelmed by information always seems to go back further than the latest technology,” said Seth Rudy, a professor of English literature at Rhodes College who explores this phenomenon in his new book, “Literature and Encyclopedism in Enlightenment Britain: The Pursuit of Complete Knowledge.” “The sense that there is too much to know has been felt for hundreds, even thousands, of years.”

Although too much information is a perennial problem, ours is the first generation where it has become the focal point of society. Ours is the first generation where unlimited information about everything is in the hands of everyone, not just a few isolated scholars. I believe that the current flood of information, where, as the NY Times article observed, more information is created every two days than had existed in the entire world from the dawn of time to 2003, has brought us to a tipping point. This flood, this shock, empowers us to take the next step; indeed, it forces us to do so.

Just as information volume quickly exploded into exabytes with the advent of personal computing power, the next stage of a knowledge society can come quickly too. I am optimistic about that. We can use the next generation of computers and other advanced technologies to make this transition before we get too addicted to mere information. We see early signs of this already with things like TED, the Khan Academy, and other free or nearly free online education; the new popularity of opinion and analysis blogs; thinking- and knowledge-oriented social media sites, such as Quora; and also with analytics, active machine learning, and other forms of artificial intelligence. Also see: 5 Signs That Science Is Taking Over the World, Huff Post, 03/03/2015.

The transition from information to knowledge is archetypal. So too is the transition from knowledge to wisdom. This is beautifully, albeit pessimistically, imagined by the great twentieth-century poet T.S. Eliot:

Opening Stanza from Choruses from “The Rock”

The Eagle soars in the summit of Heaven,
The Hunter with his dogs pursues his circuit.
O perpetual revolution of configured stars,
O perpetual recurrence of determined seasons,
O world of spring and autumn, birth and dying
The endless cycle of idea and action,
Endless invention, endless experiment,
Brings knowledge of motion, but not of stillness;
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word.
All our knowledge brings us nearer to our ignorance,
All our ignorance brings us nearer to death,
But nearness to death no nearer to GOD.
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from GOD and nearer to the Dust.

T. S. Eliot (1888-1965),
The Rock (1934)


Some Specific Predictions of How Society Will Evolve

This transition from information to knowledge is the next natural step. There seems to be widespread agreement on that. It is part of the natural learning process. But the time frame, and how we get there as a society, is the subject of much contention. On the time issue, some say it will never happen. That is too pessimistic in my view. Some say it will happen in the next year or two. That is too optimistic. Most do not put a time frame on it at all, seeing the safety in vagueness.

As to exactly how to progress as a society, information is even scarcer. I am building the bridge in this essay based on my own observations and reasoning. If others have opined on how they think it will happen, I have not read it. Truth be told, I have not really searched hard for such information either, as I wanted my thoughts to be my own, and not overly influenced by others. There may well be others who have predicted how such a transformation to a knowledge society may come about, but I am unaware of their predictions. It seems to me that such hypothesis-testing predictions are generally avoided, as all they can do is prove your analysis to be wrong. Vanity.

It is safer to avoid expressing your opinion on the two topics of when and how. Those predictions can be verified as right or wrong after the fact, unless, that is, your prediction date is beyond your probable remaining life-span. By then no one will remember whether you were wrong (or right) anyway. If you must put your analysis to the test and make predictions to see if they come true, then the smart thing is to make the predictions general. The vaguer the predictions, the more likely that one or more will seem to come true, thus proving your analysis. And if they do not come true, at least they will not be so wrong as to make it obvious that your reasoning was incorrect.

Never fear, I will not play it safe. Better to have the courage of your convictions and adopt a scientific approach. I will now make several very specific predictions as to how we will transition from an information society to a knowledge society. I will also make a prediction as to when. These predictions will serve as a test of my current theories and hypotheses. If my predictions come true, then this will be evidence of the accuracy of my analysis. If not, well, better to think again and adjust the hypotheses based on the observations.

Steve Jobs said you can only connect the dots …. in your life by looking backwards, that you cannot predict an individual’s destiny with any certainty. I agree this is somewhat true when it comes to individuals, but not necessarily true at all when dealing with large groups of people. There I think you can connect the dots …. going forward. You can forecast the immediate future based on analysis and logical projections of current trends.

My Predictions as of April 5, 2015.

As to when society will transition from information to knowledge, I predict 5-20 years, which is very optimistic.

As to how this change will take place, I will go out on a limb and make several specific predictions. The predictions are based on my analysis and estimates of certain probable events. In five to twenty years my analysis will either be vindicated, or shown to be embarrassingly wrong. Oh well. It is worth the risk because there is a chance that by making these positive predictions I will trigger a self-fulfilling prophecy. Maybe the alleged future vision will be the observation that makes Schrödinger’s Cat live.

Here are twelve future predictions, followed by a few more general trend-analysis projections concerning crowdsourcing and crowd wisdom. All seem to have an AI element and multiple kinds of ideal computer administrators. The AI admins protect our human interests in the cyber world and facilitate the transition to the Knowledge Age. I was a little surprised by this, but that is where my thinking led.

  1. Several inventions, primarily insanely great new computer hardware and software, will allow for the creation of many new types of cyber and physical interconnectivity environments. There will be many more places that will help people to go beyond information to knowledge. They will be both virtual realities, for you or your avatars to hang out in, and real-world meeting places for you and your friends to go to. They will not be all fun and games (and sex), although that will be a part of it. Many will focus exclusively on learning and knowledge. The new multidimensional, holographic, 3D virtual realities will use wearables of all kinds, including Oculus-like glasses, iWatches, and the like. Implant technology will also arise, including some brain implants, and may even be common in twenty years. Many of the environments, both real and VR, will take education and knowledge to a new level. Total immersion in a learning environment will take on new meaning. The TED of the future will be totally mind-blowing.
  2. Some of the new types of social media sites will be environments where subject matter experts (SMEs) are featured, avatars and real people, cyber and in-person, time-shifted and real-time. There will also be links to other sites or rooms that are primarily information sources.
  3. The new SME environment will include products and services, with both free and billed aspects. 
  4. The knowledge nest environments will be both online and in-person. The real life, real world, interactions will be in safe public environments with direct connections with cyberspaces. It will be like stepping out of your computer into a Starbucks or laid-back health spa.
  5. The knowledge-focused cyberspaces, both those with and without actual real-world SMEs, will look and feel something like a good social media site of today, but with multimedia of various kinds. Some will have Oculus-type VR enhancements like the Star Trek holodeck. All will have system administrators and other staff who are tireless, knowledgeable, and fair; but most will not be human.
  6. The admins, operators, and other staff in these cyberspaces will be advanced AIs, like cyber-robots. Humans will still be involved too, but will delegate where appropriate, which will be most of the time. This is one of my key predictions.
  7. The presence of AIs will spread and become ubiquitous. They will be a key part of the IOT – Internet of Things. Even your refrigerator will have an AI, one that you program to fit your current dietary mood and supply orientation.
  8. The knowledge products and services will come in a number of different forms, many of which do not exist today, but will be made possible by other new inventions, especially in the areas of communications, medical implants, brain-mind research, wearables, and multidimensional video games and conferences.
  9. All subject areas will be covered, somewhat like Wikipedia, but with super-intelligent cyber-robots to test, validate, and edit each area. The AI robots will serve most of the administrator and other cyber-staffing functions, but not all.
  10. The AI admins will monitor, analyze, and screen out alleged SMEs who do not meet certain quality standards. The AI admins will thus serve as a truth screen and quality assurance. An SME’s continued participation in an AI-certified site will be like a Good Housekeeping Seal of Approval.
  11. The AI admins will also monitor and police the SME services and opinions for fraud and other unacceptable use, and for general cybersecurity. The friendly management AIs will even be involved in system design, billing, collection, and dispute resolution.
  12. Environments hosted by such friendly, fair, patient, sometimes funny, polite (per your specified level, which may include insult mode), high-IQ intelligences, both human and robot, will be generally considered reliable, bona fide, effective, safe, fun, enriching, and beautiful. They will provide a comforting alternative to information-overload environments filled with conflicting information, including its lowest form, data. These alternative knowledge nests will become a refuge of music in a sea of noise. Some will become next-generation Disney World vacation paradises.

In addition to this vision of SME knowledge nests, I can reasonably foresee the development of many other types of safe, AI-hosted enterprises. Some important ones will focus on the wisdom of the crowd, some on crowdsourcing, and some on both. Others will focus on other things entirely, including schools and crafts, shopping, charities, recreations, hobbies, games, travel, business lines, legal disciplines, special interests, products, service lines, political parties, policy focus groups, governments, police, health, hospitals, rehabilitation, sex, drugs, and other quasi-legal activities.

For some early examples of this already, see Craigslist, your favorite forum or Internet Relay Chat (IRC) channel, Facebook pages, Twitter feeds, YouTube channels, interactive blogs, and the like. Think what they would be like with super-smart admins, operators, and other staff, and the best content in the world. When this happens, new types of interest groups and social collectives will arise that are based on other new inventions. They will accelerate the development of many new kinds of social groups and communication methods. Many will easily spill over into the real world. They will first take the form of restaurants, shops, cafes, and spas, and then later branch out into other stores and offices. The group interactions of all types could quickly take us to a knowledge society.

This prediction on crowdsourcing and crowd wisdom is to a large extent based on Steve Jobs’s favorite guide, intuition. So I cannot be very specific. But my sense is that it will all again tie into AI. When these new kinds of clubs or environments, or whatever the next-generation social media sites end up being called, reach a certain threshold of participation, something new and unexpected will emerge. It will arise from these new collective groups and the Big Data of knowledge they generate. We are just beginning to tap the enormous potential of the wisdom of the crowd, and the power, both economic and political, of crowdsourcing. The AIs could make them happen in a big way. The impact on politics and policy could be especially powerful.

Many Have Already Transitioned from Information to Knowledge

Media seems to be the chief villain of the superficial information society, but it can quickly change as people change. Say what you will about Twitter, blogs, and modern media; the attention spans of many are, nevertheless, lengthening. Almost all my tweets, and those of many others, link to longer articles. I am reading more books than ever, on a Kindle, with millions of books just seconds away from search and download. There are many like me who are now focused primarily on knowledge, not information. More and more of us are taking the time to think things through for ourselves.

Many of us are following the natural learning stages and moving from information to knowledge. We are not doing so by simply piling-on more and more information, or by researching endless facts. We do so by trying to understand what all the data means, by looking for the hidden patterns and structure underlying the mountains of data.

The new monthly version of this blog, indeed, this very essay, is another example of the transition. If you want a blog that focuses on e-Discovery information, go to a good one like K&L Gates’ eDiscoveryLaw.com. It has the latest news of case law and other current events, which is the way this blog used to be when it was weekly. Now at the e-Discovery Team you will find analysis, not news. You will find how I make sense of all of this information.

Here is a chart I prepared with some examples of the differences as we progress to version 2.0 of the Computer Age. It shows a transition from active, independent search to responsive, collective judgment.

[Chart: Information Age versus Knowledge Age]

I Do Not See a Way at This Time to Transition to a Third Stage Wisdom Society

When, as a new global society, we make this collective transition from information to knowledge, we will, I trust, at that point see a clear way to the final destination of wisdom. At the current time, I do not see any way, clear or murky, to collective wisdom. We are still far from a wisdom-based society. It will, I admit, take decades of knowledge building for a wisdom-based society to be possible. Such a society may never be attained, but my intuition tells me that it will be. It depends in large part on the whole earth having the right computer tools.


In a wisdom-based society people will not only be rational, but also intuitive and empathic. People will not be imprisoned by their own thinking, although this fate does entrap many who focus solely on information and knowledge. Despite the Star Trek myth of Spock, you can have both logic and emotion, and many other traits as well, including intuition, spirituality, humor, aesthetics, and a sense of beauty and design.

The common law legal tradition that is my professional home embraces such a holistic model. You might think of it as rational and knowledge-based only, but it is not. It is based on both law (the rational part) and equity. It is based on fundamental human perceptions and feelings of fairness and justice. To see this you have only to look at the Declaration of Independence, the Constitution, and the Bill of Rights. Call us lawyers dreamers if you wish, but that is the higher calling of all people – Truth and Justice for all, just as our fundamental legal documents declare is our inalienable right.

Although I do not see a way to a wisdom society in our times, I do know one thing – information alone will not get us there. That much is obvious from the superficial, short-attention-span world in which we live. Also, a society, like a person, cannot just leapfrog from information to wisdom. There has to be an intermediary step. A knowledge-based society must come next, and then we will know what to do.


Although We Cannot Get to a Wisdom Based Society Now, We Can Get to a Society Based on Knowledge

The transition from information to knowledge is a big step, but it is attainable. The transition of society from widespread ignorance and information shortages to an over-abundance of information came very fast. This gives us hope that the next step, to knowledge, can come quickly too. It could happen in the next five to twenty years, maybe less. I hope to live long enough to see that, and will take whatever action I can to make it so.

The little slice of the pie that I see now is that a knowledge-based society can come as we develop new technologies to analyze the information generated, to process it into knowledge, and to teach on a deep level. It can happen once large numbers of people are not just informed, but knowledgeable. I think this is a natural progression. People very quickly get sick of a constant flood of too much information. There is a natural yearning for real knowledge. There is an innate human desire to make sense of the world, to understand. It is, after all, a basic survival skill. This transition from information to knowledge can happen as education grows. It will very soon become cool to really know and understand, not just spout off the latest fad facts.

The Next Generations Will Figure Out a Way

This next stage knowledge-based society will have a good chance of taking us to the end game, the wise society. Surely the next level society will know how to do it. They will know enough not to get trapped into mere knowledge, just as we are currently informed enough not to get trapped into mere information. Eventually the goals and wise dreams of the Declaration of Independence will be attained. There is a way. It is built on embracing technology, not opposing it, but being very careful what you kiss.

I do not delude myself into thinking that I will live to see that day. The day when large numbers of people are not just knowledgeable, but wise. The day when society treasures wisdom above all else. The day when this collective wisdom finally allows our Founding Fathers’ Declaration of Independence dreams to come true.

Although I do not think I will live that long, I do have faith and confidence that the next generations of computer-born people will find a way. Perhaps our children, or children’s children, will live to see that day. They will not get stuck with knowledge alone. They will hold true to the dreams of our ancestors. The future generations will find a way to transition from a mere knowledge-based society to one where all of Mankind can enjoy freedom, equality and liberty, not just a privileged few.

Transition Beyond an Information Society is a Survival Imperative

This journey, this progress of our technology culture, is not an idle dream. It is a survival imperative. Information alone, unprocessed, and not yet converted to knowledge, is dangerous. I imagine that some planets in this enormous Universe of ours get stuck and never make it to the next step. These other worlds destroy themselves with too much information and not enough knowledge. They self-destruct in various new technology scenarios, from nuclear holocaust, to climate destruction, to Big Brother dictatorships, to self-obsessed, stagnating, shallow, greedy, short attention span news-junkie people. All of these cultural disasters could well await our own planet.

There are so many ways that a culture based on Information, not Knowledge, can go wrong and either destroy itself, or stagnate, and never make it to the end game of freedom and justice for all. The transformation from an Information society to a Knowledge society must happen quickly if we are to survive and prosper.

Personal computers got us into the Information society and out of the past age of ignorance. They connected the world and brought down many barriers. There has been tremendous progress. But even with all of the technology now at hand, we are still in a very precarious position. We must continue to invent and advance our technologies to transition to knowledge. Our answer lies in part in analytics, in processing information so that we discern the underlying patterns, so that we can understand what all of the information means. The computer technologies that advance such analytics are part of our way forward. Companies that develop and sell such products and services would be good investment opportunities.

It is not enough, for instance, to have heard of global warming, to have seen the information. This information must be understood. We as a society must know what it means. That is the difference between information and knowledge. Everyone today in our Information-based society has at least heard of global warming, but, for most, it is just information. It is remote, abstract, just redundant information. We do not really know what it means. That is consistent with an information-based culture. It demonstrates the danger of groups having advanced technologies without knowledge of how to use them.

Information alone does not drive corrective action or hard choices. Information may be right, it may be wrong. It seldom drives actions. But once we know and understand, then action becomes easy, becomes natural. We seem to be hard-wired that way as humans.

It is one thing to read a news report that a tiger may be around, to be informed of the tiger spotting. It is quite another to know that a tiger is nearby, to hear it, to see the tiger for yourself. That instantly puts you in fight or flight mode. If you are connected to a group, and they all scream a tiger warning, then you would move. But if you just hear the news, just receive abstract information, you may just say to yourself, hmm interesting, tigers are around you say, and then go back to your daily chores. The next minute, when you are off-guard, you are eaten by a tiger.

Conclusion

We have to know to act, and so we need to go beyond an information society, and we have to do it fast. If we do not, the dark side of technology could soon overwhelm us. Stop just reading. Stop just being informed. It is not enough. Think. Process. Analyze. Cross-check. Verify. Take action. Create. Share. Teach. Work as a team.

Let us all work together to take our computer based culture to the next stage of social development, the knowledge stage. Stay focused on knowledge, not information. Of course, stay informed too. I am not saying to wall yourself off and stop taking in new information so that you can just study and think. I am saying not to focus solely on information. I am saying to balance your information input with your internal processing.

Never be satisfied with just being informed, push yourself to become knowledgeable. Go to the next step to thoroughly process and analyze. Invest both your time and money on technology that will help you to transform information into knowledge.

Do not fear the new analytics and AI, ride these new technologies to gain real knowledge. For instance, if you are a lawyer who needs to find evidence, do not just read about predictive coding. Do it. Action and testing are the way to personal understanding. Become knowledgable about what is important to you, not just informed.

The new analytic inventions, and others that allow for knowledge, not just information, can be our Ark. They can allow us to survive the flood of information and arrive safely on the other side. They can lead to a more mature society based on knowledge. From the new world of global knowledge, another path will surely appear, one leading to Wisdom. Our children, or children’s children, may then finally attain a global society based on wisdom, on truth, liberty and justice for all. We may not live to see it, but to try anyway, to care, is an important part of what makes us human.

 

[A PDF version of this essay is found here and may be freely distributed for any non-profit purpose so long as no changes are made.]


My Hack of the NSA and Discovery of a Heretofore Unknown Plan to Use Teams of AI-Enhanced Lawyers and Search Experts to Find Critical Evidence

March 1, 2015

Now that my blog has changed from weekly to monthly I have more time for my hobbies, like trying to hack into NSA computers. I made a breakthrough with that recently, thanks primarily to exuberant disclosures by Snowden after the Oscars. I was able to get into one of the NSA’s top-secret systems. Not only that, my hack led to discovery of a covert operation that will blow your mind. (Hey, if the NSA can brag about their exploits, then so can I.) And if that were not enough, I was able to get away with downloading two documents from their system. I will share what I borrowed with you here (and, of course, on Wikileaks). The documents are:

  • A previously unknown Plan to use sophisticated e-Discovery Teams with AI-enhancements to find evidence for use in investigations and courtrooms around the world.
  • A slide show in movie and PDF form that tells you how these teams operate.

I can disclose my findings and stolen documents here without fear of becoming Citizen Five because what I found out is so incredible that the NSA will disavow all knowledge. They will be forced to claim that I made up the whole story. Besides, I am not going to explain how I hacked the NSA. Moreover, unlike some weasels, I will never knowingly give aid and comfort to foreign governments. This is something many Hollywood types and script kiddies fail to grasp. All I will say is that I discovered a critical zero-day type error in two lines of code, out of billions, in a software program used by the NSA. In accord with standard white hat protocol, if the NSA admits my story here is true, I will tell them the error. Otherwise, I am keeping this code mistake secret.

The hack allowed me to access a Top Secret project code-named Gibson. It is a Cyberspace Time Machine. This heretofore secret device allows you to travel in time, but, here’s the catch, only on the Internet. Since it is an Internet-based device the NSA has to keep it plugged in. That is why I was not faced with the nearly insoluble air gap defense protecting the NSA’s other computer systems.

From what I have been able to figure out, the time travel takes place on a subatomic cyber-level and requires access to the Hadron Collider. The Gibson somehow uses entangled electrons, Higgs bosons, and quantum flux probability. The new technology is based on Hawking’s latest theories, the speed of light, gravity, quantum computers, and, can you believe it, imaginary numbers, you know, the square root of negative numbers. It all seems so obvious after you read the NSA executive summary, that other groups with Hadron Collider access and quantum computers are likely to come up with the same invention soon. But for now the NSA has a huge advantage and head start. Maybe someday they will even share some of that info with POTUS.


The NSA Internet Time Machine allows you to peer into the past content of the Internet, which, I know, is not all that new or exciting. But, here is the really cool part that makes this invention truly disruptive, you can also look into the future. With the Gibson and special web browsers you can travel to and capture future webpages and content that have not been created yet, at least not in our time. You can Google the future! Just think of the possibilities. No wonder the NSA never has any funding problems.

This kind of breakthrough invention is so huge, and so incredible, that NSA must deny all knowledge. If people discover this is even possible, other groups will race to catch up and build their own Internet Time Machines. That is probably why Apple is hoarding so much cash. Will there be a secret collider built off the books under their new headquarters? It kind of looks like it. Google is probably working on this too. The government cannot risk anyone else knowing about this discovery. That would encourage a dangerous time machine race that would make the nuclear race look like child’s play. Can you imagine what Iran would do with information from the future? The government simply cannot allow that to happen.

For that reason alone my hack and disclosures are untouchable. The NSA cannot admit this is true, or even might be true. Besides, having seen the future, I already know that I will not be prosecuted for these intrusions. In fact, no one but a few hard-core e-Discovery Team players will even believe this story. I can also share the information I have stolen from the future without fear of CFAA prosecution. Technically speaking my unauthorized access of web pages in the future has not happened yet. Despite my PreCrime-like proposals in PreSuit.com, you cannot (yet) be prosecuted for future crimes. You can probably be fired for what you may do, but that is another story.

Still, the hack itself is not really what is important here, not even the existence of the NSA’s Time Machine, as great as that is. The two documents that I brought back from the future are what really matters. That is the real point of this blog, just in case you were wondering. I have been able to locate and download from the future Internet a detailed outline of a Plan for AI-Enhanced search and review.

The Plan is apparently in common use by future lawyers. I am not sure of the document’s exact date, but it looks like circa 2025. It is obviously from the future, as nobody has any plans like this now. I also found a video and PDF of a PowerPoint of some kind. It shows how lawyers and other investigators in the future use artificial intelligence to enhance all kinds of ESI search projects, including overt litigation and covert investigations. It appears to be a detailed presentation of how to use what is still called Predictive Coding. (Well, at least they do not call it TAR anymore.) Nobody in our time has seen this presentation yet. I am sure of that. You will have the first glimpse now.

The Plan for AI-Enhanced search and review is in the form of a detailed 1,500-word outline. It looks like this Plan is commonly used in the future to obtain client and insurer approval of e-discovery review projects. I think that this review Plan of the future is part of a standardized approval process that is eventually set up for client protection. Obviously we have nothing like that now. The plan might even be shared with opposing counsel and the courts, but I cannot be sure of that. I had to make a quick exit from the NSA system before my intrusion was detected.

I include a full copy of this Plan below, and the PowerPoint slides in video form. See if these documents are comprehensible to you. If my blog is brought down by denial of service attacks, you can also find it on Wikileaks servers around the world. The Plan can also be found here as a standalone document, and the PDF of the slides can be found here. I hope that this disclosure is not too disruptive to existing time lines, but, from what I have seen of the future of law, temporal paradox be damned, some disruption is needed!

Although I had to make a quick exit, I did leave a back door. I can seize root of the NSA Gibson Cyberspace Time Machine anytime I want. I may share more of what I find in upcoming monthly blogs. It is futuristic, but as part of the remaining elite who still follow this blog, I’m sure you will be able to understand. I may even start incorporating this information into my legal practice, consults, and training. You’ll read about it in the future. I know. I’ve been there.

If you have any suggestions on this hacking endeavor, or the below Plan, send me an encrypted email. But please only use this secure email address: HackerLaw@HushMail.com. Otherwise the NSA is likely to read it, and you may not enjoy the same level of journalistic sci-fi protection that I do.

_______________

Outline of 12-Step Plan for Predictive Coding Review

1. Basic Numerics of the Project

a. Number and type of documents to be reviewed

b. Time to complete review

c. Software to be used for review

(1) Active Machine Learning features

(A) General description

(B) Document ranking system (i.e., Kroll ranks documents by percentage probability, 0.01%–99.9%)

(2) Vendor expert assistance to be provided

d. Budget Range (supported by separate document with detailed estimates and projections)

2. Basic Goals of the Project, including analysis of impact of Proportionality Doctrine and Document Ranking. Here are some possible examples:

a. High recall and production of responsive documents within budget proportionality constraints and time limits.

b. Top 25% probable relevant, and all probable (50%+) highly relevant is a metric goal proportional and reasonable in this particular case for this kind of ESI. (Note – these numbers are often used in high-end, large scale projects where there is a premium on quality.)

c. All probable relevant and highly relevant within a specified range or set of ranges.

d. Zero Errors in document review screening for attorney-client privileged communications.

e. Evaluation of large production received by client.

f. Time sensitive preparations for specific hearings, mediation, depositions, or 3rd party subpoenas.

g. Private internal corporate investigations as part of quality control, business information, compliance and dispute avoidance.

h. Compliance with government requests for information, state criminal investigations and private civil litigation.

3. General Cooperation Strategy

a. Disclosures planned

(1) Transparent

(2) Translucent

(3) Brick Wall

b. Treatment of Irrelevant Documents

c. Relevancy Discussions

d. Sedona Principle Six

4. Team Members for Project

a. Predictive Coding Chief. Experienced searcher in charge of the Predictive Coding aspects of the document review

1. Experienced ESI Searcher

2. Same person in charge of non-PC aspects, if not, explain

3. Authority and Responsibilities

4. List qualifications and experience

b. Subject Matter Experts (SME)

(1) Senior SME

A. Final Decision Maker – usually partner in charge of case

B. Determines what is relevant or responsive

(i) Based on experience with the type of case at issue

(ii) Predicts how judge will rule on relevance and production issues

C. Formulates specific rules when faced with particular document types

D. Controls communications with requesting party’s senior counsel (usually)

E. List qualifications and experience

(2) Junior SME(s)

A. Lead Document Review expert(s)

B. Usually Sr. Associate working directly with partner in charge

C. Seeks input from final decision maker on grey area documents (Undetermined Category)

D. Responsible for Relevancy Rule articulations and communications

E. List qualifications and experience

(3) Amount of estimated time in budget for the work by Sr and Jr SMEs.

A. Assurances of adequate time commitments, availability

B. Reference time estimates in budget

C. Time should exclude training

(4) Response times guaranties to questions, requests from Predictive Coding Chief

c. Vendor Personnel

(1) Anticipated roles

(2) List qualifications and experience

d. Power Users of particular software and predictive coding features to be used

(1) Law Firm and Vendor

(2) List qualifications and experience

e. Outside Consultants or other experts

(1) Anticipated roles

(2) List qualifications and experience

f. Contract Lawyers

(1) Price list for reviewers and reviewer management

A. $500-$750 per hr is typical (Editors Note: Is this widespread inflation, or new respect?)

B. Competing bids requested? Why or why not.

(2) Conflict check procedures

(3) Licensed attorneys only or paralegals also

(4) Size of team planned

A. Rationale for more than 5 contract reviewers

B. “Less is More” plan

(5) Contract Reviewer Selection criteria

g. Plan to properly train and supervise contract lawyers

5. One or Two-Pass Review

a. Two pass is standard, with first pass selecting relevance and privilege using Predictive Coding, and second pass by reviewers with eyes-on review to confirm relevance prediction and code for confidentiality, and create priv log.

b. If one pass proposed (aka Quick Peek), has client approved risks of inadvertent disclosures after written notice of these risks?

6. Clawback and Confidentiality agreements and orders

a. Rule 502(d) Order

b. Confidentiality Agreement: Confidential, AEO, Redactions

c. Privilege and Logging

(1) Contract lawyers

(2) Automated prep

7. Categories for Review Coding and Training

a. Irrelevant – this should be a training category

b. Relevant – this should be a training category

(1) Relevance Manual for contract lawyers (see form)

(2) Email family relevance rules

A. Parents automatically relevant if child (attachment) is?

B. Attachments automatically relevant if email is?

C. All attachments automatically relevant if one attachment is?

c. Highly Relevant – this should be a training category

d. Undetermined – temporary until final adjudication

e. No or Very Few Sub-Issues of Relevant, usually just Highly Relevant

f. Privilege – this should be a training category

g. Confidential

(1) AEO

(2) Redaction Required

(3) Redaction Completed

h. Second Pass Completed

8. Search Methods to find documents for training and production

a. ID persons responsible and qualifications

b. Methods to cull out documents before Predictive Coding training begins to avoid selection of inappropriate documents for training and to improve efficiency

(1) E.g. – any non-text document; overly long documents

(2) Plan to review by alternate methods

(3) ID general methods for this first stage culling; both legal and technical

c. ID general methods for Predictive Coding, i.e. – machine selected only, or multimodal

d. Describe machine selection methods.

(1) Random – should be used sparingly, and never as sole method

(2) Uncertainty – documents that machine is currently unsure of ranking, usually in 40%-60% range

(3) High Probability – documents as yet un-coded that machine considers likely relevant

(4) All or some of the above in combination

e. Describe other human-based multimodal methods

(1) Expert manual

(2) Parametric Boolean Keyword

(3) Similarity and Near Duplication

(4) Concept Search (passive machine learning, such as latent semantic indexing)

(5) Various Ranking methods based on probability strata selected by expert in charge

f. Describe whether a Continuous Active Learning (CAL) process for review will be used, or a two-stage process (train, then review), and if the latter, rationale

9. Describe Quality Control procedures, including, where applicable, any features built into the software, to accomplish following QC goals

a. Three areas of focus to maximize quality of predictive coding

(1) Quality of the AI trainers work to select documents for instruction in the active machine learning process

(2) Quality of the SME work to properly classify documents, especially Highly Relevant and grey area documents, in accord with true probative value and court opinions

(3) Quality of the software algorithms that apply the training input to create a mathematical model that accurately separates the document cloud into probability polar groupings

b. Supervise all reviewers, including contract reviewers who usually do the bulk of the document review work.

(1) ID persons responsible

(2) ID general methods

c. Avoid incorrect conceptions and understanding of relevance and responsiveness, i.e. – what are you searching for and what will you produce?

(1) Target matches legal obligations

(2) Relevance scope dialogues with requesting party

(3) 26(f) conferences and 16(b) hearings

(4) Motion practice with Court for early resolution of disputes

(5) ID persons responsible

d. Minimize human errors in document coding. Zero Error Numerics.

(1) Mistakes in relevance rule applications to particular documents

(2) Physical mistakes in clicking wrong code buttons

(3) Inconsistencies in coding of same or similar documents

(4) Inconsistencies in coding of same or similar document types

(5) ID persons responsible

e. Facilitate horizontal and vertical communications in team

(1) ID persons responsible

(2) ID general methods

f. Corrections for Concept Drift inherent in any large review project where understanding of relevance changes over time

(1) ID persons responsible

(2) ID general methods

g. Detection of inconsistencies between predictive document ranking and coding

(1) ID persons responsible

(2) ID general methods

h. Avoid incomplete, inadequate selection of documents for training

(1) ID persons responsible

(2) ID general methods

i. Avoid premature termination of training

(1) ID persons responsible

(2) ID general methods

j. Avoid omission of any Highly Relevant documents, or new types of strong relevant documents

(1) ID persons responsible

(2) ID general methods

k. Avoid inadvertent production of privileged documents

(1) List of attorneys names and email domains

(2) Active multimodal search supplement to predictive coding

(3) Dual pass review

(4) ID persons responsible

(5) ID general methods

l. Avoid inadvertent production of confidential documents without proper labeling and redactions

(1) ID persons responsible

(2) ID general methods

m. Avoid incomplete, inaccurate privilege logs

(1) ID persons responsible

(2) ID general methods

n. Avoid errors in final media production to requesting party

(1) ID persons responsible

(2) ID general methods

10. Decision to Stop Training for Predictive Coding

a. ID persons responsible

b. Criteria to make the decision

(1) Probability distribution

(2) Separation of documents into two poles

(3) Ideal of upside down champagne glass visualization

(4) Few new relevant documents found in last rounds of training

(5) Few new strong relevant types found

(6) No new Highly Relevant documents found

11. Quality Assurance Procedures to Validate Reasonability of Decision to Stop

a. Random Sample Tests to validate the decision

(1) ei-Recall method used, if not, describe

(2) Accept on zero error for any Highly Relevant found in elusion test, or new strong relevant type

(3) Recall and Precision goals

b. Judgmental sampling

12. Procedures to Document the Work Performed and Reasonability of Efforts

a. Clear identification of efforts on the review platform itself with screen shots before project closure

b. Memorandums to file or opposing counsel

(1) Basic metrics for possible disclosure

(2) Detail for internal use only and possible testimony

c. Availability of expert testimony if court challenges arise
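The random sample validation called for in step 11 boils down to a short calculation. The sketch below, with hypothetical numbers and function names, shows only the simple point estimate behind an elusion-based recall test; the actual ei-Recall method computes an interval estimate using binomial confidence bounds, which this omits:

```python
# Simplified point-estimate sketch of an elusion-based recall test such as
# ei-Recall. All figures below are hypothetical, for illustration only.

def estimated_recall(true_positives_found, null_set_size,
                     sample_size, relevant_in_sample):
    # Project the elusion sample's relevance rate across the whole null set
    # (the culled, unreviewed documents) to estimate false negatives.
    est_false_negatives = null_set_size * (relevant_in_sample / sample_size)
    return true_positives_found / (true_positives_found + est_false_negatives)

# Example: 9,000 relevant documents found and produced; 100,000 documents
# culled; a 1,534-document random sample of the culled set finds 15 relevant.
r = estimated_recall(9000, 100000, 1534, 15)
print(round(r, 3))  # approximately 0.902, i.e. about 90% estimated recall
```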

________________

_______

What follows is another file I stole from the NSA, a video of PowerPoint slides (no voiceover) for a future presentation called:

Predictive Coding: An Introduction and Real World Example.

The PDF of the slides can be found here.

____

____________




Two-Filter Document Culling – Part Two

February 1, 2015

Please read Part One of this article first.

Second Filter – Predictive Culling and Coding

The second filter begins where the first leaves off. The ESI has already been purged of unwanted custodians, date ranges, spam, and other obvious irrelevant files and file types. Think of the First Filter as a rough, coarse filter, and the Second Filter as fine-grained. The Second Filter requires a much deeper dive into file contents to cull out irrelevance. The most effective way to do that is to use predictive coding, by which I mean active machine learning, supplemented somewhat by using a variety of methods to find good training documents. That is what I call a multimodal approach that places primary reliance on the Artificial Intelligence at the top of the search pyramid. If you do not have an active machine learning type of predictive coding with ranking abilities, you can still do fine-grained Second Level filtering, but it will be harder, and probably less effective and more expensive.
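In concrete terms, the fine-grained cull amounts to splitting the post-First-Filter pool by the probability rank the trained model assigns each document. A minimal sketch (hypothetical names and cutoff, not any vendor's actual interface):

```python
# Illustrative Second Filter cull: after active machine learning training,
# every document carries a probability-of-relevance rank. Documents below
# the cutoff are culled out of attorney review; those above go on to review.

def second_filter(ranked_docs, review_cutoff=0.50):
    """Split (doc, probability) pairs into review and cull piles."""
    review = [doc for doc, p in ranked_docs if p >= review_cutoff]
    cull = [doc for doc, p in ranked_docs if p < review_cutoff]
    return review, cull

pool = [("memo.txt", 0.91), ("spam.eml", 0.03), ("draft.doc", 0.62)]
review, cull = second_filter(pool)
print(review)  # ['memo.txt', 'draft.doc']
print(cull)    # ['spam.eml']
```

In practice the cutoff is a proportionality judgment, not a fixed number; the point is only that ranking turns the cull into a defensible, tunable decision.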

Multimodal Search Pyramid

All kinds of Second Filter search methods should be used to find highly relevant and relevant documents for AI training. Stay away from any process that uses just one search method, even if the one method is predictive ranking. Stay far away if the one method is rolling dice. Relying on random chance alone has been proven to be an inefficient and ineffective way to select training documents. Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training – Part One, Two, Three and Four. No one should be surprised by that.

The first round of training begins with the documents reviewed and coded relevant incidental to the First Filter coding. You may also want to defer the first round until you have done more active searches for relevant and highly relevant documents from the pool remaining after First Filter culling. In that case you also include irrelevant documents in the first training round, which is also important. Note that even though the first round of training is the only round of training that has a special name – seed set – there is nothing all that important or special about it. All rounds of training are important.

There is so much misunderstanding about that, and seed sets, that I no longer like to even use the term. The only thing special in my mind about the first round of training is that it is often a very large training set. That happens when the First Filter turns up a large amount of relevant files, or they are otherwise known and coded before the Second Filter training begins. The sheer volume of training documents in many first rounds thus makes it special, not the fact that it came first.

No good predictive coding software is going to give special significance to a training document just because it came first in time. The software I use has no trouble at all disregarding any early training if it later finds that it is inconsistent with the total training input. It is, admittedly, somewhat aggravating to have a machine tell you that your earlier coding was wrong. But I would rather have an emotionless machine tell me that, than another gloating attorney (or judge), especially when the computer is correct, which is often (not always) the case.

That is, after all, the whole point of using good software with artificial intelligence. You do that to enhance your own abilities. There is no way I could attain the level of recall I have been able to manage lately in large document review projects by reliance on my own, limited intelligence alone. That is another one of my search and review secrets. Get help from a higher intelligence, even if you have to create it yourself by following proper training protocols.

Maybe someday the AI will come prepackaged, and not require training, as I imagine in PreSuit. I know it can be done. I can do it with existing commercial software. But apparently from the lack of demand I have seen in reaction to my offer of PreSuit as a legal service, the world is not ready to go there yet. I for one do not intend to push for PreSuit, at least not until the privacy aspects of information governance are worked out. Should Lawyers Be Big Data Cops?

Information governance in general is something that concerns me, and is another reason I hold back on PreSuit. Hadoop, Data Lakes, Predictive Analytics and the Ultimate Demise of Information Governance, Part One and Part Two. Also see: e-Discovery Industry Reaction to Microsoft’s Offer to Purchase Equivio for $200 Million, Part Two. I do not want my information governed, even assuming that’s possible. I want it secured, protected, and findable, but only by me, unless I give my express written assent (no contracts of adhesion permitted). By the way, even though I am cautious, I see no problem in requiring that consent as a condition of employment, so long as it is reasonable in scope and limited to only business communications.

I am wary of Big Brother emerging from Big Data. You should be too. I want AIs under our own individual control where they each have a real big off switch. That is the way it is now with legal search and I want it to stay that way. I want the AIs to remain under my control, not vice versa. Not only that, like all Europeans, I want a right to be forgotten by AIs and humans alike.

But wait, there’s still more to my vision of a free future, one where the ideals of America triumph. I want AIs smart enough to protect individuals from out of control governments, for instance, from any government, including the Obama administration, that ignores the Constitutional prohibition against General Warrants. See: Fourth Amendment to the U.S. Constitution. Now that Judge Facciola has retired, who on the DC bench is brave enough to protect us? See: Judge John Facciola Exposes Justice Department’s Unconstitutional Search and Seizure of Personal Email.

Perhaps quantum entanglement encryption is the ultimate solution? See, e.g.: Entangled Photons on Silicon Chip: Secure Communications & Ultrafast Computers, The Hacker News, 1/27/15. Truth is far stranger than fiction. Quantum Physics may seem irrational, but it has been repeatedly proven true. The fact that it may seem irrational for two electrons to interact instantly over any distance just means that our sense of reason is not keeping up. There may soon be spooky ways for private communications to be forever private.


At the same time that I want unentangled freedom and privacy, I want a government that can protect us from crooks, crazies, foreign governments, and black hats. I just do not want to give up my Constitutional rights to receive that protection. We should not have to trade privacy for security. Once we lay down our Constitutional rights in the name of security, the terrorists have already won. Why do we not have people in the Justice Department clear-headed enough to see that?

Getting back to legal search, and how to find out what you need to know inside the law by using the latest AI-enhanced search methods, there are three kinds of probability-ranked search engines now in use for predictive coding.

Three Kinds of Second Filter Probability-Based Search Engines

After the first round of training, you can begin to harness the AI features in your software. You can begin to use its probability rankings to find relevant documents. There are currently three kinds of ranking search and review strategies in use: uncertainty, high probability, and random. The uncertainty search, sometimes called SAL for Simple Active Learning, looks at middle-ranked documents where the software is unsure of relevance, typically the 40%-60% range. The high probability search looks at the documents the software ranks as most likely to be relevant (or, conversely, most likely irrelevant). You can also use some random searches if you want, both simple and judgmental; just be careful not to rely too much on chance.
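These three selection strategies can be sketched in a few lines. This is a minimal illustration under assumed conventions, not any vendor’s actual API: documents are assumed to arrive as (id, probability) pairs from the ranking engine, and all names are hypothetical.

```python
import random

def select_batch(ranked_docs, strategy, batch_size=100):
    """Pick the next review batch from (doc_id, probability) pairs,
    using one of the three strategies described above."""
    if strategy == "uncertainty":          # SAL: middle-ranked, ~40%-60%
        return [d for d in ranked_docs if 0.40 <= d[1] <= 0.60][:batch_size]
    if strategy == "high_probability":     # CAL-style: top-ranked documents
        return sorted(ranked_docs, key=lambda d: d[1], reverse=True)[:batch_size]
    if strategy == "random":               # SPL: chance-based selection
        return random.sample(ranked_docs, min(batch_size, len(ranked_docs)))
    raise ValueError(f"unknown strategy: {strategy}")
```

The point of the sketch is only that the three approaches differ solely in which slice of the ranking they draw from; everything else about the review is the same.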

The 2014 Cormack Grossman comparative study of various methods has shown that the high probability search, which they called CAL, for Continuous Active Learning using high-ranking documents, is very effective. Evaluation of Machine-Learning Protocols for Technology-Assisted Review in Electronic Discovery, SIGIR’14, July 6–11, 2014. Also see: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training, Part Two.

My own experience also confirms their experiments. High probability searches usually involve SME training and review of the upper strata, the documents with a 90% or higher probability of relevance. I will, however, also check out the low strata, but will not spend as much time on that end. I like to use both uncertainty and high probability searches, but typically with a strong emphasis on the high probability searches. And again, I supplement these ranking searches with other multimodal methods, especially when I encounter strong, new, or highly relevant types of documents.

Sometimes I will even use a little random sampling, but the mentioned Cormack Grossman study shows that it is not effective, especially on its own. They call such chance-based search Simple Passive Learning, or SPL. Ever since reading the Cormack Grossman study I have cut back on my reliance on random searches. You should too. It was small before; it is even smaller now.

Irrelevant Training Documents Are Important Too

In the Second Filter you are on a search for the gold: the highly relevant, and, to a lesser extent, the strong and merely relevant. As part of this Second Filter search you will naturally come upon many irrelevant documents too. Some of these documents should also be added to the training. In fact, it is not uncommon to have more irrelevant documents in training than relevant, especially with low prevalence collections. If you judge a document, then go ahead and code it and let the computer know your judgment. That is how it learns. There are some documents that you judge but may not want to train on, such as the very large or the very odd, but they are few and far between.

Of course, if you have culled out a document altogether in the First Filter, you do not need to code it, because these documents will not be part of the documents included in the Second Filter. In other words, they will not be among the documents ranked in predictive coding. They will either be excluded from possible production altogether as irrelevant, or be diverted to a non-predictive coding track for final determinations. The latter is the case for non-text file types, like graphics and audio, in cases where they might have relevant information.

How To Do Second Filter Culling Without Predictive Ranking

When you have software with active machine learning features that allow you to do predictive ranking, you find documents for training, and from that point forward you incorporate ranking searches into your review. If you do not have such features, you still sort out documents in the Second Filter for manual review; you just do not use ranking with SAL and CAL to do so. Instead, you rely on keyword selections, enhanced with concept searches and similarity searches.

When you find an effective parametric Boolean keyword combination, which is done through a process of party negotiation, testing, educated guessing, trial and error, and judgmental sampling, you then submit the documents containing proven hits to full manual review. Ranking by keywords can also be tried for document batching, but be careful of large files that have many keyword hits simply because of their size, not their relevance. Some software compensates for that, but most does not. So ranking by keywords can be a risky process.

I am not going to go into detail on the old-fashioned ways of batching out documents for manual review. Most e-discovery lawyers already have a good idea of how to do that. So too do most vendors. Just one word of advice: when you start the manual review based on keyword or other non-predictive coding processes, check in daily on the contract reviewers’ work and calculate what kind of precision the various keyword and other assignment folders are creating. If it is terrible, which I would say is less than 50% precision, then I suggest you try to improve the selection matrix. Change the Boolean operators, or the keywords, or something else. Do not just keep plodding ahead and wasting client money.
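That daily precision check is simple arithmetic: relevant calls divided by total calls, tallied per assignment folder. A hypothetical sketch, where the folder names and data are invented and the 50% rework threshold is just the rule of thumb mentioned above:

```python
def folder_precision(coding_log):
    """Return reviewer precision per assignment folder.
    coding_log maps folder name -> list of coding calls,
    True for relevant, False for irrelevant."""
    return {folder: (sum(calls) / len(calls) if calls else 0.0)
            for folder, calls in coding_log.items()}

# Illustrative daily tally from two keyword-based assignment folders
log = {
    "keyword_A": [True, False, False, False],  # 25%: fix the selection matrix
    "keyword_B": [True, True, True, False],    # 75%: acceptable, keep going
}
needs_rework = [f for f, p in folder_precision(log).items() if p < 0.50]
```

Running this against each day’s coding log makes the low performers obvious before much client money is spent.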

I once took over a review project that was using negotiated, then tested and modified, keywords. After two days of manual review we realized that only 2% of the documents selected for review by this method were relevant. After I came in and spent three days of training to add predictive ranking, we were able to increase that to 80% precision. If you use these multimodal methods, you can expect similar results.

Basic Idea of Two Filter Search and Review

Whether you use predictive ranking or not, the basic idea behind the two-filter method is to start with a very large pool of documents, reduce its size with a coarse First Filter, then reduce it again with a much finer Second Filter. The result should be a much, much smaller pool that is human reviewed, and an even smaller pool that is actually produced or logged. Of course, some of the documents subject to the final human review may be overturned, that is, found to be irrelevant, False Positives. That means they will not make it to the very bottom production pool after manual review in the diagram at right.

In multimodal projects where predictive coding is used, the precision rates can often be very high. Lately I have been seeing that the second pool of documents, the one subject to manual review, has precision rates of at least 80%, sometimes as high as 95% near the end of a CAL project. That means the final pool of documents produced is almost as large as the pool after the Second Filter.

Please remember that almost every document that is manually reviewed and coded after the Second Filter gets recycled back into the machine training process. This is known as Continuous Active Learning, or CAL, and in my version of it at least, is multimodal and not limited to only high probability ranking searches. See: Latest Grossman and Cormack Study Proves Folly of Using Random Search For Machine Training, Part Two. In some projects you may just train for multiple iterations and then stop training and transition to pure manual review, but in most you will want to continue training as you do manual review. Thus you set up a CAL constant feedback loop until you are done, or nearly done, with manual review.
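That feedback loop can be reduced to a short sketch. The toy model below just counts words seen in documents coded relevant; a real predictive coding engine is far more sophisticated, and every name here is hypothetical. The structure of the loop, review the top-ranked batch, recycle the judgments into training, re-rank, repeat, is the point:

```python
class ToyModel:
    """Stand-in ranking model: scores a document by how many of its words
    have appeared in documents coded relevant so far. Purely illustrative."""
    def __init__(self):
        self.relevant_words = set()
    def score(self, doc):
        return sum(w in self.relevant_words for w in doc.split())
    def train(self, labeled):
        for doc, is_relevant in labeled:
            if is_relevant:
                self.relevant_words.update(doc.split())

def cal_loop(model, docs, reviewer, batch_size=2):
    """Continuous Active Learning: review the highest-ranked documents,
    recycle every human judgment back into training, re-rank, repeat."""
    relevant = []
    while docs:
        docs.sort(key=model.score, reverse=True)
        batch, docs = docs[:batch_size], docs[batch_size:]
        labeled = [(doc, reviewer(doc)) for doc in batch]
        model.train(labeled)            # feedback loop: train as you review
        relevant += [doc for doc, r in labeled if r]
    return relevant
```

In practice you would stop the loop when you are done, or nearly done, with manual review, rather than exhausting the whole pool as this sketch does.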

[Diagram: multimodal CAL feedback loop]

As mentioned, active machine learning trains on both relevance and irrelevance, although, in my opinion, the Highly Relevant documents found, the hot documents, are the most important of all for training purposes. The idea is to use predictive coding to segregate your data into two separate camps, relevant and irrelevant. You not only separate them, but you also rank them according to probable relevance. The software I use has a percentage system from .01% to 99.9% probable relevant, and vice versa. A near perfect segregation-ranking project should end up looking like an upside down champagne glass.

After you have segregated the document collection into two groups, and gone as far as you can, or as far as your budget allows, then you cull out the probable irrelevant. The most logical place for the Second Filter cut-off point in most projects is at 49.9% probable relevant and less. Those are the documents that are more likely than not to be irrelevant. But do not take the 50%-plus dividing line as an absolute rule in every case. There are no hard and fast rules to predictive culling. In some cases you may have to cut off at 90% probable relevant. Much depends on the overall distribution of the rankings and the proportionality constraints of the case. Like I said before, if you are looking for Gilbert’s black-letter law solutions to legal search, you are in the wrong type of law.
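The cut itself is trivial once the ranking is done; the judgment is all in choosing the cutoff. A sketch of the Second Filter cut with the more-likely-than-not default of 50%, using illustrative names only:

```python
def second_filter_cull(ranked_docs, cutoff=0.50):
    """Split (doc_id, probability) pairs at the cutoff: the upper pool
    goes on to manual review, the lower pool is culled as probable
    irrelevant."""
    review = [doc for doc, p in ranked_docs if p >= cutoff]
    culled = [doc for doc, p in ranked_docs if p < cutoff]
    return review, culled
```

In a proportionality-constrained case you might call it with cutoff=0.90 instead; nothing about the mechanics changes, only the judgment behind the number.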

[Diagram: upside down champagne glass ranking, split into two halves]

Almost all of the documents in the production set (the red top half of the diagram) will be reviewed by a lawyer or paralegal. Of course, there are shortcuts to that too, like duplicate and near-duplicate syncing. Some of the low-ranked, probably irrelevant documents will have been reviewed too. That is all part of the CAL process, where both relevant and irrelevant documents are used in training. But only a very low percentage of the probable irrelevant documents need to be reviewed.

Limiting Final Manual Review

In some cases you can, with client permission (often insistence), dispense with attorney review of all or nearly all of the documents in the upper half. You might, for instance, stop after the manual review has attained a well-defined and stable ranking structure. You might have reviewed only 10% of the probable relevant documents (top half of the diagram), but decide to produce the other 90% without attorney eyes ever looking at them. There are, of course, obvious problems with privilege and confidentiality in such a strategy. Still, in some cases, where appropriate clawback and other confidentiality orders are in place, the client may want to risk disclosure of secrets to save the costs of final manual review.

In such productions there are also dangers of imprecision, where a significant percentage of irrelevant documents are included. This in turn raises concerns that an adversary’s review of those documents could engender other suits, even if there is some agreement for the return of irrelevant documents. Once the bell has been rung, privileged or hot, it cannot be un-rung.

Case Example of Production With No Final Manual Review

In spite of the dangers of the unringable bell, the allure of extreme cost savings can be strong for some clients in some cases. For instance, I did one experiment using multimodal CAL with no final review at all, where I still attained fairly high recall, and the cost per document was only seven cents. I did all of the review myself, acting as the sole SME. The visualization of this project would look like the below figure.

[Diagram: two-filter culling with SME-only review]

Note that if the SME review pool were drawn to scale according to number of documents read, then, in most cases, it would be much smaller than shown. In the review where I brought the cost down to $0.07 per document I started with a document pool of about 1.7 Million, and ended with a production of about 400,000. The SME review pool in the middle was only 3,400 documents.

[Diagram: two-filter culling example with document counts]

As far as legal search projects go it was an unusually high prevalence collection, and thus the production of 400,000 documents was very large. Four hundred thousand was the number of documents ranked with a 50% or higher probable relevance when I stopped the training. I only personally reviewed about 3,400 documents during the SME review, plus another 1,745 in a quality assurance sample after I decided to stop training. To be clear, I worked alone, and no one other than me reviewed any documents. This was an Army of One type project.

Although I personally reviewed only 3,400 documents for training, I actually instructed the machine to train on many more documents than that. I just selected them for training without actually reviewing them first. I did so on the basis of ranking and judgmental sampling of the ranked categories. It was somewhat risky, but it did speed up the process considerably, and in the end worked out very well. I later found out that information scientists often use this technique as well.

My goal in this project was recall, not precision, nor even F1, and I was careful not to overtrain on irrelevance. The requesting party was much more concerned with recall than precision, especially since the relevancy standard here was so loose. (Precision was still important, and was attained too. Indeed, there were no complaints about that.) In situations like that the slight over-inclusion of relevant training documents is not terribly risky, especially if you check out your decisions with careful judgmental sampling, and quasi-random sampling.

I accomplished this review in two weeks, spending 65 hours on the project. Interestingly, my time broke down into 46 hours of actual document review time, plus another 19 hours of analysis. Yes, about one hour of thinking and measuring for every two and a half hours of review. If you want the secret of my success, that is it.

I stopped after 65 hours, and two weeks of calendar time, primarily because I ran out of time. I had a deadline to meet and I met it. I am not sure how much longer I would have had to continue the training before the training fully stabilized in the traditional sense. I doubt it would have been more than another two or three rounds; four or five more rounds at most.

Typically I have the luxury to keep training in a large project like this until I no longer find any significant new relevant document types, and do not see any significant changes in document rankings. I did not think at the time that my culling out of irrelevant documents had been ideal, but I was confident it was good, and certainly reasonable. (I had not yet uncovered my ideal upside down champagne glass shape visualization.) I saw a slowdown in probability shifts, and thought I was close to the end.

I had completed a total of sixteen rounds of training by that time. I think I could have improved the recall somewhat had I done a few more rounds of training, and spent more time looking at the mid-ranked documents (40%-60% probable relevant). The precision would have improved somewhat too, but I did not have the time. I am also sure I could have improved the identification of privileged documents, as I had only trained for that in the last three rounds. (It would have been a partial waste of time to do that training from the beginning.)

The sampling I did after the decision to stop suggested that I had exceeded my recall goals, but still, the project was much more rushed than I would have liked. I was also comforted by the fact that the elusion sample test at the end passed my accept-on-zero-error quality assurance test: I did not find any hot documents. For those reasons (plus great weariness with the whole project), I decided not to pull some all-nighters to run a few more rounds of training. Instead, I went ahead and completed my report, added graphics and more analysis, and made my production with a few hours to spare.
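For readers who want the math behind an accept-on-zero-error elusion test: if a random sample of the discard pile turns up zero relevant documents, an exact binomial calculation bounds how prevalent relevant documents could still plausibly be there. The function below is a standard statistics sketch, not the specific protocol used in this project:

```python
def max_elusion_rate(sample_size, confidence=0.95):
    """Upper bound on the discard pile's relevance rate when an elusion
    sample of `sample_size` documents contains zero relevant documents.
    Exact binomial: solve (1 - rate)**n = 1 - confidence for rate."""
    return 1.0 - (1.0 - confidence) ** (1.0 / sample_size)

# A 1,745-document sample with zero hits bounds elusion below about 0.2%
bound = max_elusion_rate(1745)
```

The familiar "rule of three" shortcut, roughly 3 divided by the sample size, gives nearly the same answer for samples of this size.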

A scientist hired after the production did some post-hoc testing that confirmed, at an approximate 95% confidence level, a recall achievement of between 83% and 94%. My work also withstood all subsequent challenges. I am not at liberty to disclose further details.

In post-hoc analysis I found that the probability distribution was close to the ideal shape that I now know to look for. The below diagram represents an approximate depiction of the ranking distribution of the 1.7 Million documents at the end of the project. The 400,000 documents produced (obviously I am rounding off all these numbers) were ranked 50% plus, and the 1,300,000 not produced were ranked less than 50%. Of the 1,300,000 Negatives, 480,000 documents were ranked with only 1% or less probable relevance. On the other end, the high side, 245,000 documents had a probable relevance ranking of 99% or more. There were another 155,000 documents with a ranking between 99% and 50% probable relevant. Finally, there were 820,000 documents ranked between 49% and 1% probable relevant.

[Diagram: probability ranking distribution of the 1.7 Million documents]
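Tallying those rounded counts confirms the arithmetic of the distribution:

```python
# Reported ranking distribution, in rounded document counts
negatives = {"<=1%": 480_000, "1%-49%": 820_000}    # culled, below 50%
positives = {"50%-99%": 155_000, ">=99%": 245_000}  # produced, 50% and up

produced = sum(positives.values())   # 400,000 produced
withheld = sum(negatives.values())   # 1,300,000 culled
total = produced + withheld          # 1,700,000 collected
```

The heavy concentration at the extremes (480,000 at 1% or less, 245,000 at 99% or more) is what gives the distribution its upside down champagne glass shape.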

The file review speed realized here, about 35,000 files per hour, and the extremely low cost of about $0.07 per document, would not have been possible without the client’s agreement to forgo full document review of the 400,000 documents produced. A group of contract lawyers could have been brought in for second pass review, but that would have greatly increased the cost, even assuming a billing rate for them of only $50 per hour, which was 1/10th my rate at the time (it is now much higher).

The client here was comfortable with reliance on confidentiality agreements for reasons that I cannot disclose. In most cases litigants are not, and insist on eyes on review of every document produced. I well understand this, and in today’s harsh world of hard ball litigation it is usually prudent to do so, clawback or no.

Another reason the review was so cheap and fast in this project is that there were very few transactional costs with opposing counsel, and everyone was hands off. I just did my thing, on my own, and with no interference. I did not have to talk to anybody; I just read a few guidance memorandums. My task was to find the relevant documents, make the production, and prepare a detailed report – 41 pages, including diagrams – that described my review. Someone else prepared a privilege log for the 2,500 documents withheld on the basis of privilege.

I am proud of what I was able to accomplish with the two-filter multimodal methods, especially as the work was subject to the mentioned post-review analysis and recall validation. But, as mentioned, I would not want to do it again. Working alone like that was very challenging and demanding. Further, it was only possible at all because I happened to be a subject matter expert on the type of legal dispute involved. There are only a few fields where I am competent to act alone as an SME. Moreover, virtually no legal SMEs are also experienced ESI searchers and software power users. In fact, most legal SMEs are technophobes. I have even had to print out key documents on paper to work with some of them.

Even if I have adequate SME abilities for a legal dispute, I now prefer a small team approach rather than a solo approach. I now prefer to have one or two attorneys assisting me with the document reading, and a couple more assisting me as SMEs. In fact, I can act as the conductor of a predictive coding project where I have very little or no subject matter expertise at all. That is not uncommon. I just work as the software and methodology expert; the Experienced Searcher.

Right now I am working on a project where I do not even speak the language used in most of the documents. I could not read most of them, even if I tried. I just work on procedure and numbers alone, where others get their hands in the digital mud and report to me and the SMEs. I am confident this will work fine. I have good bilingual SMEs and contract reviewers doing most of the hands-on work.

Conclusion

There is much more to efficient, effective review than just using software with predictive coding features. The methodology of how you do the review is critical. The two-filter method described here has been used for years to cull away irrelevant documents before manual review, but it has typically been used just with keywords. I have tried to show here how this method can be employed in a multimodal way that includes predictive coding in the Second Filter.

Keywords can be an effective method to both cull out presumptively irrelevant files, and cull in presumptively relevant, but keywords are only one method, among many. In most projects it is not even the most effective method. AI-enhanced review with predictive coding is usually a much more powerful method to cull out the irrelevant and cull in the relevant and highly relevant.

If you are using a one-filter method, where you just do a rough cut and filter out by keywords, date, and custodians, and then manually review the rest, you are reviewing too much. It is especially ineffective when you collect based on keywords. As shown in Biomet, that can doom you to low recall, no matter how good your later predictive coding may be.

If you are using a two-filter method, but are not using predictive coding in the Second Filter, you are still reviewing too much. The two-filter method is far more effective when you use relevance probability ranking to cull out documents from final manual review.

Try the two filter method described here in your next review. Drop me a line to let me know how it works out.

