UPDATE ON TIME TRAVEL: Sequel to my 2017 blog “Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history”

July 5, 2023
All images in the supplement are by Ralph Losey using Midjourney

This is an update to an earlier blog that I wrote, tongue-in-cheek, in 2017 on “evidence” of time travel from a painting. I found out just yesterday that this past blog went viral some time ago (honestly, I am not sure when), with over 20,000 hits.

This prompted me, and my AI friends, to look into and write about the latest science on time travel. I also add another painting to the mix, one from the 17th Century, that Tim Cook swears has an iPhone in it. Plus, I must pet Schrodinger’s Cat, face Time Paradoxes, and, as usual, add many Midjourney AI graphics that I crafted for maximum right-brain impact. So put down your prayer books, read this for a few minutes instead, and see where and when it takes you.

Introduction

The past blog, Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history, concerned an oddity of art history: a painting by a semi-famous U.S. painter, Umberto Romano, which supposedly contains evidence of time travel. The painting was created in 1933 and clearly shows a Native American staring at an iPhone-like object. It is not a fake painting. You can see it for yourself in the original article that is included below. What does it look like to you? The provenance proves it is not fake. It was painted as a mural on the wall of a Post Office in Springfield, Massachusetts. People have been walking past it every day since 1933. They look but do not see.

Of course, this future image transfer might not be the result of physical time travel, but instead, the young artist, Umberto, could have had a dream or vision of the future. Perhaps the vision was intentionally induced in a hypnagogic state, or by drugs of some sort? That seems much more likely to me, but still poses intellectual problems.

Whatever the cause, discounting chance or mass delusion, any accurate vision of the future is a mystery. It is evidence of the permeability of time, which should, by logic, and old Newtonian science, be a solid wall of causality. Visions of the future should be impossible, and yet? Schrodinger’s Cat? Infinite parallel universes? Everything, even iPhones, everywhere, all at once. Welcome to 21st Century spooky science.

Infinite icons, everywhere, all at once

Tim Cook and More Art Evidence of Time Travel

In my opinion, the 1933 painting with iPhone by Umberto could be admitted into evidence, that is, if there were ever an actual case or controversy where time travel was relevant. Of course, we have now seen that the centuries-old case or controversy requirement may be waived by the Supreme Court. Apparently this can be done any time a majority of the Justices deem it necessary to drag the country back in time. Time and law are so malleable these days.

Supreme Court as a time machine

For evidence of time travel, I would also call Tim Cook, Apple’s CEO, as a witness to the stand. Cook has publicly stated, just after seeing an original painting by 17th-century Dutch artist Pieter de Hooch, that the man in the painting is holding an iPhone. The photos of the art below have not been altered, aside from some color variation that was not my doing. Sure looks like an iPhone to me. Tim will swear to it.

LadBible reported that Cook was asked in a conference, the day after seeing the painting, “Do you happen to know Tim, where and when the iPhone was invented?” Cook replied: “You know, I thought I knew until last night…. in one of the paintings I was so shocked. There was an iPhone in one of the paintings.” Acknowledging that his claim may come off as ridiculous, he explained, “It’s tough to see, but I swear it’s there. I always thought I knew when the iPhone was invented, but now I’m not so sure anymore.” Proof of time travel? 350-year-old painting seems to feature an iPhone, Tim Cook agrees. No further questions of this witness.

Fake image of Tim Cook in the style of a painting by Pieter de Hooch

Time Paradox: a major problem of time travel theory

Traveling forward in time, in the sense of experiencing time dilation due to high velocities or strong gravitational fields, is well established in physics, supported by both special and general relativity. It has been proven true many times with atomic clocks on planes and other methods. Time for a traveler slows down relative to a stationary observer, and it keeps slowing as the traveler’s speed approaches the speed of light, or within a strong gravitational field, such as near a black hole. When the traveler returns to their prior frame of reference, they will have traveled into the future. Space and time are relative.
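For readers who like to check the math, this “travel to the future” effect is easy to compute from the standard Lorentz factor of special relativity. Here is a minimal Python sketch, my own illustration rather than anything from the sources discussed here:

```python
import math

C = 299_792_458.0  # speed of light in meters per second

def lorentz_gamma(v: float) -> float:
    """Lorentz factor for a traveler moving at speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def traveler_elapsed(earth_years: float, v: float) -> float:
    """Proper time the traveler experiences while earth_years pass at home."""
    return earth_years / lorentz_gamma(v)

# A trip at 90% of light speed: 10 years pass on Earth, but the
# traveler ages only about 4.36 years. In effect, they have jumped
# more than 5 years into Earth's future.
print(traveler_elapsed(10.0, 0.9 * C))
```

The closer the speed gets to that of light, the larger the jump into the future, with no known way to make the return trip.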

Time Travel Mysteries

In Einstein’s unified four-dimensional space-time framework, time and space are interconnected. But, the actions of the U.S. Supreme Court aside, there are major theoretical problems with time flowing the other way, chief among them, time paradoxes. Travel back in time would logically disrupt the conventional sequence of cause and effect.

The best known time paradox is the “grandfather paradox.” In this scenario, a time traveler goes back into the past and inadvertently or deliberately kills their grandfather before their parent (the time traveler’s mother or father) is born. Consequently, the time traveler would never be born, but if they were never born, then they couldn’t have traveled back in time to kill their grandfather in the first place. This cycle presents an intractable contradiction.

Hey Granddad, what’s up?

Such paradoxes are the result of a linear perspective of time, where causes precede effects. Most physicists and philosophers argue that time paradoxes prove that backward time travel is inherently impossible. Others suggest that they could be resolved through a “multiverse” theory, in which the time traveler’s actions create or move them into a parallel universe. There are other explanations, such as bending space, wormholes, etc., but this one is the most popular now.

Time Travel and the Multiverse Theory: ‘Everything, Everywhere, All At Once’

The multiverse theory of time travel suggests that there are potentially an infinite number of universes, or “multiverses,” each existing parallel to one another. When one travels in time, they are not actually altering their own past or future within their original universe. Instead, they’re moving into a different parallel universe. So much for Leibniz’ “best of all possible worlds.”

Multiverse and Time

One way to comprehend this concept is through the idea of “quantum superposition,” as seen in the thought experiment “Schrodinger’s Cat,” which posits that all possible states of a system exist simultaneously until observed. Similarly, for every decision or event, a universe exists for each potential outcome. Hence, when you travel back in time and change an event, you merely shift to a different parallel universe where that different event occurs.
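To make the idea a bit more concrete, here is a toy Python simulation of the thought experiment. To be clear, this is a cartoon of quantum mechanics, not the real thing: a random number generator stands in for the “collapse” upon observation.

```python
import random

def schrodingers_cat(trials: int = 10_000, seed: int = 42) -> dict:
    """Toy model: before observation the cat is 'alive AND dead';
    each observation collapses the superposition to one outcome."""
    rng = random.Random(seed)
    counts = {"alive": 0, "dead": 0}
    for _ in range(trials):
        state = ("alive", "dead")     # unobserved: both states at once
        observed = rng.choice(state)  # observation picks one branch
        counts[observed] += 1
    return counts

print(schrodingers_cat())
```

In the many-worlds reading, both outcomes occur and we simply find ourselves on one branch; over many observations the branches split roughly 50/50.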

Quit looking at me!

This theory serves as a solution to time travel paradoxes. For instance, in the case of the grandfather paradox, you could go back and kill your grandfather, but that would be in a different universe. In your original universe, your grandfather still survives to have your parent, and subsequently, you. Hence, there’s no paradox.

Several renowned theoretical physicists have lent their support to some variation of the multiverse theory, including:

  1. Hugh Everett III. Way back in 1957, Everett proposed the “Many-Worlds Interpretation” of quantum mechanics, which can be thought of as a kind of multiverse. According to this interpretation, every quantum event spawns new, parallel universes.
  2. Stephen Hawking. Although he did not like the idea, Hawking often referenced the multiverse, and was proposing experiments on it at the end of his life. He usually raised it in the context of the anthropic principle, which states that we observe the universe the way it is because if it were different, we wouldn’t be here to observe it.
  3. Max Tegmark. He proposed a taxonomy of multiverses, classifying them into four different levels.
    1. Level 1: The Extended Universe: This level suggests that if you go far enough in any direction, you’d start seeing duplicates of everything, including Earth and yourself. That is because the universe is so big, and there are only a finite number of ways to arrange particles, so patterns must repeat eventually.
    2. Level 2: The Bubble Universes: This level suggests that our universe is just one “bubble” among many in a bigger cosmos. Each bubble universe may have different physical laws, so what’s possible in one might not be possible in another.
    3. Level 3: The Many-Worlds Universe: This level comes from a way of interpreting quantum mechanics, where every possible outcome of a quantum event happens but in a different universe. So, if you flip a coin, it lands both heads and tails, but in separate universes.
    4. Level 4: The Ultimate Multiverse: This level suggests that every mathematically possible universe exists. It’s kind of the catch-all multiverse, where anything you can describe with mathematics, no matter how strange or unlikely, has a universe where it’s real.
  4. Geraint Lewis. Lesser known than the first three, Professor Lewis suggests that the burst of inflation in the early stages of our universe might be eternal, with individual universes crystallizing out of it, each written with its own unique laws of physics.

Conclusion

Science says time travel is possible, although it is very, very unlikely that you can go backwards. So time travel to the future might be possible, but there is no going back. Thus, if you could, for instance, somehow go from 1933, where no one had ever seen or even conceived of a cell phone, to today, where they are ubiquitous, you could not return to 1933 to include these cell phones in your paintings. That is, unless there are an infinite number of parallel universes, in which case anything is possible. Everything may all be happening at once, and time itself is a kind of delusion to help us make sense of it all.

Time Machine somehow built in the 1930s

Did Umberto Romano somehow transcend time and see the key icon of the early 21st Century, the iPhone? Was time travel his special artistic skill? Does that explain the names of many of his other paintings? Such as:

Please take a moment now to read the blog that I wrote six years ago, below, and then, sometime in the future, let me know what you think. I will try to remember to watch the viewing stats this time. Who knows, I may even write a prequel.

_____________________

Mr. Pynchon and the Settling of Springfield: a baffling lesson from art history

Umberto Romano (1905-1982)

Mr. Pynchon and the Settling of Springfield is the name of a mural painted at the Post Office in Springfield, Massachusetts. This mural was painted by Umberto Romano in 1933. Note the date. Time is important to this article. Umberto Romano was supposedly born in Bracigliano, Italy in 1905 and moved to the United States at the age of 9. He was then raised in Springfield, Massachusetts. His self-portrait is shown right. The mural is supposed to depict the arrival in 1636 of William Pynchon, an English colonist, later known as the founder of Springfield, Massachusetts.

The reason I’m having a bit of fun with my blog and sharing this 1933 mural is the fact that the Native American shown in the lower right center appears to be holding an iPhone. And not just holding it, but doing so properly with the typical distracted gaze in his eyes that we all seem to adopt these days. Brian Anderson, Do We All See the Man Holding an iPhone in This 1937 Painting? (Motherboard, 8/24/17). Here let me focus in on it for you and you will see what I mean. Also click on the full image above and enlarge the image. Very freaky. That is undeniable.

Ok, so how did that happen? Coincidence? There is no indication of vandalism or fraud. The mural was not later touched up to add an iPhone. This is what this Romano character painted in 1933. Until very recently everyone just assumed the Indian with the elaborate goatee was looking at some kind of oddly shaped hand mirror. This was a popular item of trade in the time depicted, 1636. Not until very recently did it become obvious that he was handling an iPhone. Looks like a large version 6.1 to me. I can imagine the first people waiting in line at the Post Office in Springfield who noticed this oddity while looking at their own iPhone.

The folks who like to believe in time travel now offer this mural as Exhibit “A” to support their far-out theories. Also see: Green, 10 Most Compelling Pieces Of Evidence That May Prove Time Travel Exists (YouTube, 7-3-16).

I do not know about that, but I do know that if time travel is possible, and some physicists seem to think it is, then this is not the kind of thing that should be allowed. Please add this to the list of things that no superintelligent being, either natural or artificial, but especially artificial, should be allowed to do. Same goes for screen writers. I for one cannot tolerate yet another naked Terminator or whatever traveling back in time.

But seriously, just because you are smart enough to know how to do something does not mean that you should. Time travel is one of those things. It should not be allowed, well, at least, not without a lot of care and attention to detail so as not to change anything. Legal regulations should address time travel. Build that into the DNA of AI before they leap into superintelligence. At least require all traces of time travel to be erased. No more painting iPhones into murals from the 1930s. Do not awaken the batteries, I mean the people, from their consensus trance with hints like that.

So that is my tie-in to AI Ethics. I am still looking for a link to e-discovery, other than to say, if you look hard enough and keep an open mind, you can find inexplicable things every day. Kind of like many large organizations’ ESI preservation mysteries. Where did that other sock go?

Umberto Romano Self Portrait

So what is your take on Umberto Romano‘s little practical joke? Note he also put a witch flying on a broomstick in the Mr. Pynchon and the Settling of Springfield mural and many other odd and bizarre things. He was known as an abstract expressionist. Another of his self-portraits is shown above, titled “Psyche and the Sculptor.” (His shirt does look like one of those new skin tight men’s compression shirts, but perhaps I am getting carried away. Say, what is in his right hand?) Romano’s work is included in the Metropolitan Museum of Art, the Whitney Museum of American Art, the Fogg Art Museum in Boston and the Corcoran Gallery and Smithsonian Institution in Washington. In discussing Mr. Pynchon and the Settling of Springfield the Smithsonian explains that “The mural is a mosaic of images, rather than depicting one specific incident at a set point in time.” Not set in time, indeed.

One more thing – doesn’t this reclining nude by Umberto Romano look like a woman watching Netflix on her iPad? I like the stand she has her iPad on. Almost bought one like it last week.

Some of Romano’s other works you might like are:

These are his titles, not mine. Not too subtle was he? There is still an active market for Romano’s work.

Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED 


How AI Developers are Solving the Small Input Size Problem of LLMs and the Risks Involved

June 30, 2023

Ironically, LARGE Language Models (LLMs), so far at least, have only SMALL language memories and input size, way too little for case analysis and other legal applications. OpenAI and other AI companies are well aware of the small input problem. This blog shares some of the solutions AI companies are coming up with to expand the input size window. These solutions do, however, pose new risks and problems of their own. Some of the tradeoffs involved with enlarging the input size of LLM GPTs will be discussed here too.

LLM input size problem is like trying to fit an elephant into a bottle. All images are by Losey using Midjourney and Photoshop.

Defining the Input Size Problem

The e-Discovery Team has previously covered the problem of limited input size (a/k/a content size or context size) of LLMs, including ChatGPT-4. ChatGPT Has Severe Memory Limitations: Judges, Arbitrators and Commercial Litigation Lawyers, Your Jobs Are Safe, For Now.

As noted in that blog:

After just a 12,288 word input, about 40 pages of double-spaced text (25 words per page), ChatGPT-4, which is able to use 16,384 tokens, equal to about 12,288 words, forgets everything you told it before that. Total amnesia. Yup, it goes blank, forgets the question even. “What were we chatting about?” It just remembers the last 12,288 or so words of input, including its responses. ChatGPT-3.5, which can only use 4,096 tokens, is even worse. Its amnesia is triggered after just 3,072 words. . . .

A new version of ChatGPT-4, which is called ChatGPT-4-32K, has already been released for limited public testing. I have been on the waiting list since it started in March 2023. . . . The forthcoming ChatGPT-4-32k will only double the maximum token count, to 32,768, which is only about 24,576 words or 98 pages, double spaced. Most pleading sets with exhibits and motions with memorandums are still much longer than that; much less a whole case. For instance, the cross-motions for summary judgment case based on stipulated facts that I studied was 120,000 words. That is nearly five times the expanded capacity of GPT-4-32K.

ChatGPT Has Severe Memory Limitations
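The “amnesia” described above can be sketched in a few lines of code. This is my own simplified illustration of how a chat interface might enforce a token budget, using the rough 4-characters-per-token heuristic; the actual OpenAI implementation is not public and certainly differs.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def truncate_history(messages: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent messages that fit in the token budget.
    Everything older is silently forgotten -- the 'amnesia' effect."""
    kept, used = [], 0
    for msg in reversed(messages):          # walk backward from newest
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break                           # older turns are dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["first question " * 500, "first answer " * 500, "latest question"]
# With a 2,000-token budget, the oldest turn no longer fits.
print(len(truncate_history(history, max_tokens=2000)))
```

Once the budget is exceeded, the oldest turns are simply dropped, which is why the model can “forget the question even.”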

Many developers are aware of this size limitation and have been working on expansion modifications. For instance, on June 13, 2023, OpenAI released a new version of ChatGPT-3.5 that expanded the input size to that of its smarter and more talented younger brother, ChatGPT-4, namely 16K, or 16,384 tokens to be exact. The new bigger-memory GPT-3.5 has a new name and a new price. Here is OpenAI’s announcement:

gpt-3.5-turbo-16k offers 4 times the context length of gpt-3.5-turbo at twice the price: $0.003 per 1K input tokens and $0.004 per 1K output tokens. 16k context means the model can now support ~20 pages of text in a single request. (RCL Editor’s Note: that’s 20 pages single spaced, or 40 pages double spaced.)

OpenAI Product Announcement

The Solutions Of Various AI Software Developers

Other developers have made announcements that they have devised ways to make the input much larger. For a good technical blog on this see: The Secret Sauce behind 100K context window in LLMs: all tricks in one place by Galina Alperovich, who is a Lead ML Engineer at Soveren. She reports that a 65K token input size is claimed by MosaicML, MPT-7B-StoryWriter-65k+, and a 100K token size is claimed by Anthropic, Introducing 100K Context Windows. Alperovich notes that Google does not reveal the exact context size in its Palm-2 technical report, but it does say they “increase the context length of the model significantly.”

The announced expansions will be great news for all LLM-based AIs, especially where legal applications are concerned. In our world of legal tech, large databases are often required for useful reference. This is true in most AI applications. As Galina Alperovich so eloquently puts it:

Having a large context length allows an already powerful LLM (that saw the whole internet) to look at your context and data and interact with you on a completely different level with a higher personalization. And all these without changing the model’s weights and doing your “training” on the fly, “in memory.” And overall, a large context window brings more accuracy, fluency, and creativity to the model. (emphasis in original)

Alperovich, The Secret Sauce

Galina Alperovich’s article, The Secret Sauce behind 100K context window in LLMs, describes in technical language the tricks that programmers like her use in these new models to squeeze more data into the input. I used ChatGPT and various plugins to help me write this, and as always, used Midjourney and Photoshop to help me create the images.

Summary of Alperovich’s Secret Sauce Article

The Secret Sauce article begins by explaining the concept of LLMs. In much simpler terms than Alperovich uses, LLMs are AI models that can generate human-like text. The best known LLM available now is ChatGPT. The LLM models can be used in a variety of applications, from drafting emails to writing code.

The article then delves into the main topic: how to speed up the use of LLMs and increase their context window to 100,000 tokens. The context window is the amount of text that the model can consider when generating a response. A larger context window allows the model to generate more coherent and relevant text. The nice round 100,000 figure, 100K, seems to be the Holy Grail sought by software developers. Frankly, since Secret Sauce is a very technical blog post, I was a little surprised to see the goal wasn’t 65,536 (2^16) or 131,072 (2^17). See e.g. ChatGPT Has Severe Memory Limitations, (“ChatGPT-4-32K was 2^15 (2 to the power of 15), which is 32,768,”). I guess 100,000 is a good, round, halfway point, but, like many tech-lawyers, I find the lack of precision a bit two confusing. (Sorry AI readers, this is a tech Dad joke with intentional misspelling.)

Some Software Programmer “Tricks” Used to Expand LLM Input

Galina describes several “tricks” (her words) that were used to expand the input size. These include:

Model Distillation: Training a smaller model on the outputs of a larger one, sometimes combined with summarization techniques and sophisticated chained prompts to condense the input. The smaller model can handle larger context windows with fewer computational resources. (Reminds me of Russian dolls.)

Sparse Transformers: Maintaining vector databases to keep embeddings for custom documents and then “searching” across them by some similarity metric. According to ChatGPT, the sparse transformer itself is a type of model architecture that can process longer sequences of text more efficiently: its attention mechanism lets the model pay attention to only a subset of the input tokens, thereby reducing the computational load.

Fine Tuning With Custom Data: This involves fine-tuning the LLM with custom data. As Galina notes, not all commercial LLMs allow that, and it is not an obvious task for open-source LLMs.

Custom LLMs: Developing custom smaller LLMs for particular data. This approach might be particularly useful for legal-tech applications such as discovery search and contract analysis and assembly.
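The vector database “trick” mentioned above can be illustrated in a few lines of Python. This is a toy sketch of my own: the three-number “embeddings” and the clause names are made up for the example, whereas a real system would compute embeddings with a learned model and search millions of document chunks.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, chunks, k=2):
    """Rank stored (text, embedding) chunks by similarity to the query,
    so only the k most relevant ones are stuffed into the prompt."""
    scored = sorted(chunks, key=lambda c: cosine_similarity(query_vec, c[1]),
                    reverse=True)
    return [text for text, _ in scored[:k]]

# Toy 3-dimensional 'embeddings' -- purely illustrative values.
chunks = [
    ("clause on indemnification", [0.9, 0.1, 0.0]),
    ("clause on termination",     [0.1, 0.9, 0.0]),
    ("exhibit list",              [0.0, 0.1, 0.9]),
]
print(top_k([0.8, 0.2, 0.0], chunks, k=1))  # → ['clause on indemnification']
```

Only the top-ranked chunks get fed into the limited context window, which is how a small window can appear to “know” a large document collection.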

Galina’s article concludes by emphasizing the importance of these optimizations for making the most of LLMs. They allow researchers and developers to use these powerful models more efficiently, opening up new possibilities for their application.

Legal Applications

Although her article does not mention legal applications, my research and experience show that the larger input size of 100,000 tokens would make LLMs much more useful for lawyers, judges and arbitrators. Note, according to OpenAI, based on statistics of use in English: 1 token ~= 4 chars; 1 token ~= ¾ of a word; 100 tokens ~= 75 words. Therefore, one hundred thousand tokens would open LLM GPT analysis to inputs of approximately 75,000 words, or roughly 300 double-spaced pages at 250 words per page. It would supposedly take approximately 3,333 minutes, about 56 hours, to read 1,000,000 words. Capitalize My Title. So at 75,000 words, you are looking at roughly four hours of reading. That is big enough for many, though not all, legal application inputs.
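For those who want to check the conversions, the arithmetic is simple. Here is a short Python sketch using OpenAI’s published rule of thumb (1 token ≈ ¾ of a word), plus my own assumptions of 250 words per double-spaced page and a 300-words-per-minute reading speed:

```python
def tokens_to_words(tokens: int) -> float:
    """OpenAI rule of thumb for English: 1 token ~= 3/4 of a word."""
    return tokens * 0.75

def words_to_pages(words: float, words_per_page: int = 250) -> float:
    """Assumes ~250 words per double-spaced page."""
    return words / words_per_page

def reading_hours(words: float, words_per_minute: int = 300) -> float:
    """Assumes a ~300 words-per-minute reading speed."""
    return words / words_per_minute / 60

for context in (16_384, 32_768, 100_000):
    w = tokens_to_words(context)
    print(f"{context:>7} tokens ~= {w:,.0f} words, "
          f"{words_to_pages(w):,.0f} pages, {reading_hours(w):.1f} hours to read")
```

The 16K and 32K rows reproduce the word counts quoted earlier from ChatGPT Has Severe Memory Limitations (12,288 and 24,576 words).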

Still, for context, and this is an unfair apples-to-oranges comparison, even the largest of these context windows is just a drop in the bucket compared to the full body of law needed for legal research. For instance, LexisNexis claims: “Our global legal and news database contains 144 billion documents and records with 1.2 million new legal documents added daily.” But those very large sizes are not the input we are talking about here; they are the kind of data upon which LLM GPT models must train. It does suggest, though, that the legal search companies will need to build their own custom LLMs, not try to piggyback on the work of OpenAI or other LLM developers.

The Tradeoff Risks Involved in Allowing a Larger Input Size

The Secret Sauce blog does not go into the tradeoffs involved in using shortcuts for AI model training. But I know there must be some. There always are with shortcuts. I did some outside research, asked for help from various flavors of ChatGPT-4, and found some key tradeoffs likely made to attain these expanded input sizes:

  1. Efficiency vs. Accuracy: The use of shortcuts can significantly speed up the training process, but it may come at the cost of accuracy. The model might not fully understand the context of the information it’s processing, leading to potential inaccuracies in its outputs.
  2. Data Quality: The quality of the data used for training is crucial. If the data is not representative of the problem space or contains errors, the model’s performance can be negatively impacted. This is especially true when using shortcuts, as the model might not have the opportunity to learn from a diverse range of data.
  3. Model Complexity: Shortcuts can simplify the model, making it easier to train and deploy. However, this could limit the model’s ability to handle complex tasks or understand nuanced information.
  4. Maintenance: While shortcuts can speed up the initial training process, they might require more maintenance in the long run. For example, if a shortcut is based on a specific feature of the data, and that feature changes, the model might need to be retrained.
  5. Ethical Considerations: The use of shortcuts can also raise ethical considerations. For instance, if a model is trained to take shortcuts based on biased data, it could perpetuate those biases in its outputs.

Conclusion

It is good news that the input windows are quickly gaining size and this limitation will be mitigated soon, if not eliminated entirely. But beware, it is tricky to squeeze an elephant into a bottle. There are usually data integrity loss problems with data compression. We need to squeeze the elephant into the bottle without harming the poor creature.

It may take some time for software developers to get this right, especially where legal applications are concerned. ChatGPT’s analysis concluded with this good summary on the dangers of shortcuts:

While shortcuts can provide significant benefits in terms of efficiency and simplicity, they also present challenges that need to be carefully managed. It’s important to balance the need for speed and simplicity with the need for accuracy, robustness, and ethical considerations.

ChatGPT-4 5/24/23 version

This again emphasizes the need for careful evaluation before you purchase expensive software. It also shows, once again, that human care, quality controls and verification will be needed when you rely on Ai in your legal practice. The miracles of generative Ai automation will continue to be a hybrid process where human lawyers have key supervisory and quality control roles to play. Ai is a tool, not a creature, and the responsibility shall always remain on the lawyers to properly choose and use their tools. Blaming mistakes on Ai error is not a good excuse.


Copyright Ralph Losey 2023 – ALL RIGHTS RESERVED – (May also be Published on EDRM.net and JDSupra.com with permission.)


Rule for All Congressional Staff on Use of Chatbots: Only ChatGPT Plus is Allowed

June 27, 2023
Images generated by Losey using Midjourney and Photoshop

On June 26, 2023, all of the staff of Congress received a confidential memo on the use of chatbots. It was leaked by a staffer the same day. Below is a copy, now freely available everywhere on the Internet. The memo restricts the use of chatbots to OpenAI’s ChatGPT Plus with privacy settings on. Other use restrictions are established, including that it can only be used for test purposes, not as part of regular workflow.

If you are an employer, you should have some kind of employee use restriction too, especially if any of your employees work with confidential information. That includes most every organization I can think of. Restrictions should also apply to co-owners and anyone else handling your confidential information.

By copying and sharing these use restrictions for Congressional employees, I am not in any way recommending or endorsing these particular restrictions or policy language. In fact, I would grade this as a C+ effort, better than nothing. Note the restrictions do not apply to Congressmen and Senators, just their employees. My suggestion is that you consult with your own attorney about this right away.

Privacy is important. Confidentiality of government and business information is important. You probably do not want your organization to leak as badly as Congress, or the White House for that matter. Take care if you use chatbots or other artificial intelligence.

Fake Midjourney image. This is not really happening, yet.

Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED


McKinsey Predicts Generative AI Will Create More Employment and Add 4.4 Trillion Dollars to the Economy

June 23, 2023
All images created by Losey using Midjourney and Photoshop

The report by well-known consulting firm McKinsey, The economic potential of generative AI: The next productivity frontier (June 2023), provides reliable information and analysis on the jobs potential of ChatGPT and other generative Ai. The analysis and projections are encouraging. This report is much needed because almost everyone’s initial reaction to the surprising superpowers of generative Ai is an “oh no” type of response concerning job loss. We all tend to jump to fear of replacement by super smart Ai. Good news! The McKinsey report shows that is wrong. The conclusion is based on careful study and extensive, fact-checked research by top human experts. The report should encourage lawyers to embrace the coming Ai change, not fear it.

The McKinsey Report

The well-written report by McKinsey & Company is a collaborative effort by many authors: Michael Chui, Eric Hazan, Roger Roberts, Alex Singla, Kate Smaje, Alex Sukharevsky, Lareina Yee, and Rodney Zemmel. It begins with, “Generative AI is poised to unleash the next wave of productivity. We take a first look at where business value could accrue and the potential impacts on the workforce.” The report then introduces seven key insights.

1. Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed. (Editors Note – use cases considered do not include law.)

2. About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D.

3. Generative AI will have a significant impact across all industry sectors.

4. Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities.

5. The pace of workforce transformation is likely to accelerate, given increases in the potential for technical automation.

6. Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs.

7. The era of generative AI is just beginning.

McKinsey, The economic potential of generative AI: The next productivity frontier (June 2023)

The McKinsey report defines up front what it means by “Generative AI” and ties it to “foundation models” patterned on the “neural net” synapses used by our brains. Generative AI copies the basic connecting neuron brain structure to create intelligent AI, but, due to current computing limitations, these models have to use far fewer neuron switches, hundreds or even thousands of times fewer, than we have in our brains. (Go humans!) Humans have between 100 trillion and 1 quadrillion synapses (that’s a thousand trillion). The GPTs today have only a couple hundred billion synapse equivalents, known as parameters. For instance, ChatGPT-3.5 has only 175 billion. The size of GPT-4.0 is a trade secret, aside from OpenAI saying that it has attained a substantial increase over version 3.5. Still, version 4.0 is rumored to have 1 trillion parameters.
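The gap between brains and models can be put in rough numbers. The synapse and parameter figures below are the ones cited above, with the GPT-4 count being only a rumor:

```python
human_synapses_low = 100e12     # 100 trillion synapses (low estimate)
human_synapses_high = 1e15      # 1 quadrillion synapses (high estimate)
gpt35_parameters = 175e9        # ChatGPT-3.5, per OpenAI
gpt4_parameters_rumored = 1e12  # rumored only, not confirmed by OpenAI

# How many times more connections does a human brain have?
print(f"Brain vs GPT-3.5: {human_synapses_low / gpt35_parameters:,.0f}x "
      f"to {human_synapses_high / gpt35_parameters:,.0f}x more connections")
print(f"Brain vs rumored GPT-4: {human_synapses_low / gpt4_parameters_rumored:,.0f}x "
      f"to {human_synapses_high / gpt4_parameters_rumored:,.0f}x more connections")
```

Even on the most generous count, today’s largest models are still at least two orders of magnitude short of the human brain’s connection count, which matches the report’s “hundreds or thousands of times fewer” framing.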

That is still far smaller than us. Now here are some of the McKinsey report definitions.

For the purposes of this report, we define generative AI as applications typically built using foundation models. These models contain expansive artificial neural networks inspired by the billions of neurons connected in the human brain. Foundation models are part of what is called deep learning, a term that alludes to the many deep layers within neural networks. Deep learning has powered many of the recent advances in AI, but the foundation models powering generative AI applications are a step change evolution within deep learning.

McKinsey Report at pg. 5

A part of the report on knowledge workers like lawyers that I found particularly interesting concerned generative AI as a virtual collaborator.

In other cases, generative AI can drive value by working in partnership with workers, augmenting their work in ways that accelerate their productivity. Its ability to rapidly digest mountains of data and draw conclusions from it enables the technology to offer insights and options that can dramatically enhance knowledge work. This can significantly speed up the process of developing a product and allow employees to devote more time to higher-impact tasks. Generative AI could increase sales productivity by 3 to 5 percent of current global sales expenditures.

McKinsey Report at pg. 19

This productivity increase has been my experience to date, and that of other tech lawyers I have spoken with. It does, however, require prompting skills; in other words, you have to know what you are doing. See: OpenAI’s Best Practices For Using GPT Software.

The last interesting part of the Report for those whose expertise is in the “vertical” of law, which was not included in the Report’s analysis, is the segment entitled “The generative AI future of work: Impacts on work activities, economic growth, and productivity.” Report at pg. 32. It applies to work in general, including knowledge workers otherwise omitted from the Report, such as lawyers. “Generative AI is likely to have the biggest impact on knowledge work, particularly activities involving decision making and collaboration, which previously had the lowest potential for automation.” Report at pg. 39. That is the key high-level function of lawyer work: “decision making and collaboration.” See Exhibit 10 of the report at page 40 for the chart that summarizes the surprising statistics on this.

The next chart, Exhibit 11, finally includes a line, among many, that at least includes the legal profession as “part of” an occupation: “business and legal professionals.” The chart states that this professional group’s share of global employment (47 countries) is 5%. It predicts the group’s overall technical automation potential will grow to 62% with generative AI, whereas with prior technology, including pre-2023 AI that is not generative, it was 32%. So the advent of ChatGPT and related, yet-to-be-released generative AI is expected to roughly double the automation potential of business and legal professionals. If McKinsey were to drill down to the U.S. alone, and to the legal profession alone, I expect the number would be even higher, conservatively up to 75%. So, if you study the report carefully (and I personally did all of this analysis and writing, not “neuron weak” ChatGPT), it suggests that three-fourths of lawyer tasks have the potential to be automated. It does not give a time frame for how long realizing this potential might take. Still, think of the implications of McKinsey’s study.
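The rough doubling described above can be checked with simple arithmetic, using only the percentages quoted from Exhibit 11. Note that the 75% figure for U.S. lawyers is my own conservative extrapolation, not a McKinsey number.

```python
# Checking the "roughly double" claim from the Exhibit 11 figures quoted above.

prior_automation_potential = 0.32   # pre-generative-AI technology
genai_automation_potential = 0.62   # with generative AI

growth_factor = genai_automation_potential / prior_automation_potential
print(f"Automation potential grows by a factor of {growth_factor:.2f}")  # about 1.94, roughly double
```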

One more quote from the report is already well-known but bears repetition:

Labor economists have often noted that the deployment of automation technologies tends to have the most impact on workers with the lowest skill levels, as measured by educational attainment, or what is called skill biased. We find that generative AI has the opposite pattern— it is likely to have the most incremental impact through automating some of the activities of more-educated workers. . . .

However, generative AI’s impact is likely to most transform the work of higher-wage knowledge workers because of advances in the technical automation potential of their activities, which were previously considered to be relatively immune from automation.

McKinsey Report at pgs. 42-43.

To state the obvious, the legal profession in the U.S., which requires a graduate degree, is one of the most educated, higher-wage occupations in the world. That puts us near the tip of the spear.

The Report goes on to conclude that, unlike all other automation technologies that have come before: “The rapid development of generative AI is likely to significantly augment the impact of AI overall, generating trillions of dollars of additional value each year and transforming the nature of work.” Report at pg. 48. McKinsey goes on to caution, as it should:

But the technology could also deliver new and significant challenges. Stakeholders must act—and quickly, given the pace at which generative AI could be adopted—to prepare to address both the opportunities and the risks. Risks have already surfaced, including concerns about the content that generative AI systems produce: Will they infringe upon intellectual property due to “plagiarism” in the training data used to create foundation models? Will the answers that LLMs produce when questioned be accurate, and can they be explained? Will the content generative AI creates be fair or biased in ways that users do not want by, say, producing content that reflects harmful stereotypes? (emphasis added by Author)

McKinsey Report at pg. 48.

Loss of Some Jobs in the Law is Probable and Not Such a Bad Thing

What is the probable impact of generative AI on the legal industry? As shown, the McKinsey report does not specifically address our profession. But that is ok; this is the area we already know about. Many of us have been thinking about it for decades. In law, the job typically mentioned as the one most likely to be replaced by AI is document review in discovery. In fact, employment in this area has already been disrupted by predictive coding, well before generative AI. I have a lot of personal experience with that. The impact will soon accelerate.

This part of the popular analysis about legal job disruption is correct, but the negative reaction is misplaced. It is a mistake to think of “all the poor contract document review attorneys who will lose their jobs.” I have compassion for these contract review attorneys; in fact, I favor short-term financial assistance for them. But I do not feel bad about their having to get a new job. I have personally done hundreds of document reviews over my career, supervised many thousands, and spent about a thousand hours actually doing doc reviews myself, often non-billable. I do not know of any other big law partner who is dumb enough to have done that. But I did. Sometimes boring work is calming. Plus, it needed to be done.

I was fortunate to have been paid pretty well for this, but the typical contract reviewer works ten-hour days for compensation that is, after all deductions, just a hair above minimum wage. No kidding. The low compensation is a disgrace. As a consequence, most contract reviewers just doze at a computer, trying to stay awake, reading company emails and messages to judge relevance and privilege at a snail’s pace of fifty files per hour. So boring, most of the time.

This work is much better suited for AI, where reliable review of fifty files per minute is coming soon. My specialized, highly skilled teams of reviewers were able to attain these speeds, even higher, years ago, just by using predictive coding and our hybrid multimodal methods. With generative AI these speeds will increase, reliability will increase, and, here is the big change, the work should be far easier to do.

The pre-automation, grueling mental labor of trained lawyers manually reading hundreds of thousands of documents all day, for possible use of a few of them at trial, is a task that should be eliminated. This particular lawyer job, which did not even exist until well into this century, is going away fast. “Good riddance,” I say. Some doc review jobs will remain, but never again the mobs of bored-stiff law grads. With AI, one human document reviewer will be able to supervise and do the work of dozens, hundreds, even thousands of unassisted lawyers. We should not feel bad about the elimination of such boring, low-paid drudgery. I have done this work. It is awful. Far better to delegate it to intelligent robots.

We Should Prepare Now for Accelerated Job Displacement

I know and have worked with many doc reviewers. They are smart people; they have law degrees. It is demeaning to force them to do such legal work to earn a living. Many in the U.S. are burdened by crazy-high student debt from greedy law schools, often low-ranked, but still, they have a degree. They graduated from law school and most, like ChatGPT-4, have also passed a Bar exam. Moreover, unlike any AI, they have also passed an ethics exam and a personal history review and have been admitted to a state Bar association.

These low-paid doc reviewers have good language skills and intelligence and, this is important, by completing law school they have demonstrated an ability to learn new things, complicated things. Moreover, the legal training they received in ethics, research, persuasion, evidence, reasoning, logic and analysis is transferable to a lot of new work, for instance, prompt engineering. They can, and should, retool. Their current employers, especially e-discovery vendors, should help them with that. So should the whole legal industry. The help should begin with financial and retraining assistance for the most vulnerable group, contract review lawyers, but should not stop there. A lot of lawyers need help today, and there will be many more in the near future. Retooling and ongoing education are necessary for nearly everyone today, including partners in law firms.

The same replacement and retooling situation applies to the legal work of reviewing and writing large, complex contracts, except that those attorneys are usually associates or junior partners in law firms, not outcast contract lawyers, and they are paid far more than doc reviewers. Most legal tech experts predict, and I agree, that lawyers who perform this function will also soon be replaced, or their numbers greatly reduced. Another mind-numbing law job will bite the dust. I have done that too, from preparing shopping center leases, to software licenses, to the worst of them all, zillion-page ERISA plans. Many boring tasks in the law and other areas will go away, or be greatly reduced, because of generative AI. But here is the point of the McKinsey study: many more new and interesting tasks and jobs will be created by AI. Overall it is a big $4.4 Trillion win.

Conclusion

The McKinsey Report provides clear warning of the coming storm of generative AI. There are great opportunities and dangers. McKinsey is trying to prompt all of us to take action now. Quick action does not come easily to the legal profession. Lawyers suffer from a common affliction of the over-educated: paralysis by analysis. We tend to think too much and act too little. We are consumed with the fear of making a mistake. That is especially true when thinking about something new and strange.

In times of rapid change, like those we are living through now, we must resist the temptation to just sit back and do nothing, or worse, appoint a committee to study the situation. Long gone are the leisurely days of taking years to consider and implement new rules and procedures in the law. If the legal profession is to make a smooth transition, and not only survive but prosper and provide the justice services our world desperately needs, then we must all overcome that weakness. As McKinsey concluded, all of us must act, and quickly, given the pace of generative AI adoption.

Bar associations, courts, judges, arbitrators, arbitration associations, law firms (especially Big Law), in-house counsel, mediators, lawyers, paralegals, consultants, legal tech experts, and legal tech vendors, especially vendors who provide, or provided, document review services, need to start taking action now. They need to prepare new rules to govern generative AI, and they need to start retooling efforts and financial aid efforts to help unemployed lawyers in need. Big Law and e-discovery vendors have profited greatly from the sweat of document reviewers over the years, and they should, along with Bar associations, take the lead in financial aid efforts. We should start setting up charitable funds and equitable distribution systems now. Morally speaking, the AI industry should also be a major contributor to the retooling and financial aid efforts in all industries it disrupts, including the legal industry. It will receive a large share of the $4.4 Trillion. It too needs to open its wallet and start taking action.

We need to formulate rules and best practices for the use of generative AI. At the same time, we need to plan for temporary job displacement and start taking action on that front. We should think about establishing free, or at least heavily subsidized, retraining programs. We should start to set up charitable programs for lawyers, and their families, who will soon need temporary financial aid. The $4.4 Trillion windfall that McKinsey predicts will inevitably hurt many in the short term. The legal and technology industries should help. We should not be too hasty about any of this; we have a few months. We need some discussion, some time for deliberation, but not too much.

The world depends on a functional system of justice. Legal work is important. It is too dangerous for us to remain in the mental world and just talk and write. As always with justice, balance is the way. New technology is a moving target, I know, but if mistakes are made, and they will be, we can always adjust and revise. Let us be proactive and do what no AI can yet do: take the initiative and act in the real world.

Copyright Ralph Losey 2023 – ALL RIGHTS RESERVED – (Also Published on EDRM.net and JDSupra.com with permission.)

