Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit

June 19, 2025

Ralph Losey. June 19, 2025.

Henry A. Kissinger co-wrote his last book at the age of 100 with tech giants Eric Schmidt and Craig Mundie. Genesis makes clear that what we do next with AI could be our greatest triumph or our gravest mistake. Here I review the book and, to the extent necessary, the legend behind it, Henry Kissinger.

Henry Kissinger’s Genesis was published posthumously in late 2024. He had great technical help and was prodded to write the book by his two co-authors: Eric Schmidt, the former CEO of Google, and Craig Mundie, the former Chief of Research at Microsoft. Both knew Kissinger well before starting to write the book with him, a book which, given that he was 99 when they began, they all knew would be his last. This is Kissinger’s second book on AI with Eric Schmidt. His first, The Age of AI: And Our Human Future, was published in 2022 with a third co-author, Daniel Huttenlocher, a Professor and Dean at MIT.

Like many, I had mixed feelings about Henry Kissinger because of his work for Nixon during the Vietnam War, but I was still persuaded by Eric Schmidt’s many videos to give Genesis a try. Schmidt has been tirelessly promoting the book and Kissinger’s strategic insights on AI. He even created a slick promotional video for Genesis that uses an AI-enhanced voice speaking Kissinger’s own words (I suggest you click on it now to prep for my all-too-human book review). Also see, Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger (Nixon Foundation, 3/5/25) (more personal details on Kissinger than any of the dozens of other Schmidt interviews).

AI-generated image in the style of an elder Kissinger. All images in the article are by Ralph Losey using AI (except the two public domain photos noted).

In the Nixon Foundation video Schmidt states that Kissinger was the leader in the writing and editing of Genesis and that Henry was very meticulous and dedicated to it. Id. at video 27:46. Henry finished the last chapter the week before he died on November 29, 2023. Id. at video 35:50. At the same time he was writing Genesis, Kissinger was co-authoring an article with his diplomatic colleague, Graham Allison. The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe (Foreign Affairs, 10/13/23). Yes, at the end of his life Henry Kissinger was focused on the power, promises and severe dangers of AI, including the danger of war with China posed by possible superintelligence.

Having now read the book I can well understand why Schmidt is urging everyone to read Genesis. We are living in dangerous times where strategy and diplomacy are more important than ever. Henry Kissinger’s last book is a red flag, an outline of solutions, and a beacon of hope. I am happy to recommend it.

Schmidt and Kissinger meeting with government officials in Beijing. Real meeting, fake AI image of it by Losey.

Kissinger’s Inside Briefings About AI

Schmidt went on to share in the Nixon Foundation video that he introduced Kissinger to both Demis Hassabis and Dario Amodei. Henry became friends with them and had many in-person and Zoom conversations. Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger, video at 29:30.

With their help, and especially that of Schmidt and Mundie, Kissinger came to understand that AI involved the Midas Touch archetype, not only for its profitability, but as a warning to be careful what you ask for. Our dream of superintelligent AI, one well beyond our comprehension, could easily become a nightmare. Genesis, in Kissinger’s words, “examines what AI means to humanity and explores solutions to the challenges it poses.”

AI Image of Hassabis, Amodei, Schmidt, Kissinger.

In fact, Eric tells the story of when Kissinger asked to test out Dario’s latest AI models. Amodei told Henry to come up with a prompt, a question, which he did: “Design a new religion that will spread rapidly in today’s age.” Id. at video 30:00. Kissinger was blown away by the AI’s detailed response and plan of action. Schmidt says it was that demonstration that caused Kissinger to understand fully, for the first time, the power of the AI revolution underway. Henry had some hands-on AI experience and that drove him, as it does me, to think and write about it obsessively.

AI image of Henry writing about AI and religion. Who would expect he’d do that at age 100 just before he passed away?

I thought it would be fun to run the same prompt on OpenAI’s new model, o3 Pro, released on 6/12/25. OpenAI states that o3 Pro is its best model for reasoning. It is certainly far ahead of the Anthropic AI that Kissinger used a few years ago. It took o3 Pro thirteen minutes and two seconds of reasoning time before it responded. That is a very long time for AI, which thinks millions of times faster than we do. In the end, it did figure out how to create a new, very appealing religion.

In fact, like Kissinger before me, I was blown away by its frighteningly well-considered plan. The religion proposed is called “Synterra Path – a mash‑up of syn (“together”) and terra (“earth”).” You can see the full eleven-point plan for yourself in the attached transcript, which does not include the extensive sources that o3 Pro also provided. Please do not attempt to implement the plan for Synterra Path, which is already underway, or the AI lawyer agents will file suit.

The new AI religion where Kissinger is the prophet who asked AI first. Try Henry’s prompt yourself!

Genesis Is a Short Book by Kissinger Standards

At 218 pages Genesis is a short read, but to be honest, not an easy read. It takes concentration. The AI parts are fairly easy and beautifully explained but the complex Kissinger strategy and philosophy parts are more difficult. Those Kissinger insights are also what make Genesis a must read for anyone trying to understand the AI Age.

The book is very short by the standards of Henry Kissinger. He is famous for many things, but one you might not have heard of is the controversy surrounding his undergraduate senior paper at Harvard in 1950, The Meaning of History: Reflections on Spengler, Toynbee and Kant. This paper caused Harvard to adopt a 35,000-word limit rule for senior papers that stands to this day. You see, the paper young Henry, shown below, submitted to his professors was over 400 pages long!

Kissinger in 1950 Harvard Yearbook, public domain.

Young Kissinger did not talk much. Instead, he wrote and wrote. A few years after his senior paper fiasco, Harvard gushed over his Ph.D. thesis: Peace, Legitimacy, and the Equilibrium (A Study of the Statesmanship of Castlereagh and Metternich). It won many awards and led to Harvard making him a professor. His word-generation skills equaled or exceeded the best generative AI of today.

At the end of his life Kissinger was still writing. He somehow crammed his one hundred years of insights into the 218 pages of Genesis. So admittedly, it is a challenging read, and yes, it would take about 400 pages for my AIs and me to totally unpack it, but don’t worry, that’s not happening. Ask your AI to do it. Hopefully it understands statesmanship, Henry Kissinger and Immanuel Kant.

AI image of Kant and Kissinger.

Henry Kissinger in WW II

To understand the book, more information about Kissinger’s formative years is required: the years before he became famous as a Harvard professor, Richard Nixon’s National Security Advisor, Secretary of State, and controversial Nobel Peace Prize winner. You need to understand first of all that Kissinger was born and raised in Germany in a Jewish family and suffered persecution as a boy. In 1938, when Henry was a teenager, he, his parents, and his younger brother were among the lucky few to escape Nazi Germany and immigrate to America.

Henry was drafted into the Army in 1943 and, while in training at Camp Croft, became a U.S. citizen. He was then shipped to France as a private, and since he was obviously smart and spoke German, was assigned to an intelligence unit. Young Henry Kissinger saw combat right away as a kind of spy at the front lines. He even volunteered for hazardous intelligence duties during the Battle of the Bulge.

On April 10, 1945, at the age of 21, Henry participated in the liberation of the Hannover-Ahlem concentration camp, a subcamp of the Neuengamme concentration camp. At the time, Kissinger wrote in his journal, “I had never seen people degraded to the level that people were in Ahlem. They barely looked human. They were skeletons.” Isaacson 1992, pp. 39–48. For more details and photos see Henry Kissinger’s World War II (Warfare History Network, June 2018). Both Eric Schmidt and I think this was a turning point in his life.

AI generated image of liberation of concentration camp with young Kissinger on the far right.

Kissinger was relatively silent about his wartime service. In fact, he rarely spoke at all as a boy and young man. ‘Too shy‘ is what they called it back then. Can you imagine what it must have been like for a young Jewish man on the spectrum to walk into a Nazi concentration camp in his home country? He saw people, his people, barely alive; prisoners who had been treated as dirty things, with no human dignity or respect for their lives at all. He turned to the German philosopher Immanuel Kant for comfort of sorts and, according to his friend Eric Schmidt, decided at that time to dedicate his life to a “higher purpose” of preventing the horrors of war. Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger, video at 19:30.

According to Kissinger’s biographer, Walter Isaacson, Henry never lost his strong German accent because he suffered from extreme shyness as a child and that made him hesitant to ever speak. Isaacson, Kissinger: A Biography (Simon & Schuster 1992). It remained a very strong accent his whole life. Eric Schmidt tells the story that despite Google’s best efforts, neither the German nor English language AI could understand his speech well enough to transcribe it. Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger video at 12:12.

As the war against Germany ended, and after the shock of seeing near-dead prisoners liberated from a concentration camp, Kissinger was assigned to the Counter Intelligence Corps (CIC), where he became a CIC Special Agent. Henry quickly received a field promotion to sergeant and was put in charge of a team in Hanover, Germany, tracking down the hated Gestapo officers and saboteurs. Once discovered, they were tried and imprisoned or hanged. After seeing a concentration camp, that must have been satisfying work for Kissinger. He was awarded a Bronze Star for his service.

AI image of what Kissinger might have looked like as a U.S. Army intelligence agent hunting Gestapo intelligence agents. Click here to see my AI video visualization of his work. Upcoming Netflix series?

In June 1945 he was promoted again and made commandant of the Bergstraße district of Hesse, Germany, with responsibility for denazification of the district. In 1946, Kissinger was reassigned to teach at the European Command Intelligence School at Camp King in Germany. He continued to teach there as a civilian employee following his separation from the army. Kissinger later recalled that his experience in the army “made me feel like an American.” Isaacson, Kissinger, p. 695.

Kissinger’s Kant, AI, Inherent Dignity and Dogs

Obviously, the deep thoughts of this legend impressed and influenced his co-authors, Schmidt and Mundie. They are elite tech scientists and businessmen who readily admit to having had no time in their past for social studies. Henry Kissinger politely viewed them as one-dimensional scholars, not fully educated. Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger video at 18:40 (“technology people don’t understand history, people, social dynamics, politics“). They were not polymaths with great interdisciplinary knowledge like Kissinger; they were just trained in a science/math bubble. For that reason, Kissinger told Schmidt that he “should not be in charge of anything.” Id.

That’s a kind of funny thing to say to the former CEO of Google and one of the most successful business leaders of our day. Surprisingly, Schmidt agreed with Kissinger, saying “it would be nice if there were more than just the tech people making these decisions.” Id. I wholeheartedly agree, and so does Eric Schmidt, one of the richest people in the world ($32 Billion), but as Schmidt pointed out to Kissinger, that is not likely in profit-driven companies.

Kissinger learned from talking to the lead technology people that they did not understand psychology, international law, geopolitics, diplomacy, warfare, history, or philosophy, much less Kissinger’s favorite, the notoriously difficult Immanuel Kant (1724-1804) and Kantian ethics. Lucky me, I was forced to study Kant while studying philosophy in Vienna. Immanuel is generally considered to be the greatest German philosopher. Immanuel Kant believed that all humans have the right to common dignity and respect. It was part of his famous categorical imperative that humans must never treat others merely as a means to an end, but always as ends in themselves. This is the opposite of what many people in fact think and do, especially the Nazis that Henry fought.

Kissinger puzzled over whether AI might someday deserve this dignified treatment. Right now, we treat it as a tool, a pet we control. That’s appropriate now, but what happens when it’s smarter than we are? Could the tables then be turned? Kissinger worried about that too, that AI might someday advance so far as to rob humanity of its dignity. Maybe we would someday be the less intelligent beings on a leash, the spoiled pets of super advanced AI.

Pampered rich human taken for a walk by his AI owner.

This is a fate some people might welcome, especially a dog lover like me, but not Henry. I kind of doubt he ever had time for a dog. As Schmidt puts it, and I paraphrase, it is important that when we are no longer the smartest beings on Earth, we control our masters better than dogs control us. Again, I suspect Schmidt has not spent much time with dogs either, at least not ones like mine that seem pretty good at controlling their owner. Beam me up for a walk on the Moon please, AI master, I’m bored.

Pet human likes all the great things his AI owner does for him, like perfect health and beaming up for a walk on the Moon.

This Kantian ethical view of human dignity influenced the thinking of Kissinger on a variety of AI topics discussed in Genesis. See for instance Genesis at page 205:

As a starting point we would encourage a definition of dignity. . . . Without a definition of dignity, we would not know if and when AI, given enough faculties, could become a being of dignity, could stand fully in place of a human, or could be entirely unified with a human. An AI, even if sustainably proved to be not human, might instead constitute a member of a separate, similarly dignified category that would nonetheless deserve its own, equal standard of treatment.

[We] encourage inclusive coexistence with AI while avoiding reckless attempts at premature coevolution.

For these reasons the authors conclude at page 207: “that humans retain and exercise the power of conscious choice in the age of AI.” The authors say we must be free and not let AI control us, no matter how attractive the much loved, pampered pet role may seem.

This AI is so nice to her pet human. He doesn’t seem to mind.

Genesis on Managing Emergence of Superintelligent AI

The authors go beyond intellectual discussion and take positions on several issues, and when they do so, they take pains to use words such as “we believe.” That phrasing signals a key issue they discussed at length. One such topic is “AI emergence,” covered in Chapter 5 on Security.

We believe there will not be just one supreme AI but rather multiple installations of superior intelligence in the world. . . . Our strongest creations, acting as countervailing forces, could be better equipped than humans to exert and maintain an equilibrium in global affairs, inspired (but not constrained) by human precedent. Non-human intelligence could thus manage its own emergence, at least in the realms of national security and geopolitics. . . .

No doubt, it is a risk for AI to assume early and sustained responsibility for the species and societies behind its own conception, but the traditional pathways, which require perfection in human performance, may be even riskier. Best, in our current view, would be to have AI working before, and not after, humanity has to confront the proliferation of new threats to survival. The appropriate question under this assumption is this. How can humans accelerate only desirable pathways for AI while delaying the undesirable?

We believe that in diplomacy, defense, and perhaps elsewhere, some of the risks of AI can be managed successfully only by AI itself. . . .

This is one especially poignant instance of the dilemma of dependence—and subsequent perceived inferiority—explored in an earlier chapter. But, in the case of our security, unlike that of our displacement in scientific or other academic endeavors, we may more readily accept the impartiality of a mechanical third party as necessarily superior to the self-interestedness of a human—just as humans easily recognize the need for a mediator in a contentious divorce. It is our belief, and hope, that in this case some of our worst traits will enable us to exhibit some of our best: that the human instinct towards self-interest, including at the expense of others, may prepare us for accepting AI’s transcendence of the same.

Genesis at pp. 120, 124, 135-136.

Henry Kissinger had personal experience of the superiority of AI in some fields. For instance, Schmidt says that Kissinger, whom he calls the greatest diplomat in the world, naturally liked to play the board and video game of Diplomacy, and he found that the AI systems could play as well as he could. Eric Schmidt on AI, Foreign Policy, and working with Dr. Henry Kissinger video at 25:00.

Henry Kissinger loses again to the little AIs and doesn’t like it.

Genesis on the Problems and Pleasures of Prosperity

In Chapter Six, on Prosperity, “the authors of this book believe that AI could conceivably be harnessed to generate a new base-line of human wealth and well-being.” Id. at 148. I for one am glad to see they all agreed on that.

They also agree on a few basic worries and likely outcomes pertaining to prosperity: “We do worry that a great fraction of humans could become primarily passive consumers of AI-generated content.” Id.

Just relax, sit back and enjoy it. What kind of life is that?

Kissinger and friends then go on to observe in Genesis:

Our concern about human passivity is not about the human loss of paid work. We already have a prototype of how people live when they can have what they want without working. We call them the rich and the retired. . . .

The adjustment to abundance is likely a problem of transition rather than a permanent challenge. Some will initially perceive the introduction of machine labors as depriving them of their primary source of fulfillment and joy. No doubt this will be a jarring experience. But to us it seems likely—not as a response to our exhortation, but rather as an outgrowth of human instinct—that, given time, humans would choose to persevere, perhaps in new avenues or as partners of AI, avoiding atrophy and instead excelling as thinkers and doers. Ultimately, if we establish the needed systems for distribution, connection, participation, and education, humans–empowered and inspired by AI–may continue working not for pay, but for pleasure and pride.

Id. at 158-159.

Better image of AI as partner and coach.

Key Question of Genesis:
Will we become more like them, or will they become more like us?

In Chapter Eight, on Strategy, a key question introduced in earlier chapters is resolved, at least somewhat, by the authors’ agreement on the best answer.

To our minds, one question must define our human strategy in this new age of reckoning. That question is this: Will we become more like them, or will they become more like us?

Id. at 184-185.

The authors then discuss possible redesigns of the human form, including implants and DNA alterations, so we could be more like them. Fortunately, they agree that extreme self-redesign “may not be necessary” and is anyway “generally undesirable.” They point out that:

‘Upgrading’ ourselves biologically might backfire to become a greater limitation on ourselves. . . . If we are unwilling or unable to become more like them, we must, while we are able, find ways to make them more like us. Towards this end, we need to apprise ourselves more fully not only of the essential and evolving nature of AI but also of humanity’s own nature, and we must attempt to encode these understandings in our machines. If we are to entwine ourselves with these non-human beings, and yet retain our independent humanity, these efforts are essential.

Id. at 190. I concur with their opinion. Know thyself.

Create AI in alignment with your higher self.

The Book’s Conclusion

The concluding paragraphs to Genesis at page 218 are clear, accent free, and very well written. They represent the last few days of the writing and life of Henry Kissinger. These words ring the bell for a new beginning for all mankind:

Neither blind faith nor unjustified fear can form the basis of an effective strategy; one needs self-doubt to have knowledge, but self-confidence to act. Indeed, in the age of AI, this is all the more urgent. We must try to understand the challenges that AI will present even as we lack the prior exposure or the essential experience to guarantee the accuracy of our comprehension. And even as we navigate this daunting task, we must also, to avoid a passive future, surmount the many difficulties already facing our species.

While some may view this moment as humanity’s final act, we perceive instead a new beginning. The cycle of creation—technological, biological, sociological, political—is entering a new phase. That phase may operate under new paradigms of, among other things, logic, faith, and time. With sober optimism, we may meet its genesis.

Click here for intro to a Losey movie on a new AI paradigm of logic, faith, and time. Here’s another, The cycle of creation.

My Conclusion: Impressive Man, Impressive Book

I was impressed by the young Henry Kissinger, who overcame severe hardships: persecution as a Jewish boy in Nazi Germany, escape to a strange land as an immigrant, then joining the U.S. Army and, a few years later, fighting on the front lines as an intelligence scout, helping liberate a concentration camp, and hunting down and prosecuting Gestapo criminals.

I was impressed by Kissinger’s intellectual curiosity and breadth of knowledge, which caused his technology co-authors to label him, in awe, a polymath.

I was in awe of Henry’s relentless writing output, which continued to the last few days of his very long, one-hundred-year life. Physically spent but mentally as sharp as a tack. Incredible.

I was impressed by Kissinger’s unique insights and warnings about the impact of AI on humanity, both psychological and geopolitical.

Finally, I was impressed by Henry Kissinger’s hope, which I share, about the great potential for good of an ethically aligned, superintelligent AI, and the chance, if we work hard, that it will help humanity to achieve a far better future.

AI using propaganda poster art style.

As usual I provide an AI podcast where two young techie AIs share their slant on things. Echoes of AI: Henry Kissinger and His Last Book – GENESIS: Artificial Intelligence, Hope, and the Human Spirit. Two Google Gemini AIs generated a 16-minute podcast talking about this article. They wrote the podcast, not me.

Ralph Losey Copyright 2025. All Rights Reserved.


From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice

June 10, 2025

By Ralph Losey. June 10, 2025.

All graphics in this article by Ralph Losey using a variety of AI tools. Click here or image to see animated YouTube video of this image, also by Losey.

Introduction: Beyond the Prompt Era

The legal profession is undergoing a profound shift. For decades, the integration of computing in law was incremental—word processors, databases, legal research platforms. The advent of generative AI in 2022 brought a leap forward, with tools like ChatGPT, Claude, and Gemini able to respond to natural language prompts with astonishing abilities. Yet even these breakthroughs only marked the beginning. The next phase, where AI takes actions for you in the real world, is emerging now. Once again, OpenAI led the way in January 2025 by releasing its experimental agentic software. Introducing Operator (OpenAI, 1/23/25) (research preview of an agent that can use its own browser to perform tasks for you).

Click here to see YouTube animation.

This new breed of generative AI does more than answer; it acts. We talk to it and it acts for us. This is called agentic AI—a category of artificial intelligence system that is capable of autonomous goal pursuit, strategic reasoning, and complex task execution across multiple steps and tools.

These systems are operational. They don’t just assist lawyers with knowledge; they act for them. Very soon they will be able to coordinate entire workflows, orchestrate multistep tasks across software environments, and even collaborate with other AI agents. See e.g. Bob Ambrogi, Thomson Reuters Teases Upcoming Release of Agentic CoCounsel AI for Legal, Capable of Complex Workflows (LawSites, 6/2/25) (agentic workflows will be released in Summer 2025 for document drafting, employment policy generation, deposition analysis, and compliance risk assessments). Ambrogi explains that: “Unlike traditional AI assistants that require specific prompts for each task, agentic systems can understand broader objectives and determine the necessary steps to achieve them.”

This article explores what this agentic shift means for the practice of law. We’ll define agentic AI and how it differs from traditional systems, map its current and emerging capabilities, and examine its implications for ethics, professional responsibility, and legal education. There is no way to avoid it. Yes, there will be AI agents for lawyers, but will they have Powers of Attorney? And if so, what will they say?

Click here to see YouTube animation.

What Is Agentic AI?

Agentic AI refers to artificial intelligence systems that are not only reactive, but autonomous. That is, they can initiate action, pursue defined goals over time, and adaptively respond to feedback and environmental changes. See e.g., Erik Pounds, What Is Agentic AI? (NVIDIA, 10/22/24) (“Agentic AI uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems”). While definitions vary, key characteristics typically include:

  1. Assess the task: Determine what needs to be done and gather relevant data to understand the context.
  2. Plan the task: Break it into steps, gather necessary information, analyze the data to decide the best course of action, and select the necessary software tools (e.g., web search, code execution, document editing).
  3. Execute the task: Use knowledge and the selected tools to complete the task, such as providing information or initiating an action, delegating subtasks to other AI agents or systems as needed.
  4. Learn from the task: Improve future performance. This requires memory, deep analysis, and feedback-driven adjustments.

Erik Pounds of NVIDIA put it this way: agentic AI uses a four-step process for problem-solving:

  • Perceive: AI agents gather and process data from various sources, such as sensors, databases and digital interfaces.
  • Reason: A large language model acts as the orchestrator, or reasoning engine, that understands tasks, generates solutions and coordinates specialized models for specific functions like content creation, visual processing or recommendation systems.
  • Act: By integrating with external tools and software via application programming interfaces, agentic AI can quickly execute tasks based on the plans it has formulated.
  • Learn: Agentic AI continuously improves through a feedback loop, or “data flywheel,” where the data generated from its interactions is fed into the system to enhance models.

These features contrast sharply with large language models (LLMs) like GPT-4o in their default form, which excel at generating text but generally lack long-term memory, persistent goals, or execution capability unless scaffolded or paired with external software.

Agentic systems blend language understanding with process execution. In doing so, they bridge the gap between reasoning and action—the cognitive and the operational.
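
To make the assess-plan-execute-learn cycle concrete, here is a minimal sketch in Python. It is illustrative only, not any vendor’s actual framework: the call_llm() stub stands in for whatever chat-model API you use, and the two legal “tools” are hypothetical placeholders.

```python
# Minimal agent loop sketch (hypothetical; not any vendor's actual framework or API).

def call_llm(prompt: str) -> str:
    """Stub for a chat-model call; returns a canned plan so the sketch runs as-is."""
    return "FINISH: (a real model's answer would appear here)"

# Simple "tools" the agent may choose from (stubs for real integrations).
def search_caselaw(query: str) -> str:
    return f"[top cases for: {query}]"

def draft_document(instructions: str) -> str:
    return f"[draft based on: {instructions}]"

TOOLS = {"search_caselaw": search_caselaw, "draft_document": draft_document}
MEMORY: list[str] = []  # naive long-term memory: notes carried between runs

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}\nLessons from earlier runs: {MEMORY}"
    result = "Stopped: step limit reached; human review needed."
    for _ in range(max_steps):
        # Assess and plan: ask the model for the next step, or for a final answer.
        plan = call_llm(
            f"{context}\nReply as 'tool_name: input' or 'FINISH: final answer'."
        )
        action, _, arg = plan.partition(":")
        if action.strip() == "FINISH":
            result = arg.strip()  # the agent decided it is done
            break
        # Execute: run the chosen tool and feed the observation back into context.
        tool = TOOLS.get(action.strip())
        observation = tool(arg.strip()) if tool else f"Unknown tool: {action!r}"
        context += f"\nStep: {plan}\nObservation: {observation}"
    # Learn: store a short note so later runs can improve (a crude feedback loop).
    MEMORY.append(f"Goal '{goal}' ended with: {result[:80]}")
    return result

print(run_agent("Summarize new e-discovery sanctions cases from the last year"))
```

In a real deployment the tools would be actual integrations (research databases, document management, calendaring), the memory would be a persistent store, and a lawyer would review the result, but the four-step shape stays the same.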

Click here to see YouTube animation.

Timeline: From Legal Assistants to Legal Agents

The evolution of AI in law can be divided into distinct eras:

  • Pre-2010: Tools were rule-based and largely static (e.g., Westlaw, Lexis).
  • 2010–2020: Predictive coding and analytics began to supplement document review.
  • 2022: Generative AI became usable in practice with GPT-3.5 and early ChatGPT.
  • 2024–2025: Agentic systems like AutoGPT and CoCounsel Core began performing autonomous, multi-step tasks.

The agentic systems are coming to law this summer as we see in Ambrogi’s report on Thomson Reuters. This is just what Sam Altman predicted in his January 2025 Reflections essay: “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” It has already come to another major corporation, Salesforce, which has developed its own agentic software wrappers. Silvio Savarese, The Agentic AI Era: After the Dawn, Here’s What to Expect (360 Blog, 1/7/25). Salesforce recently surveyed 200 global HR leaders and reports:

HR leaders plan to redeploy nearly a quarter of their workforce in the near future as AI agents — which are capable of resolving complex issues independently — take on more routine tasks. As a result, HR leaders expect a productivity boost of 30% per employee. . . .

“We’re in the midst of a once-in-a-lifetime transformation of work with digital labor that is unlocking new levels of productivity, autonomy, and agency at a speed never before thought possible,” Scardino (CEO) said. “Every employee will need to learn new human, agent, and business skills to thrive in the digital labor revolution.”

AI’s Human Impact: How Agentic Technology Is Reshaping Work (SalesForce, 5/29/25).

This rapid shift has compressed decades of change into a few years, catching many firms off-guard. The advent now of agentic functions will soon heighten the impact of AI technology to tidal wave force levels. Hopefully, it will leave us all smiling, more productive, yet still in control. Time will tell what is on the other side of the tsunami.

Click here to see YouTube animation.

Case Studies

Law firms are now experimenting with AI agents that perform iterative research, validate case law, and compile arguments. For example, Emily Colbert, senior vice president of CoCounsel, is quoted by Ambrogi in his article as saying:

With our agentic guided workflows, we go from just one single-shot task, answering one question, to actually getting to a work output.

Thomson Reuters Teases Upcoming Release of Agentic CoCounsel AI for Legal, Capable of Complex Workflows.

According to Ambrogi’s excellent article, Colbert showed how lawyers will be able to initiate document creation processes — such as drafting demand letters or employment policies — through structured workflows. Id. Colbert estimates this will reduce the time to review documents or to draft and review contracts by as much as 63%, while reducing legal know-how tasks by 10%.

It is important to understand that the know-how tasks are typically an attorney’s bread and butter, work that justifies higher rates. The document review and related work, where they predict 63% less human time, is typically billed at lower rates and is often tedious and boring.

Click here to see YouTube animation.

Ethical Implications: Competence and Supervision

The American Bar Association’s Formal Opinion 512 (2024) emphasizes that lawyers must supervise AI as they would junior associates. Delegation does not equal abdication.

Key duties include:

  • Competence in use and supervision.
  • Ensuring AI output is reviewed before submission.
  • Protecting client confidentiality when using cloud-based agents.
  • Explaining to clients when AI is used on their matter.

Supervision must evolve into system-level governance.

Risks: Hallucinations, Bias, and Autonomy Drift

Autonomous systems present new legal hazards:

  • Hallucinations: AI can fabricate cases or statutes.
  • Bias: Prejudices in training data may impact legal outcomes.
  • Autonomy drift: Agents may exceed intended scope unless constrained.

Mitigation strategies include “constitutional AI” (value-aligned training), feedback loops, and multi-agent critique systems. See: Shomit Ghose, The Next “Next Big Thing”: Agentic AI’s Opportunities and Risks (UC Berkeley Engineering, 12/19/24). This article provides a good overview of agentic AI and then discusses Agentic Vulnerabilities, including hallucination, adversarial attack,  misalignment with human values, and, get this, scheming (yes, tests have revealed that AIs can sometimes be sneaky and hide things they are doing from humans). Also see: The rise of ‘AI agents’: What they are and how to manage the risks (World Economic Forum, 12/16/24).

Click here for YouTube animation.

A recent article in the Harvard Business Review provides interesting recommendations on the problem. Blair Levin and Larry Downes, Can AI Agents Be Trusted? (Harvard Business Review, 5/26/25). Levin and Downes argue that personal AI agents should be treated as fiduciaries and held to legal and ethical standards that prioritize the user’s interests. (Of course, what if the user is Putin?) The article recommends a three-pronged approach:

1. create legal frameworks that establish fiduciary duty,
2. encourage market-based enforcement through tools like insurance and agent-monitoring services, and
3. design agents to keep sensitive data and decisions local to user devices. Without clear oversight, users may hesitate to delegate meaningful authority—potentially stalling one of AI’s most promising use cases.

Levin and Downes, Can AI Agents Be Trusted? (Harvard Business Review, 5/26/25).

Governance Gaps: Law Lags Far Behind

As agentic systems enter the courtroom and back-office, regulatory bodies lag. The excellent article by Kevin Liu and Omer Tene, The Rise of Agentic AI: From Conversation to Action (JDSupra, 5/19/25), points out five key legal risks in AI agent development.

  1. Transparency and Explainability
  2. Bias and Discrimination
  3. Privacy and Data Security
  4. Accountability and Agency
  5. Agent-Agent Interactions

The first three are well known and not unique to agentic AI, but the last two are new. Accountability and agency pertain to liability for mistakes. Agent-agent interactions muddy responsibility even further when multiple agents become involved. This will be a whole new field of tort law and contracts. Who is responsible for the negligent act? Did the agents create a contract?

Liu and Tene point out there is some law in the contract area: the Uniform Electronic Transactions Act (UETA), adopted in all US states. It defines an “electronic agent” as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.”

Still, they admit this law is totally inadequate because UETA was designed for “relatively simple automated systems, not sophisticated AI agents that make complex judgment calls based on perceived preferences.” They also point out that not even the EU regulations mention agentic systems. But see: Katalin Horváth and Anna Horváth, Meeting the challenge of agentic AI and the EU AI Act (Grip, 5/9/25). It goes without saying there are no U.S. regulations.

Until regulatory rules are enacted, or rules developed over years by case law, firms must self-regulate using internal policies and best guesses as to reasonable precautions. Do you really want to give your AI agent the electronic keys to your Tesla? Your 401K Plan? What will the powers of attorney look like?

Here is the good advice provided by Liu and Tene’s article:

As organizations deploy agentic systems, they’ll need to develop frameworks for appropriate oversight, clarify legal responsibilities, and establish boundaries for autonomous action. Users will need transparent information about what actions AI agents can take on their behalf and what data they can access. And developers will need to implement cybersecurity measures to prevent cascading system failures affecting various layers of multi agent ecosystems. 

The Rise of Agentic AI: From Conversation to Action

Click here for YouTube animation.

Outline of an Integration Playbook: Building Agentic Workflows

One thing we know is that firms ready to adopt agentic AI should follow a phased integration:

  1. Phase 1: Pilot prompt-based tools on low-risk internal tasks.
  2. Phase 2: Build or license custom GPT agents for defined workflows.
  3. Phase 3: Integrate agents via APIs and automation platforms.

Leadership buy-in, cross-disciplinary training, and iterative safety checks are essential for success in any complex technology project. Beyond that, it is too early for meaningful details.

Practical How-To Advice: Getting Started with ChatGPT Legal Tools

Agentic AI starts with foundational tools. Most lawyers first encounter AI through systems like ChatGPT, Claude, or Gemini. These are generative LLMs that simulate reasoning via language. Learning to use them effectively is the gateway to deeper AI integration. Master the basics of this AI before going on to add agent functions.

Beginner Level: Foundations and First Prompts

If you’re new to ChatGPT or similar tools, the goal is to become comfortable with the basic interface and prompt structure. Start by asking things like this:

  1. What is the statute of limitations for breach of contract in [your state]?
  2. Summarize this 3-page memo in one paragraph (upload or paste the memo).
  3. Draft a client-friendly explanation of [legal concept].

Use the AI for brainstorming, summarizing, and rephrasing tasks—not high-stakes analysis. Always verify its output. The AIs are still capable of hallucinating case law and other things, especially when responding to newbie prompts.

Click here to see an AI hallucinate. YouTube video of course.

Tips for Beginners:

  1. Use simple, clear prompts.
  2. Stick to low-risk tasks.
  3. Try rephrasing a single question three ways to see how results vary.
  4. Always fact-check.

Intermediate Level: Research and Drafting with Guardrails

Once familiar with the basics, explore more structured legal work. Use AI to:

  1. Generate first drafts of briefs, contracts, or letters.
  2. Identify key arguments from opposing counsel’s filing.
  3. Research cases on narrow points of law (use with reliable databases).

Incorporate retrieval-augmented generation (RAG) by uploading reference texts or pointing the model to primary sources. Use legal-specific tools like CoCounsel or Lexis+ AI to integrate structured legal databases.
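
For the curious, here is a minimal sketch of what RAG amounts to under the hood: pull the most relevant passages from your own reference documents and paste them into the prompt, so the model answers from your sources rather than its general memory. The keyword-overlap retriever and the call_llm() stub below are simplifications assumed for illustration; commercial legal tools use semantic (embedding-based) retrieval over much larger collections.

```python
# Minimal RAG sketch (illustrative only; real tools use embedding-based retrieval).

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-model API call."""
    return "(model answer, grounded in the excerpts above, would appear here)"

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Naive retriever: rank documents by keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        documents.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"{name}: {text}" for name, text in ranked[:top_k]]

def answer_with_rag(question: str, documents: dict[str, str]) -> str:
    """Assemble a prompt that forces the model to answer only from retrieved excerpts."""
    excerpts = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the excerpts below and cite the source name. "
        "If the excerpts do not answer the question, say so.\n\n"
        f"Excerpts:\n{excerpts}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

# Example: ground the model in firm documents instead of its general training data.
docs = {
    "Engagement Letter": "Fees are billed monthly; fee disputes go to arbitration.",
    "Policy Memo": "Remote work requires written supervisor approval.",
}
print(answer_with_rag("How are fee disputes handled?", docs))
```

The design point is that the authoritative text travels with the question, which narrows the model’s answer to your sources and makes cite-checking easier.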

Tips for Intermediate Users:

  1. Add context or reference documents to your prompts.
  2. Provide role instructions (e.g., “Act as a legal writing coach”).
  3. Review citations carefully for accuracy.

Advanced Level: Designing Agentic Workflows

At the advanced level you are ready to begin using agentic AI workflows. You will be building systems, not just using them. This involves chaining prompts, defining agent behavior, and leveraging custom GPTs or multi-agent architectures.

Applications include:

  1. AI agents that handle document review, flag exceptions, and generate summaries.
  2. AutoGPT-type models that plan a legal strategy, delegate research to sub-agents, and present findings (a toy sketch of this delegation pattern follows this list).
  3. Integrations with tools like Zapier or APIs that connect GPT outputs with databases, calendars, or case management software.
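
As a hedged illustration of the delegation idea in item 2 above, here is a toy orchestrator pattern in Python. The sub-agent roles and the call_llm() stub are hypothetical and not drawn from any real product; the point is the shape: an orchestrator splits the goal into subtasks, hands each to a specialized agent, and a human reviews the combined output before anything leaves the firm.

```python
# Toy multi-agent delegation sketch (hypothetical roles; not any real framework's API).

def call_llm(role: str, task: str) -> str:
    """Stand-in for a chat-model call that prepends a role instruction to the task."""
    return f"[{role} -> output for task: {task}]"

SUB_AGENTS = {
    "research": "Act as a legal research assistant. Find controlling authority.",
    "drafting": "Act as a drafting assistant. Produce a first-draft outline.",
    "cite_check": "Act as a cite-checker. Flag any authority you cannot verify.",
}

def orchestrate(goal: str) -> dict[str, str]:
    """Orchestrator: split the goal into subtasks and delegate each to a sub-agent."""
    subtasks = {
        "research": f"Identify key authorities relevant to: {goal}",
        "drafting": f"Outline the argument for: {goal}",
        "cite_check": f"List every citation used for: {goal}, marking any unverified ones",
    }
    results = {name: call_llm(SUB_AGENTS[name], task) for name, task in subtasks.items()}
    # A human lawyer stays in the loop before anything is filed or sent.
    results["human_review"] = "REQUIRED: attorney review before filing or sending."
    return results

for step, output in orchestrate("motion to compel production of text messages").items():
    print(f"{step}: {output}")
```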

Tips for Advanced Users:

  1. Customize your GPTs with system prompts, tools, and memory settings.
  2. Use multiple agents to divide complex legal workflows.
  3. Maintain human oversight at every key decision point; keep humans in the loop to supervise. Remember the risks discussed above.

These practical steps can bring the promise of agentic AI into real-world practice, letting lawyers augment their capabilities with precision and control. It is also a good idea to retain an experienced professional or company to assist in this work. (No, I’m not available.)

Click to see the YouTube of a good consultant at work.

The Future of Legal Education: Teaching Agents, Not Just Tools

Agentic AI necessitates a shift in legal education. Law schools and CLE providers must go beyond teaching AI tools as static apps and begin instructing students and practitioners on designing and supervising intelligent systems. Curricula should include pretty much everything covered in this article, with the links as starting homework. The key elements of a law school course, or lengthy CLE, would include:

  1. AI ethics and law, including accountability frameworks.
  2. Prompt engineering and agent design.
  3. Simulation-based training with agentic systems in mock cases.

This shift to hands-on use of AI in law school parallels the rise of clinical legal education decades ago. Training future lawyers to work with AI as collaborators is now as essential as teaching legal writing, civil procedure and professional ethics. Maybe AI use will improve all of these key fields, including ethics. The scales of justice seem shaken now, some say broken. Maybe AI in skilled and honest hands can help restore the balance.

Click to see Losey’s image animation video on YouTube.

Here are a few AI educational tips gaining popularity today:

  1. Teachers should use interpersonal questions; for instance, the tried and proven Socratic method. Ask students questions.
  2. In-person oral exams are also a tradition worth preserving. Defend your thesis!
  3. The old way of teaching by human instructor lecturing is often boring. Avoid it when you can. AI tutors are better at lecturing because they can be personalized to each student’s level and have no time limits.
  4. Require papers that are generated by AIs. (I’d love to see, for instance, what the students and their AIs come up with on the powers of attorney question.) Then do things like require students to list the errors they detected in first drafts, which they corrected in the paper submitted. Learn and teach hybrid multimodal methods. I talk about this all of the time in my blog.
10 second YouTube video of hybrid AI/Human working together. One AI is built into the chair! The AI thought of this, not Losey.

Conclusion: The Duty to Lead Responsibly

Agentic AI will not wait for the legal profession to catch up. As these systems evolve from reactive tools to proactive partners, lawyers face a fork in the road: remain reactive, or lead the transformation.

Choose to be proactive. Use these technologies not just to improve productivity, but to ensure access to justice, reduce legal errors, and preserve ethical practice in the face of complexity. That means clear standards, transparent oversight, and above all, courage to shape the tools that will soon shape us. There is a lot to do, especially for the younger generations.

10 sec. animation by Losey on YouTube

The last words go, as usual, to the Gemini AI podcasters that chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: From Prompters to Partners: The Rise of Agentic AI in Law and Professional Practice. Hear two fake podcasters talk about this article for 24 minutes. They wrote the podcast, not me.

Ralph Losey Copyright 2025. — All Rights Reserved.


Power Meets Platform: Legal Lessons from the Trump–Musk Dispute

June 9, 2025

By Ralph Losey. June 9, 2025.

Disclaimer & Purpose: This article is offered for educational discussion only. No endorsement or disparagement of any individual is intended. The goal is to illuminate emerging points of law where public authority meets private techno‑sovereignty.

Visual Allegory: Imagine Trump‑Kong squaring off against Musk‑Godzilla atop a smoldering volcano. The image is meant in respectful fun—an allegory for colossal forces testing modern legal frameworks.

All images and videos in this article are by Ralph Losey using AI tools. Like most techies, Kong and Godzilla are two of Losey’s favorite superheroes.

Why This Dispute Matters

The public sparring between President Donald J. Trump and entrepreneur Elon Musk is more than celebrity drama. It exposes structural tension between public authority and the private platforms that now shape global infrastructure and discourse. Their quarrel touches seven legal fault lines every lawyer, policymaker, and technologist should watch. These will be described here as we observe a new kind of chess game unfolding between two grand masters.

A Strategic Power Play

What began as a political bromance in 2017 evolved into open conflict after a series of public barbs. Musk criticized trade and climate policies; Trump hinted at cutting lucrative launch contracts. The clash fuels partisan passions—but behind the spectacle lies a constitutional stress‑test played out on social media amplified by AI.

Beyond Ego: A New Battle Over Sovereignty

When a single private actor commands satellites, rockets, electric grids, AI, and a megaphone reaching hundreds of millions, the traditional checks on concentrated power blur. Our legal system—built for railroads and rotary phones—must redraw the lines between public interest and private empire.

Musk as Archetype: The Sovereign Technologist

Musk’s vertical integration—rockets, satellites, cars, AI labs—embodies a modern platform sovereign. As The Guardian observed, “Handing the keys of planetary infrastructure to a handful of billionaires is a dangerous gamble.” Nick Robins-Early, The Trump-Musk feud shows danger of handing the keys of power to one person (6/7/25). Yet Musk’s innovations also slash launch costs and accelerate EV adoption, illustrating the dual edge of private leadership.

Seven Legal Lessons

1 — Privatized Infrastructure & National Dependence

Starlink’s frontline use in Ukraine showed the upside of commercial networks—but Musk’s hint he could “turn it off” awakened Congress to a single‑point vulnerability. Redundancy mandates under the Defense Production Act and competitive‑procurement clauses are gathering bipartisan support.

2 — Blurred Lines: Public Roles & Private Gain

Federal ethics laws, like 18 U.S.C. § 208, prevent officials from acting on matters affecting their financial interests. Musk’s simultaneous role as SpaceX CEO and unpaid federal adviser on space policy stretched that framework. Stronger recusal and disclosure standards are under debate.

3 — Retaliatory Contract Cancellation & the First Amendment

Government may not cancel contracts to silence speech (see Board of Comm’rs, Wabaunsee Cty. v. Umbehr, 518 U.S. 668 (1996)). Allegations that Trump weighed launch budgets against Musk’s criticism raise viewpoint‑retaliation red flags. Peter Baker, Trump’s Feud With Musk Highlights His View of Government Power: It’s Personal (NYT Opinion, 6/8/25).

4 — Private Forums, Public Impact

In Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton, 603 U.S. 707 (2024), the Supreme Court affirmed that private platforms have editorial discretion protected by the First Amendment, and that states cannot compel them to host speech they would prefer to exclude. Also see: Trump v. Twitter, Inc., 602 F. Supp. 3d 1213 (N.D. Cal. 5/6/22) (Twitter is a private entity, not governmental, and so President Trump’s First Amendment rights were not violated when he was banned).

5 — Section 230 Reform: Scalpel, Not Sledgehammer

Critics say platforms should lose Section 230 safe harbor when algorithms amplify harmful content; defenders call § 230 a backbone of online free expression. Draft bills now focus on narrow carve‑outs for paid promotion or deepfakes rather than full repeal.

6 — Federalism & AI Governance

President Trump’s call for a 10‑year moratorium on state AI laws collided with Musk’s plea for agile regulation. A layered approach—baseline federal standards plus state innovation zones—may offer balance. See: Anthropic C.E.O. Dario Amodei’s recent opinion essay on the need for some federal regulation, Don’t Let A.I. Companies off the Hook (6/5/25).

7 — Digital Sovereignty as National Security

Allied governments fear U.S. firms hold strategic “kill switches.” Expect growth in data‑localization mandates and consortium models that dilute single‑point control. Understanding European tech sovereignty: Why Europe is taking back control (HiveNet, 3/12/25).

From Spectacle to Structure

Legal systems built for an analog era are stress‑testing against hybrid actors who command code, capital, and charisma. This feud is a teaching case for future statutes that channel private ingenuity without ceding public accountability.

Action Items for the Legal Profession

  • Master AI literacy (prompt engineering, algorithmic auditing).
  • Write redundancy clauses into government‑tech contracts.
  • Advocate balanced § 230 reform instead of blanket repeal.
  • Strengthen public‑private ethics rules.
  • Monitor digital‑sovereignty laws to ensure cross‑border compliance.

Closing Thoughts

This dispute isn’t merely a tale of clashing egos or partisan spectacle—it is a vivid demonstration of legal lag. Democratic institutions engineered for an analog age are now colliding with empires built on code, capital, and charisma.

For the legal profession, the implications are urgent. This moment requires proactive engagement: architecting ethical guardrails for AI, demanding transparency in algorithmic decision‑making, and crafting standards as dynamic and decentralized as the technologies they seek to govern. Prompt engineering must become a core element of legal literacy; AI outputs deserve the same scrutiny we once reserved for contracts and statutes. Sovereignty, once confined to the nation‑state, now resides equally in APIs and datasets.

We need not fear AI—we must govern it. Used wisely, generative systems can illuminate policy fault lines and help safeguard traditional American freedoms. By wielding the gavel of AI, we can forge the next generation of hybrid lawyers—super‑charged with computational insight and grounded in constitutional values.

Click here to see image of making of next gen lawyers. YouTube by Losey.

Ralph Losey Copyright 2025. — All Rights Reserved.


AI Can Improve Great Lawyers—But It Can’t Replace Them

June 2, 2025

Ralph Losey, June 2, 2025.


The rise of legal AI has sparked a familiar fear: that our hard-won expertise might be absorbed into machines. That lawyers will be off-loaded—our reasoning encoded, commodified, and reduced to prompts. That we’ll be sidelined into “hand-holding” roles—providing comfort more than cognition, reassurance instead of reasoning. Or worse, that we’ll be replaced altogether.

Click to see video by Losey and his AIs, primarily Sora.

These anxieties are understandable. But they rest on a flawed premise: that all legal judgment can be captured, transferred, and automated. I’ve long disagreed. This article lays out why and how AI, when properly harnessed, can amplify human lawyers rather than erase them.

One recent LinkedIn post by Damien Riehl distilled this anxiety into what he called “the trillion-dollar question”: how firms capture—and try to monetize—lawyerly know-how. His reflection was a catalyst for this piece. But the ideas here reflect a position I’ve long held: AI can assist, but it cannot replicate the highest levels of legal reasoning. The future of the profession depends on knowing the difference and building systems that keep humans in the lead.

Image by Losey using Sora AI.

Introduction

The foundational fear behind AI replacement anxiety is based on a flawed assumption: that all of a lawyer’s judgment can be taught to a machine. Some of it can, no question. But the most valuable parts—the creative, emergent, contextual decisions—cannot. Legal expertise isn’t just about rules. It’s about timing, presence, and insight.

Damien Riehl recently posted on LinkedIn what he calls “the trillion-dollar question,” asking how firms value—and try to bottle—lawyerly know-how. His post included this insightful prompt and infographic:

“The trillion-dollar question: What is human judgment’s monetary value? 🤷 If a firm wants that lawyer’s know-how — to place it in an artifact (e.g., knowledge base), will that lawyer give it up for free (no additional compensation)? Especially if the lawyer is considering jumping ship — to Firm #2? Or will Firm #1 need to pay that lawyer for that scarce resource: know-how? After all, as #Tasks get commoditized (see Agents), then that lawyer’s human expertise/judgment is among the only #value left.”

Image by Damien Riehl.

Damien is right to highlight scarcity. But I would frame it this way: the most valuable legal knowledge isn’t just scarce—it’s uniquely human and inalienable.

When I replied to Damien’s post, I put it simply:

“I don’t think lawyers need to worry. The truly valuable knowledge cannot be ingested by AI.”

A good discussion ensued, pro and con. This article is an elaboration of my position.

Click for Video by Losey of future AI enhanced law office.

The Basic Point of Disagreement

Let me start by saying I agree with Damien—up to a point. Much of what lawyers do can be absorbed and automated. The boring stuff. The repetitive stuff. The kind of tasks that make you question your life choices—zero creativity, minimal judgment required. AI can do that. And should.

I say: good riddance.

I’ve billed for thousands of hours in that rut. But the stuff I stayed for—the creativity, the improvisation, the intuitive leaps—that part cannot be transferred to AI. That’s the scarce resource. The irreplaceable trillion-dollar bit.

Let me put it this way, just off the cuff: the most valuable human knowledge in law, or in any complex domain, cannot be input into AI because it is spontaneously created on an ongoing basis by humans from out of the particular ever-changing space-time configurations. It is ever-changing. AI needs human experts. It must remain a hybrid relationship.

Click to see video by Losey.

Each case, client, judge, and moment is distinct. I’ve handled a thousand legal matters since 1980, and no two have been exactly alike. None had the exact same facts, the same players, the same pressures. There are patterns, yes—but they aren’t circular. They spiral forward. Like DNA, legal reasoning evolves in loops that never exactly repeat. There’s structure, but the content within the structure is dynamic.

The best lawyers are not rote technicians—they are improvisational strategists. We may start with templates, but we do not end there. Only Gilbert-style lawyers (remember those law school simplifications?) are replaceable.

Systematic Three-Part Argument for Irreplaceability

To bring structure to this argument, I called in two of my favorite collaborators: old ChatGPT-4o (Omni) for wordsmithing, and the new state-of-the-art SORA for image generation. I prompted, shaped, edited, and reimagined—then fused their outputs with my own reasoning. That process itself is a case in point: Human-AI collaboration where the human has final control. Sometimes that’s not easy because AI can seem human and be hard to pin down.

What happened to my AI? Images by Losey. Click here to see video.

1. Human Knowledge Is Contextual and Emergent

The most valuable legal knowledge is not reducible to rules. It emerges in context—during a tough deposition, in a late-night drafting session, in the moment you stand to speak, when you think on your feet to respond to the unexpected.

This kind of knowledge:

  • Is not static (like statutes),
  • Is always evolving,
  • And is grounded in specific moments of time, place, and interaction.

AI lacks access to the now. It doesn’t live in our space-time. It doesn’t feel when a witness is off. It cannot read a room, has no empathy, intuition, or gut instinct.

What AI can do is amazing, but that awareness should be balanced by knowledge of its limitations. This is the theme of the 100+ articles I’ve written on generative AI since 2023. See e.g., The Human Edge: How AI Can Assist But Never Replace (01/30/25). The writings are based on my personal use of generative AI since 2023 and my long experience as a lawyer and early user of other new technologies. There are always dangers in new technologies, but the elimination of legal practice is not one of them, not even with AGI or superintelligence.

Our intelligence is unique and dynamic. It arises anew each day like the Sun, emerging from the interplay of perception, emotion, intuition, and inspiration. AI has none of this. It is like the Moon that reflects the Sun’s light. All it can do is think.

Click here to see video by Losey on YouTube.

2. The Epistemological Limits of AI in Law

There is a boundary to what AI can know and understand. AI can simulate knowledge (e.g., predict patterns based on precedent), but it cannot originate genuine understanding or creative insight within a specific, real-world context. It lacks consciousness, lived experience, and embodied awareness—core faculties that human lawyers use to reason through complex legal and ethical challenges.

AI is like a brilliant parrot with a photographic memory—but no inner life.

Click to see Ralph’s full illustration on YouTube.

I’m a strong proponent of AI, but I temper that enthusiasm with something I call ontological humility. “Ontology” deals with the nature of being, what something truly is, not just how it behaves. And that’s the key. AI may look smart and sound persuasive, but it doesn’t know anything. It predicts and correlates; it doesn’t understand or care. Judgment requires presence, experience, and agency, things AI doesn’t possess. That’s why the human must remain in charge.

3. Hybrid Multimodal Systems: A Proven Model for the Future

Because of these limits, I have always advocated for hybrid human-AI systems. Not just in theory, but in method. Starting with e-discovery in 2012, I developed what I called Hybrid Multimodal approaches.

In that context, it meant combining:

  • Predictive coding (active machine learning),
  • Boolean keyword search,
  • Concept search,
  • Linear review,
  • And—most importantly—human creativity and supervision.

Why? Because no two discovery projects are alike. And no one method suffices. The strength comes from the interplay. See the free online course I made to share the hybrid multimodal method of document review. Here is a montage of illustrations for the course prepared by Losey just using PhotoShop.

Now, in the generative AI era, Hybrid Multimodal means something broader and, for me at least, much more interesting:

  • Human plus machine;
  • Text plus image, plus video, plus sound;
  • Prompts plus visualization, plus spontaneous human improvisation in chats; and,
  • Technical, architectural design and training processes, inside the AI itself.

This last listed technical meaning of “multimodal” is new. It isn’t about how users interact with the tools—it’s about how the models are built, how they fuse the different types of data. The new AIs are designed to facilitate multimodality. Users of AI do not need to know how AI integrates multiple data types, but for the insanely curious, see Li and Tang, Multimodal Alignment and Fusion: A Survey (arXiv, Nov. 2024) (overview of how today’s AI systems align and combine text, images, audio, and video to generate richer, more context-aware outputs).

Below are several infographic-style images prepared by ChatGPT-4o based on the text in the last two paragraphs. They visualize what AI hybrid multimodal today means, both in the way users may interact with it and in algorithmic and training work required to fuse the data so that this is possible. These graphics thus both demonstrate this text-image fusion, and, at the same time, explain it.

Even when I already know the answer, if it’s important, I may still ask for AI input. Why? Because it sees patterns across all fields. It’s a polymath. I’m not. I use it to stretch my thinking—to see the connections I might have missed.

I know some small part of the law. AI knows everything else. That’s a powerful partnership; again, so long as we remember who’s in charge. In the future, our robots will carry the bags, just like the starting associates of old.

Click here to make image come alive. AI photo by Losey.

Conclusion

The legal profession is not facing extinction by AI. It’s facing transformation through augmentation. The question is not whether AI will replace lawyers, but which lawyers will harness AI to amplify their judgment, and which will delegate themselves into irrelevance. That is the trillion-dollar question facing all of humanity, not just lawyers.

Just how valuable and unique is the human brain? Image by Losey. Click for his YouTube video.

Law’s highest-value knowledge is not static. It is situational, contextual, and alive. It emerges through human presence, experience, and discretion. No model, no matter how large, can replicate what happens in the mind of a skilled advocate adapting in real time to a novel legal problem.

AI can help us see patterns, accelerate production, and surface insights. But it cannot stand in our shoes. The future of legal excellence lies not in human replacement, but in human-AI synergy. That future is hybrid. That future is multimodal. That future is already here.

Click here for video animation by Losey

Each time you interact with advanced AI the experience is different. Sometimes what happens is surprising and leads to improvements in both quality and productivity. Click the same graphic below for a visual example.

Click to see what happens this time. Video by Losey using Sora AI.

The last words go, as usual, to the Gemini AI podcasters that chat between themselves about the article. I always prompt, and if there are too many mistakes, make them do it again. They can be pretty funny at times and have some good insights, so I think you’ll find it worth your time to listen. Echoes of AI: AI Can Improve Great Lawyers—But It Can’t Replace Them. Hear two fake podcasters talk about this article for 15 minutes. They wrote the podcast, not me.

Ralph Losey Copyright 2025 — All Rights Reserved

