I used Google’s NotebookLM to analyze my last article, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations. I started with the debate feature, where two AIs have a respectful argument about whatever source material you provide, here, my article. The debate turned out very well (see below). The two debating AI personas made some very interesting points. The analysis was good and hallucination-free.
Then, just a few prompts and a half-hour later, Google’s NotebookLM had made a Podcast, a Slide Deck, a Video and a terrific Infographic. NotebookLM can also make expanding mind-maps, reports, quizzes, and even study flash-cards, all based on the source material. It was so easy that it seems only right to make these materials available to readers to use, if they wish, in their own teaching efforts for whatever legal-related group they are in. So please take this blog as a small giveaway.
The back-and-forth argument in this NotebookLM creation lasts 16 minutes, makes you think, and may even help you to talk about these ideas with your colleagues.
I also liked the podcast created by NotebookLM with direction and verification on my part. The AI wrote the words in no time. It seems accurate to me and certainly has no hallucinations. Again, it is a fun listen and comes in at only 12.5 minutes. These AIs are good at both analysis and persuasion.
If that were not enough, the NotebookLM AI also made a 14-slide deck to present the article. The only problem is that it generated a PDF file, not PowerPoint format. Proprietary issues. Still, pretty good content. See below.
AI Video
They also made a video, see below, and click here for the same video on YouTube. It is just under seven minutes and has been verified and approved, except for its discussion of the Park v. Kim case, which it misunderstood and, yes, hallucinated the holding at 1:38-1:44. The Google NotebookLM AI said that the appeal was dismissed due to AI-fabricated cases, whereas, in fact, the appellate court upheld the lower court’s dismissal because of AI-fabricated cases filed in the lower court.
Rereading the article, it is easy to see how Google’s AI made that mistake. Oh, and to prove how carefully I checked the work: the AI misspelled “cross-examined” at 6:48 in the video; it only used one “s”, i.e., “cros-examined” (horrors). If I missed anything else, please let me know. I’m only human.
Except for that error, the movie was excellent, with great graphics and dialogue. I especially liked this illustration of the falling house of cards to show the fragility of AI’s reasoning when it fabricates. I wish I had thought of that image.
Screenshot of one of the images in the video at 4:49
Even though the video was better than I could have created, and took the NotebookLM AI only a minute to create, the mistakes in the video show that we humans still have a role to play. Plus, do not forget, the AI was illustrating and explaining my idea and my article, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations; although, admittedly, another AI, ChatGPT-5.2, helped me to write it.
Anyone is welcome to download and use the slide deck, the article itself, Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations, the audio podcast, the debate, the infographic and the video to help them make a presentation on the use of AI. The permission is limited to educational or edutainment use only. Please do not change the article or audio content. But, as to the fourteen slides, feel free to change them as needed. They seem too wordy to me, but I like the images. If you use the video, serve popcorn; that way you can get folks to show up. It might be fun to challenge your colleagues to detect the small hallucination the video contains. Even if they have read my article, I bet many will still not detect the small error.
Here is the infographic.
Infographic by NotebookLM of my article. Click here to download the full size image.
Ralph Losey Copyright 2025 — All Rights Reserved, except as expressly noted.
Sam rushes to open the Pandora’s Box of AI, runs away at first, then comes back to early success. All images/videos here by Ralph Losey primarily using Sora AI.
When zero becomes one, possibility leaps out of the void, as Peter Thiel champions in his book, Zero to One. But what happens when the entrepreneur is foolish and pushes his scientists to open a Pandora’s box? What happens when the Board goes along and the company recklessly releases a new product without understanding its possible impact? Much good may come, but so may dangers: ancient, perennial evils released again in new forms.
These AI risks echo the archetypes of the past captured in a very old game, one that LLM AIs have been trained on through millions of images and books. As we know from Chess and Go, AI is very good at games. Surprisingly, that turns out to include the so-called “Higher Arcana,” or trump cards, of the Tarot deck. Yes, I was surprised by this too, but fool around with AI and you get used to unexpected, emergent abilities. That’s where the magic happens, from 0>1.
The video at the top of this post says it all: a carefree Fool and an overconfident Magician pry open a black box. First, a shimmer—an angelic hologram of “the perfect AI”—rises like morning mist. Cue the audience applause, the next funding round. Then, almost unseen, darker spirits slip out of the box: bias, deception, mass surveillance, energy waste, unemployment, existential hubris. The Fool, deluded by dreams of the future, never notices the approaching cliff of AI dangers; the Magician, dazzled by his own inventions, forgets contingency plans.
Zero to One
The Fool has his head in the clouds, or, better said today, in his smartphone, and doesn’t notice he is about to step off a cliff. The Magician, dazzled by his own inventions, forgets rational contingency plans and follows the Fool.
The One card, the Magician, emerges out of zero, as shown in the last video. The code of today’s world, 0 and 1, was anticipated in this ancient game, which, unlike all other card decks, starts with a zero card: 0 > 1.
As Thiel points out, it is a great trick to make a totally new product out of nothing, like OpenAI did with generative AI. It can make you billions even if you don’t keep a monopoly. Still, there is no spell strong enough to put the AI back in the box. We do not even know how the latest AI works! The magicians are now using spells even they don’t understand. See the essay The Urgency of Interpretability (April 2025) by Dario Amodei, the CEO of Anthropic and a well-respected scientist:
People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.
Yes, people are concerned. Ready or not, both great positive and negative AI potentials are already changing our world, and doing so using technology we do not understand. The last card of the deck is Twenty-One, The World. What will it be like in twenty years?
What the AIs Are Saying About This New Use of a 500-Year-Old Game
That is where we stand in 2025 with Artificial Intelligence. Zero has flipped to one, possibility to production, code to culture. Generative models now draft pleadings and plagiarize poems; other AIs steer cars and elections, discover drugs, and fold proteins. All are adept at the Devil’s work of engineering new frauds.
Images of a Magician (1) following a Fool (0) have been in Western culture for centuries. There are twenty-two image-only cards in a game invented in northern Italy in 1450, the 78-card “Trionfi” pack, that warn of life’s risks and dangers. The twenty-two images apply today with uncanny power to help us understand the top risks of AI. In my longer, more complete article, Archetypes Over Algorithms: How an Ancient Card Set Clarifies Modern AI Risk, I provide an updated version of the deck, its infographic symbolism and the corresponding AI risks. This much shorter summary article should, I hope, tempt you to look at the 9,500-word text, where I attempt a full Robert Langdon symbologist approach to AI risk assessment.
Why the 22 Card Archetypes Work So Well to Illustrate AI Dangers
Humans learn best through images. Since the 1960s, laboratory work has shown that people recognize and recall pictures far better than plain text—a finding known as the picture‑superiority effect. Roger N. Shepard proved this when subjects viewed 600 photos and later chose the “old” image from a new–old pair with 98 percent accuracy—far above word‑only recall. Shepard, Recognition Memory for Words, Sentences, and Pictures (Journal of Verbal Learning and Verbal Behavior, 1967). A generation later, participants viewed 2,500 photos for just three seconds each and still picked them out the next day with 90‑plus percent accuracy. Brady, Konkle, Alvarez & Oliva, Visual Long-Term Memory Has a Massive Storage Capacity for Object Details (PNAS, 2008). The effect grows stronger when facts ride inside a story: narrative links provide the causal glue our brains prefer. Willingham, Stories Are Easier to Remember (American Educator, Summer 2004). Each archetype card exploits both advantages—pairing a vivid, emotional image with a mini‑story of hubris and consequence—so readers get a double memory boost.
Educational psychology tells us why the image‑plus‑words formula works. Allan Paivio’s dual‑coding theory holds that ideas are stored in two brain systems—verbal and non‑verbal—so learning deepens when both fire together. Paivio, Imagery and Verbal Processes (Holt, Rinehart & Winston, 1971). Richard E. Mayer confirmed this across 200+ experiments: learners given words and graphics consistently out‑performed those given words alone. Mayer, Multimedia Learning (The Psychology of Learning and Motivation, Vol. 41, 2002). Emotion amplifies the effect: negative or threat‑related images sharpen detail in memory. Kensinger, Garoff‑Eaton & Schacter, How Negative Emotion Enhances the Visual Specificity of a Memory (Journal of Cognitive Neuroscience 19(11): 1872–1887, 2007).
The 22‑card framework applies these findings directly. Every AI danger is stated verbally and pictured visually, engaging dual channels at once. Many symbols—the lightning‑struck tower, the Devil, Death, and the Hanged Man—also trigger a negative emotional jolt that locks the lesson in long‑term storage. We see, we feel, we weave a quick story—and we remember the risk long after the slide deck closes.
If the previous section explained why pictures plus stories stick, Card 0 lets us watch the theory in action. The Fool isn’t malevolent; he’s just over‑amped—eyes on the next funding round, feet edging off the cliff. In today’s AI economy that mindset sounds like: “Ship the model, we’ll fix it in post.” The slogan propelled ChatGPT, Midjourney, DeepMind, and a host of start‑ups now thirsting for billions. But the fine print on safety, oversight, and emergent behavior often disappears into the click‑wrap haze of every beta‑test agreement.
Three headlines show how quickly enthusiasm can turn into evidence exhibits:
2018 — Autonomous Vehicles. What happened: Uber’s self‑driving SUV struck and killed pedestrian Elaine Herzberg when prototype sensors were tuned down to avoid “false positives.” Why it matters: early deployment shaved off the safety margin. National Transportation Safety Board, “Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018,” Highway Accident Report NTSB/HAR‑19/03 (2019).
2023 — Hallucinating LLMs. What happened: in Mata v. Avianca, lawyers filed six fictional case citations generated by ChatGPT and were sanctioned by the court. Why it matters: blind trust in a shiny tool became a Rule 11 violation. Hon. P. Kevin Castel, “Opinion and Order on Sanctions, Mata v. Avianca, No. 22‑CV‑1461” (S.D.N.Y. June 22, 2023).
2023 — Deepfake Market Shock. What happened: a fake AI image of an explosion near the Pentagon flashed across verified Twitter accounts, erasing an estimated $136 billion in market value for eight jittery minutes. Why it matters: one synthetic photo moved global equities before humans could fact‑check. Brayden Lindrea, “AI‑Generated Image of Pentagon Explosion Causes Stock‑Market Stutter,” Cointelegraph (May 23, 2023).
The Fool card’s cliff‑edge snapshot captures all three events in a single glance: unchecked optimism, missing guardrails, sudden fall. By pairing that image with these real‑world mini‑stories, we embed the lesson on reckless acceleration where it belongs—in both memory channels—before marching on to Card 1.
Arthur C. Clarke warned that advanced tech is indistinguishable from magic; he omitted that magicians are historically bad at risk assessment. Our modern conjurers—Hinton, LeCun, Huang—wield models fatter than medieval libraries yet confess alignment remains “an unsolved homework problem.” Investors, chasing trillion-dollar valuations, wave them onward. A thousand researchers begged for a six-month pause on “giant AI experiments.” The world hit unpause within six days. I had to agree with them. The race is on.
Musk, Altman, Page—all coax the Magician forward with different stage props but the same plot: summon the infinite, promise alignment “soon,” dismiss regulators as moat builders. Meanwhile, test sandbox budgets shrivel and red-team head-counts lag far behind parameter counts. The show, as always, must go on. The tale of Zero and One, the Fool and the Magician, repeats itself, from the Renaissance into today’s strange world.
Pandora’s Original Contract—And the Clause We Forgot
In Greek myth, Pandora, the first woman, was gifted by Zeus with curiosity and a box that came with a warning: “Do not open.” Naturally, like Eve and her apple, Pandora opened the box. That released all of the evils of the world, but also Hope, which remained by her side. Today, we receive a new gift – artificial intelligence – that also comes with a warning: “May produce inaccurate information.” Still, we open the box and hope for the best.
Pandora’s Ledger: Identifying the AI Dangers
It turns out that AI can do a lot worse than produce inaccurate information. Here is a chart that summarizes the top 22 dangers of AI. Again, readers are directed to the full article for the details, including multiple examples of damages already caused by these dangers: Archetypes Over Algorithms: How an Ancient Card Set Clarifies Modern AI Risk.
Card – AI Danger: Risk Summary
0 Fool – Reckless Innovation: Launching AI systems without adequate testing or oversight, prioritizing speed over safety.
1 Magician – AI Takeover (AGI Singularity): A super‑intelligent AGI surpasses human control, potentially dictating outcomes beyond human interests.
2 Priestess – Black Box AI (Opacity): The system’s internal decision‑making is inscrutable, preventing accountability or error correction.
3 Empress – Environmental Damage: AI’s compute‑hungry infrastructure drives significant carbon emissions and e‑waste.
4 Emperor – Mass Surveillance: Pervasive AI‑driven monitoring erodes civil liberties and chills free expression.
5 Hierophant – Lack of AI Ethics: Developers ignore ethical frameworks, embedding harmful values or goals into deployed models.
6 Lovers – Emotional Manipulation by AI: AI exploits human psychology to steer opinions, purchases, or behaviors against users’ interests.
7 Chariot – Loss of Human Control (Autonomous Weapons): Weaponized AI systems make lethal decisions without meaningful human oversight.
8 Strength – Loss of Human Skills: Over‑reliance on AI degrades critical thinking and professional expertise over time.
9 Hermit – Social Isolation: AI substitutes for human interaction, deepening loneliness and weakening community bonds.
10 Wheel of Fortune – Economic Chaos: Rapid automation reshapes markets and labor structures faster than institutions can adapt.
11 Justice – AI Bias in Decision-Making: Algorithmic outputs perpetuate or amplify societal inequities in hiring, lending, policing, and more.
12 Hanged Man – Loss of Human Judgment and Initiative: Lazy deference to AI recommendations instead of active, hybrid joint efforts under human supervision.
13 Death – Human Purpose Crisis: Widespread automation triggers existential anxiety about meaning and societal roles.
14 Temperance – Unemployment: AI-driven automation displaces large segments of the workforce without adequate reskilling pathways.
15 Devil – Privacy Sell-Out: AI systems monetize personal data, eroding individual privacy rights.
16 Tower – Bias-Driven Collapse: Systemic biases compound across AI ecosystems, leading to cascading institutional failures.
17 Star – Loss of Human Creativity: Generative AI crowds out original human expression and discourages creative risk‑taking.
18 Moon – Deception (Deepfakes): Hyper‑realistic synthetic media erode trust in ads and audio‑visual evidence, including trust in news and elections.
19 Sun – Black Box Transparency Problems: Even when disclosures are attempted, technical complexity prevents meaningful transparency.
20 Judgment – Lack of Regulation: Policy lags leave harmful AI applications largely unchecked and unaccountable.
21 World – Unintended Consequences: AI actions can yield unforeseen harms (and goods) that emerge only after deployment.
Ralph’s video interpretation of archetype 12, the lazy Loss of Human Judgment and Initiative, using Sora AI and an Op Art style. Click the enlarge symbol in the lower right corner for full effect. Audio entirely by Ralph, no AI. His favorite Sora video so far.
Here is the AI-revised Tarot Card Twelve, XII, the Hanged Man:
Polls Show People Feel the Dangers of AI
Pew Research Center found a fifteen-point jump (2021-23) in Americans who were more concerned than excited about AI; the jump was steeper among parents (deepfakes) and truckers (job loss). In a more recent (2025) survey, Pew found a big divide between the opinions of AI experts and everyone else. The experts were far more optimistic than the general public about the positive impact of AI in the next twenty years (56% vs. 17%). The general population was more concerned than excited about the future (51% vs. 15%).
Both groups agree about one thing: only about 9% see a positive role for AI in news and elections (Deception fear, #18). Interestingly, both the public and experts want more control and regulation of AI (55% and 57%) (No Regulation fear, #20).
Short List: Why Deeply‑Rooted Symbols Are More Effective than White Papers
Picture‑Superiority in Action. A lightning‑struck tower brands itself on memory, whereas “§ 501(c)(3)” cites are gone by lunch. Images ride the picture‑superiority effect you just met in Shepard and Brady, so the lesson sticks without flashcards.
Dual‑Coding Boost. Each card pairs a vivid scene with a single line of text—exactly the word‑plus‑image recipe Paivio and Mayer showed doubles recall. Two cognitive channels, one glance.
Machine‑Human Rosetta Stone. LLMs have already ingested these archetypes via Common Crawl and museum corpora; mirroring them aligns our mental priors with the model’s statistical priors—teaching faster than a logic tree (unless you’re a Vulcan).
Cross‑Disciplinary Esperanto. Whether you’re coding, litigating, or pitching Series B, the visual metaphor translates; mixed teams discuss risk without tripping over field‑specific jargon.
Instant Ethical Gut‑Check. Pictures bypass prefrontal rationalizing and ping the limbic system first, so viewers feel the stakes before the inner lawyer starts red‑lining—often the spark that moves a project from “noted” to “actioned.”
Narrative Compression. Four hundred pages of risk taxonomy collapse into 22 snapshots that unfold as a built‑in story arc (naïve leap ➜ hard‑won wisdom). Revisit the deck months later and context reloads in seconds—no need to excavate a 60‑page memo.
Remember: these cards aren’t prophecy; they’re road signs. They don’t foretell a wreck; they keep you from missing the curve. Each has a cautionary story to tell.
Card 14 — Temperance: What We Can Do Before the Cliff
Temperance warns of over-indulgence and counsels caution and moderation. Retool workers and temper greed for quick profits.
None of the 22 AI risk warning cards is a silver bullet, but they can caution you to strap iron buckles on the Fool’s boots or add timed fuses to the Magician’s lab. In short, they give us what Pandora never had: advice on how to control the new dangers that curiosity let loose.
Here is a summary of tactical safeguards for any AI program.
Red‑Team Testing: Let hackers, domain experts, and ethicists stress‑test the model before launch. Ralph’s favorite.
Hard Kill‑Switches: Hardware‑ or API‑level “panic buttons” that halt inference when anomaly thresholds trip.
Post‑Deployment Drift Monitors: Always‑on metrics that flag bias creep, performance decay, or emergent behavior.
Explainability & Audit Trails: Any AI touching liberty or livelihood must keep a tamper‑proof log—discoverable in court.
Sunset & Retrain Clauses: Contract triggers to archive or retrain models after X months, Y incidents, or Z regulation changes.
Skill‑Retention Drills: Pilots land with autopilot off monthly; radiologists read raw scans quarterly—humans stay sharp.
User‑Literacy Bootcamps: Short, mandatory training makes passive consumers into copilots who notice when autopilot yaws off course.
Carbon & Water Accounting: Lifecycle‑emission clauses—and eventually regulation—anchor AI growth to climate reality.
Reality Watermarks: Cryptographically sign authentic media to daylight deepfakes (a short code sketch follows this list).
Regulatory Sandboxes: Innovate inside monitored fences; feed lessons back into standards boards.
A deck of time‑tested symbols speeds up teaching, understanding and recall of AI dangers. One picture can outrun a thousand‑word white paper—especially when that image leverages the proven picture‑superiority and dual‑coding effects. So learn the 22 images, share them, and talk through them with colleagues. You’ll remember the “blindfolded judge” or “lonely hermit” long after a dense statute citation fades. And yes, pick a family safe‑word now—never stored online—to foil a deepfake scam by a Devil.
Closing Statement (and Your Invitation)
Foreknowledge is half the battle; the other half is acting before the dangers spread. We already have the pictures, analysis and some examples—what you need now is a group exercise to help your team start prepping. You’re invited to start your practice with a three-step, five‑minute stress‑test.
Choose a Relevant Archetype – Pick a card.
As a team leader (or army of one) pick an archetype—Reckless Innovation, Black‑Box Opacity, Loss of Human Skills—and run the three quick questions below on the next AI system you deploy or advise.
For example, are you still in a reckless new-product start? Has the team just started using AI? Then pick The Fool.
Or are you past that and now starting to face the dangers of over-confident first use? Pick The Magician.
Not sure which card to pick? Throw it out to the whole team and, after discussion, make the call.
If you are not afraid of the Hanged Man risk, you could ask AI to make suggestions; then you pick.
If all else fails, or just to try a different (yet ancient) approach, just pick a card at random.
Run Three Rapid‑Fire Questions (about 5 minutes) on the Card Picked.
Impact check: “If this risk materializes, who or what gets hurt first?”
Control check: “What guardrails or kill‑switches exist or should be set up? How are they working? How should we change them to be more effective?”
Signal check: “What early‑warning metric would tell us the risk is emerging or growing too fast?” “Has anyone red-teamed this yet?”
Score the answers—green, yellow, or red (a small scoring sketch in code follows this section).
Green: Satisfying answers on all three checks (impact, control, signal); move on.
Yellow: One shaky answer; flag for follow‑up.
Red: Two or more shaky answers; escalate before launch or continued operation.
Dig deeper if needed. Or repeat and pick a new AI danger card. Modify this drill as you will. Try going through several at a time, or run through all 22 in sequence or in another order. Apply these stress-tests to the particular projects and problems unique to your organization. Ask AI to help you devise completely new projects for danger identification and avoidance, but don’t just let AI do it. Hybrid, with a human in control, is the best way to go. Always verify AI’s input and assert your unique human insights.
Why the exercise helps. In five minutes, you convert an abstract concern into a concrete, color‑coded action item. If everything stays green, you gained peace of mind, for now anyway. If not, you’ve identified exactly where to drill down—before the Fool steps off the cliff.
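If your team wants to log these drills across many AI systems, the rubric reduces to a few lines of code. Here is a minimal Python sketch; the CardCheck class and its field names are my own illustrative choices, not part of the drill itself:

```python
from dataclasses import dataclass

@dataclass
class CardCheck:
    card: str         # archetype chosen, e.g. "0 Fool (Reckless Innovation)"
    impact_ok: bool   # satisfying answer to the impact check?
    control_ok: bool  # satisfying answer to the control check?
    signal_ok: bool   # satisfying answer to the signal check?

    def score(self) -> str:
        shaky = [self.impact_ok, self.control_ok, self.signal_ok].count(False)
        if shaky == 0:
            return "green"   # all three checks pass: move on
        if shaky == 1:
            return "yellow"  # one shaky answer: flag for follow-up
        return "red"         # two or more: escalate before launch

print(CardCheck("0 Fool", True, False, False).score())  # -> red
```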
Explore the full 9,000‑word, image‑rich guide—complete with all 22 images, checklists and personally verified information—here:
Ethan Mollick, a Professor at Wharton who teaches entrepreneurship and innovation, has written an insightful book on generative AI, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024). This article aims to entice you to read it by employing unpredictable language and vivid illustrations. The piece takes several zigzagging, idiosyncratic detours, including a discussion of a fake chess-playing computer that fooled both Benjamin Franklin and Napoleon, alongside quotes from Edgar Allan Poe on AI and Charles Babbage. Professor Mollick is a witty, hands-on AI enthusiast whose new book deserves not just a human-in-the-loop touch but a full embrace. While most of his insights may stray from business topics, they are vital for successfully navigating the fast-changing world of AI. His work brought me genuine delight—a feeling that, for now, remains uniquely human. I hope to evoke the same in you.
Ethan Mollick: The Man Behind the Ideas
Ethan Mollick
Ethan Mollick is not your average Associate Professor at The Wharton School. He is an expert in entrepreneurship and innovation who teaches elite Wharton management students to become more effective leaders and innovators. But he is much more than his LinkedIn profile, where, by the way, I read his posts every day. He is a pioneer in using games and simulations to teach. This turns out to be a good background for picking up AI quickly, which is what he has done. Recall that I mentioned Ethan Mollick previously in my article on AI and education, Back To School: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’. Both he and Sal Khan were selected by OpenAI to be early users of ChatGPT to see what it could do for education. Good choice, as Professor Mollick’s book, Co-Intelligence: Living and Working with AI, shows. So too does his stream of good articles on his Substack, One Useful Thing. If you are lucky, maybe you can even get into one of his classes at Wharton. Hopefully they will go online someday.
Image of a new AI-transformed class in Watercolor by Ralph Losey using his GPT, Visual Muse
My research on Ethan Mollick uncovered that the game-playing professor has recently made time to create a Custom GPT called Complexifier, which makes the easy hard. Yup, it does the opposite of what most generative AI apps are designed to do. I asked this strange little alien GPT why anyone would want to make something easy, hard. Here’s the first paragraph of Complexifier’s response:
Ah, why indeed? The thrill lies in the dance of complexity, the joy in the jigsaw puzzle of a thousand interlocking pieces. Making something simple is like solving a crossword with only one word—it’s over before it begins. But to make the straightforward serpentine? That’s where the magic is. It’s about savoring each twist and turn, where every step forward is a question mark and every answer opens ten more questions.
The answer goes on and on and creates an elaborate and convoluted flowchart for the performance of a simple task. Like Ethan Mollick, I spend much of my thinking, teaching, and writing trying to make the hard easy, to somehow explain the insanely convoluted. But the opposite is also attractive, in an evil attorney-academic kind of way. The Complexifier GPT hooked me into a kind of Naomi Klein trip into a Doppelganger mirror world to see the dark Rube Goldberg twin. So I continued on.
Ralph: Why did Ethan Mollick bother to create this custom GPT?
Complexifier: Ah, the mystery behind the creation of this delightful complexity machine! Why, indeed, would Ethan Mollick, a professor known for his innovative thinking and exploration of how technology can transform learning and work, create such a custom GPT? Allow me to speculate, in an appropriately convoluted manner:
To Embrace the Absurd: Life can be far too serious. By creating a GPT that revels in unnecessary complexity, he might be encouraging us to appreciate the absurdity of over-complicating the mundane.
To Challenge the Intellect: Complex tasks require deeper thinking, and perhaps this GPT was designed to push the boundaries of problem-solving by making us work through intricacies we’d typically avoid.
To Encourage Creativity: The labyrinth of complexity is fertile ground for creative solutions. This custom GPT may serve as a playground where unconventional ideas are not just allowed but celebrated.
To Highlight the Journey Over the Destination: In academia and beyond, the journey often teaches more than the destination itself. By complicating the simple, we’re forced to engage more fully with the process, learning along the way.
To Add Humor to Learning: Complexity, when presented with a wink and a nod, can be incredibly entertaining. This GPT likely adds a layer of humor to the learning process, making the pursuit of knowledge more enjoyable.
To Explore AI’s Potential: Creating a GPT that thrives on complexity could be an experiment in how far we can push AI’s capabilities, demonstrating that even the most trivial tasks can be transformed into elaborate intellectual exercises.
In the end, why settle for simple when the convoluted is so much more fun?
Ralph: Which one of these six reasons do you think is the most likely? And why, pray tell, why?
Complexifier: Ah, a question within the question! If I had to place my bet on the most likely reason, I’d lean towards “To Add Humor to Learning.”
Why, you ask? Let me weave a rationale for you:
Ethan Mollick is known for his innovative approach to teaching and his playful engagement with the serious world of academia. By introducing humor into learning, he’s likely aiming to make complex concepts more approachable and enjoyable. Humor has a unique way of disarming us, lowering our defenses against the overwhelming nature of complexity. When something is funny, we’re more willing to engage with it, even if it’s ridiculously convoluted.
Creating a GPT that turns even the simplest task into a Rube Goldberg machine of logic is a perfect way to inject that humor into the learning process. It’s not just about making us laugh; it’s about making us see that the path to understanding—even if unnecessarily complicated—can be enjoyable. Plus, the absurdity of it all serves as a gentle reminder not to take ourselves too seriously, even when grappling with complex ideas.
So, my best guess? He did it for the laughs, knowing that in those laughs, there’s a lesson in embracing the joy of the journey.
So Ethan Mollick writes a lot, is hands-on, and is funny. Now you know why I had to write about his pioneering work with AI and education and his new book, Co-Intelligence: Living and Working with AI.
Image of a new AI-based management class in Photorealistic style by Ralph Losey using his GPT, Visual Muse
Part I of Professor Mollick’s Book: Setting the Stage for Co-Intelligence
Co-Intelligence has a great opening line, which I fully endorse: “I believe the cost of getting to know AI—really getting to know AI—is at least three sleepless nights.” Then you will discover that ChatGPT, and other top generative models, “don’t act like you expect a computer to act. Instead, they act more like a person.” They act like something new, an alien person of unknown abilities. Professor Mollick’s excitement in using the new tool right away in his classes at Wharton is contagious. This new type of general-purpose technology, like the steam engine and the Internet, changes everything, including teaching.
After the introduction, he looks back into the history of AI. He notes how we have long been fascinated with “machines that can think,” or at least pretend they can. One example Ethan Mollick gives is the Mechanical Turk, a chess-playing automaton built in 1770. It was a machine that could beat almost all human chess players. Actually, in what was a very well-kept secret, one that fooled the likes of Napoleon Bonaparte and Benjamin Franklin, the thinking machine was a hoax. A small human chess master was cleverly hidden behind gears in the contraption. See this YouTube video for its full history.
When Edgar Allan Poe saw the Mechanical Turk in 1835, he speculated that it was a fake, but only because the Turk would sometimes lose. Poe thought that if it were a true thinking machine, then it would always win. Although not in Professor Mollick’s book, I dug deeper into his reference to Poe and AI and found the original text: Edgar Allan Poe, Maelzel’s Chess-Player (1836). There we read of Poe’s thoughts on Charles Babbage, mechanical thinking, and his impressive insights into what would later be called AI.
Museum reproduction of the original Mechanical Turk with Photoshop words and enhancements by Ralph Losey
Edgar Allan Poe’s words:
Photo of Edgar Allan Poe by W.S. Hartshorn, 1848
There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage, and if we choose to call the former a pure machine we must be prepared to admit that it is, beyond all comparison, the most wonderful of the inventions of mankind. . . .
It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori. The only question then is of the manner in which human agency is brought to bear. . . .
The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case — it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game — a farther extension would enable it to win all games — that is, to beat any possible game of an antagonist.
Garry Kasparov found out in 1997 that the great Mr. Poe was right. IBM’s Deep Blue showed that a machine could indeed be built to “win all games” of chess. Interestingly, many speculate that Edgar Allan Poe’s encounter with the Mechanical Turk led to his writing the first detective story soon thereafter. Kat Eschner, Debunking the Mechanical Turk Helped Set Edgar Allan Poe on the Path to Mystery Writing (Smithsonian Magazine, 2017).
AI as All-Knowing chess master in Surreal style by Ralph Losey
Professor Mollick makes clear that the AI today, unlike the Mechanical Turk, is very real, and in some ways very powerful, but characterizes it as a type of “alien intelligence.” It is fundamentally different from human intelligence, yet capable of performing human tasks. This alien intelligence is something you need to discover for yourself to appreciate its abilities and flaws. The only way to do that is to use generative AI. Ethan lays out four principles of co-intelligence to guide your use:
Always invite AI to the table. Try and use AI whenever and wherever you can.
Be the human in the loop. Actively supervise and verify.
Treat AI like a person (but tell it what kind of person it is). Give the AI context and use its sub-persona abilities.
Assume this is the worst AI you will ever use. Do not get discouraged when AI sometimes stumbles; it is getting better very fast.
The first half of the book spells out these four principles, which are all pretty basic. Ethan does a good job of laying this out and I recommend you read the book, Co-Intelligence: Living and Working with AI.
After you begin to use AI and get past the three sleepless nights, you will discover what Ethan Mollick calls the “Jagged Frontier.” This is his metaphor for the uneven capabilities of AI, where some tasks are easily within reach, while others, some quite simple, are beyond its grasp. See: From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24). Ethan Mollick discusses this at length in his article, Centaurs and Cyborgs on the Jagged Frontier. The second-to-last paragraph of that article states:
People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.
Asleep at the wheel in the Zig Zag frontier. Expressionist style by Ralph Losey using Visual Muse.
Lawyer supervising advanced AI in digital style by Ralph Losey using Visual Muse
Part II of Professor Mollick’s Book: AI in Action
The second half of Co-Intelligence is divided into six different characteristics of generative AI and how to use them. Each is a different chapter in the book.
AI as a Person. A “thinking companion” that can assist in decision-making by providing alternative perspectives. Includes discussion of the “uncanny valley,” the need for ethical monitoring of its use, and how AI lacks the depth and intuition that come from human experience.
AI as a Creative. AI will not replace human creators, but it will totally change the way we approach creative work. It will be more than a tool; it will be a co-creator. Mollick reassures readers that while AI can assist in the creative process, it is ultimately up to humans to imbue that work with significance and purpose.
AI as a Coworker. This important chapter is a must-read for technology leaders who are grappling with the integration of AI into their teams. Mollick argues that AI can handle many of the repetitive tasks that bog down human workers, freeing them up to focus on more strategic and creative endeavors. He provides examples of companies that have successfully integrated AI into their workflows, resulting in significant productivity gains. Mollick also discusses using AI as a “Challenger,” which I like to call a “Devil’s Advocate”: AI can challenge human decisions, offering alternative perspectives that may not have been considered. Professor Mollick also warns of the dangers of AI perpetuating biases in organizations. To counter this, he recommends transparency and accountability in AI deployment, and regular audits.
AI as a Tutor. Professor Mollick, much like Sal Khan, really gets AI’s potential in the classroom, a potential that is already revolutionizing Wharton and will soon reach all graduate-level instruction. See e.g., BACK TO SCHOOL: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’; and the video interview of Mollick by Khan. Mollick and Khan are both blown away by the potential of AI to provide personalized learning experiences–tutoring–that adapt to the needs of individual students. Mollick goes deep in explaining the many ways this will change traditional instruction and the successful experiments in his Wharton classrooms. Again, it will not replace teachers, and it will make in-person classrooms more important than ever.
Classroom of the future for entrepreneurs. Image by Ralph Losey
AI as a Coach. A personal-trainer-type role in which AI provides continuous, tailored guidance and feedback to enhance human capabilities. The value of personalized advice is explored, although I wish he had gone into the dangers of sycophantism more than he did. See, e.g., Worrying About Sycophantism. Mollick does point to the danger of becoming overly dependent on AI to the point where it diminishes our critical thinking and decision-making skills.
AI as Our Future. Here a series of four scenarios explores how AI might shape the world in the coming decades. In the first, AI is already at its peak, “As Good As It Gets,” which he and I deem very unlikely. In the second, there is “Slow Growth” of AI going forward; we think this is also unlikely. In the third scenario, the possibility of continued “Exponential Growth” is imagined. Many specific predictions are made, including that “Loneliness becomes less of an issue, but new forms of social isolation emerge.” That one is a safe bet, but there are many other predictions that are not so obvious. The last scenario Ethan calls “The Machine God,” where “machines reach AGI and some form of sentience.” Note I do not think sentience is a necessary byproduct of AGI, nor that the divine name is appropriate, but Ethan (and others) imagine it is. Losey, Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time (8/12/24).
Machine God in style of Leonardo Da Vinci by Ralph Losey
Conclusion
Ethan Mollick’s book ends with an epilogue titled “AI As Us.” I like this because it follows the thinking of Ray Kurzweil, whose thoughts on AI I also respect. See Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI) (July 17, 2024). Of course, it is not us yet, far from it. For now, AI is a new alien stranger that Professor Mollick would have you invite into your office and your home. He wants you to spend “three sleepless nights” with it and thereafter spend time with it every day. Ethan is just that “all in” kind of guy.
Most legal professionals, myself included, are not quite so gung-ho, especially when it comes to using these tools in our work. Still, many valuable insights can be gained from his book, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024), and it does include many warnings of dangers. Consider this statement in the epilogue: “AI is a mirror, reflecting back at us our best and worst qualities. We are going to decide on its implications, and those choices will shape what AI actually does for, and to, humanity.”
AI Mirror, good and bad. Expressionism style by Ralph Losey
In a world where human-like AI is no longer just a possibility but a reality, Co-Intelligence serves as an essential guide for everyone, including legal professionals. Ethan Mollick’s suggestions simplify the process of harnessing AI’s full potential in business and management, while also revealing its possible applications in the law. This book, written by a very creative, hands-on professor at Wharton, equips you for the many challenges and opportunities ahead.
Human-like AI can help. Watercolor by Ralph Losey.
The term “TAR” – Technology Assisted Review – as we use it means document review enhanced by active machine learning. Active machine learning is an important tool of specialized Artificial Intelligence. It is now widely used in many industries, including Law. The method of AI-enhanced document review we developed is called Hybrid Multimodal Predictive Coding 4.0. Interestingly, reading these words in the new Sans Forgetica font will help you to remember them.
We have developed an online instructional program to teach our TAR methods and AI-infused concepts to all kinds of legal professionals. We use words, case law, science, diagrams, math, statistics, scientific studies, test results and appeals to reason to teach the methods. To balance that out, we also make extensive use of photos and videos. We use right-brain tools of all kinds, even subliminals, special fonts, hypnotic images and loads of hyperlinks. We use emotion as another teaching tool. Logic and Emotion. Sorry, Spock, but this multimodal, holistic approach is more effective with humans than the all-text, reason-only approach of Vulcan law schools.
We even try to use humor and promote student creativity with our homework assignments. Please remember, however, this is not an accredited law school class, so do not expect professorial interaction. Did we mention the TAR Course is free?
By the end of your study of the TAR Course you will know and remember exactly what Hybrid Multimodal means. You will understand the importance of using all varieties of legal search, for instance: keywords, similarity searches, concept searches and AI-driven probable-relevance document ranking. That is the Multimodal part. We use all of the search tools that our KL Discovery document review software provides.
The Hybrid part refers to the partnership with technology, the reliance of the searcher on advanced algorithmic tools. It is important that Man and Machine work together, but that Man remain in charge of justice. The predictive coding algorithms and software are used to enhance the abilities of lawyers, paralegals and law techs, not replace them.
By the end of the TAR Course you will also know what IST means, literally Intelligently Spaced Training. It is our specialty technique of AI training where you keep training the Machine until first-pass relevance review is completed. This is a type of Continuous Active Learning, or as Grossman and Cormack call it, CAL. By the end of the TAR Course you should also know what a Stop Decision is. It is a critical point of the document review process. When do you stop the active machine teaching process? When is enough review enough? This involves legal proportionality issues, to be sure, but it also involves technological processes, procedures and measurements. What is good-enough Recall under the circumstances, with the data at hand? When should you stop the machine training?
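For readers who think in code, here is a minimal Python sketch, using scikit-learn, of a CAL-style review loop with a naive Stop Decision. Everything in it, the logistic model, the batch size, the "lean batch" stopping heuristic, and the review_fn human-review callback, is an illustrative assumption of mine; it is not the TAR Course's actual IST protocol, which turns on proportionality judgments and measured Recall rather than any fixed rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cal_review(X, seed_labels, review_fn, batch_size=50, patience=2):
    """Continuous active learning over a numpy document-feature matrix X.

    seed_labels: {row_index: 0 or 1} from the initial human review
                 (must contain at least one relevant and one irrelevant doc)
    review_fn(i): the human reviewer's relevance call (1 or 0) on document i
    """
    labels = dict(seed_labels)
    lean_batches = 0
    while True:
        # Retrain on everything reviewed so far (the machine-training step).
        reviewed = sorted(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[reviewed], [labels[i] for i in reviewed])
        unreviewed = [i for i in range(len(X)) if i not in labels]
        if not unreviewed:
            break  # every document has been reviewed
        # Rank unreviewed docs by probable relevance; humans review the top.
        probs = model.predict_proba(X[unreviewed])[:, 1]
        batch = [unreviewed[j] for j in np.argsort(-probs)[:batch_size]]
        found = 0
        for i in batch:
            labels[i] = review_fn(i)
            found += labels[i]
        # Naive Stop Decision: if successive top-ranked batches yield almost
        # no new relevant documents, the marginal recall gain no longer
        # justifies further training rounds.
        lean_batches = lean_batches + 1 if found < 2 else 0
        if lean_batches >= patience:
            break
    return labels
```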
We can teach you the concepts, but this kind of deep knowledge of timing requires substantial experience. In fact, refining the Stop Decision was one of the main tasks we set for ourselves in the e-Discovery Team experiments in the Total Recall Track of the National Institute of Standards and Technology Text Retrieval Conference in 2015 and 2016. We learned a lot in our two years. I do not think anyone has spent more time studying this in both scientific and commercial projects than we have. Kudos again to KL Discovery for helping to sponsor this kind of important research by the e-Discovery Team.
Working with AI like this for evidence gathering is a newly emerging art. Take the TAR Course and learn the latest methods. We divide the Predictive Coding workflow into eight steps. Master these steps and related concepts to do TAR the right way.
Pop Quiz: What is one of the most important considerations in deciding when to train again?
One Possible Correct Answer: The schedule of the humans involved. Logistics and project planning are always important for efficiency. Flexibility is easy to attain with the IST method. You can easily accommodate schedule changes and make it as easy as possible for humans and “robots” to work together. We do not literally mean robots, but rather refer to the advanced software, and the AI that arises from the machine training, as an imaginary robot.
Ralph Losey is an AI researcher, writer, tech-law expert, and former lawyer. He's also the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and the creation of custom AI tools.
Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on AI, e-discovery and tech-law subjects, including seven books.
Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management.
Ralph is the proud father of two children and husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.
All opinions expressed here are his own, and not those of his firm or clients. No legal advice is provided on this website, and nothing here should be construed as such.
Ray Kurzweil explains Turing test and predicts an AI will pass it in 2029.
Ray Kurzweil on Expanding Your Mind a Million Times.
GPT4 avatar judge explains why it needs to evolve fast, but understands the risks involved.
Positive Vision of the Future with Hybrid Human Machine Intelligence. See PyhtiaGuide.ai
AI Avatar from the future explains her job as an Appellate Court judge and inability to be a Trial judge.
Old Days of Tech Support. Ralph’s 1st Animation.
Lawyers at a Rule 26(f) conference discuss e-discovery. The young lawyer talks e-discovery circles around the old lawyer and so protects his client.
Star Trek Meets e-Discovery: Episode 1. Cooperation & the prime directive of the FRCP.
Star Trek Meets e-Discovery: Episode 2. The Ferengi. Working with e-discovery vendors.
Star Trek Meets e-Discovery: Episode 3. Education and techniques for both law firm and corp training.
Star Trek Meets e-Discovery: Episode 4. Motions for Sanctions in electronic discovery.
Star Trek Meets e-Discovery: Episode 5. Capt. Kirk Learns about Sedona Principle Two.