A Response to Matt Shumer’s Viral Essay on AI Acceleration

I. Something big is happening. On that much Matt Shumer and I agree.
The essay Something Big Is Happening was published on Matt Shumer’s personal blog on February 9, 2026. After he shared it widely on X, it drew more than 80 million views within days, rapidly becoming a focal point in public debates about AI and the future of work. Few essays about artificial intelligence have traveled that far, that fast.
Shumer’s central claim is straightforward: AI capability is accelerating so rapidly that large-scale displacement of white-collar work is imminent, perhaps within one to five years. He argues that recursive improvement loops are already underway, that benchmark curves are steepening, and that most people are underestimating what is about to happen.
It is a powerful narrative. It is also incomplete, and that matters more than its popularity suggests. So take a breath.
Before I explain why, a brief word of context. I have practiced law for over 45 years and have worked hands-on with AI in litigation for more than 14. I was involved in the first case approving predictive coding for e-discovery in federal court. Since 2023, I have written extensively about generative AI, hybrid human-machine workflows, and the emerging governance challenges of AI and quantum convergence. I am not skeptical of AI — I use it daily, teach it, and advocate its responsible adoption.
Acceleration is real. But acceleration demands adults – a calm, measured approach. That is why I take Shumer seriously, even as I disagree with his conclusions.
II. What Shumer Gets Right (and What He Exaggerates)
Let us begin where we agree. AI models have improved rapidly. Coding autonomy has advanced in ways that would have seemed implausible just a few years ago. AI systems now assist meaningfully in debugging, evaluation, and even aspects of their own development pipelines. Benchmarks measuring the duration of tasks that models can complete without human intervention have indeed increased.
There is rapid acceleration, but it is not a smooth, universal climb. It is jagged.
A. The Bar Exam Myth: Top 10% or Bottom 15%?
Shumer states: “By 2023, [AI] could pass the bar exam.” This has become a foundational myth in the AI-acceleration narrative. However, a rigorous study by Eric Martinez tested the vendor’s claim. See Re-evaluating GPT-4’s bar exam performance, Artif. Intell. Law (2024) (presenting four sets of findings indicating that OpenAI’s estimates of GPT-4’s Uniform Bar Exam percentile are overinflated). Martinez found that when you limit the sample to those who actually passed the bar (qualified attorneys), the model’s percentile drops off a cliff. On the essay and performance test portions (MEE + MPT), GPT-4 scored in roughly the 15th percentile. In other words, bottom 15% among those who passed.
B. AI Hallucinations Are Not Ancient History
Shumer claims that the “this makes stuff up” phase of AI is “ancient history” and that current models are unrecognizable from six months ago. My daily use and objective tests tell a different story. Yes, it is getting better, but we are not there yet, especially for most legal users.
Hallucination remains the number one concern for the Bench and Bar. Cross-Examine Your AI: The Lawyer’s Cure for Hallucinations (December 2025). Generative AI still has a persistent tendency to fabricate facts and law, leading to serious court sanctions. See, e.g., Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024). Also see French legal scholar Damien Charlotin’s catalogue of almost one thousand similar decisions worldwide in his AI Hallucination Cases.
Shumer’s claims that modern AIs no longer hallucinate and outperform most attorneys reflect optimism more than sustained exposure to legal work. After researching tens of thousands of legal issues over the course of my career, I can tell you that verification is not optional — it is the job.
C. The “Jagged Frontier” of AI Progress
Shumer envisions a wall of fast, inevitable advance. Research and the personal experience of many experts suggest otherwise. The progress is jagged and uneven. See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see the research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my humble papers, From Centaurs To Cyborgs and Navigating the AI Frontier.
The New Stanford–Carnegie Study (November 2025) confirmed what Harvard researchers call the “Jagged Technological Frontier”. This research found that AI excels at specific programmable tasks but fails at messy, human-centric reality. In fact, the Stanford-Carnegie study showed that fully autonomous AI agents were significantly less reliable than hybrid human-AI teams, which outperformed solo agents by 68.7%.
D. “Team of Human Associates” or “Untested Sycophantic AI Experts”?
Shumer recounts a managing partner in a law firm who says AI feels like “having a team of associates available instantly.” I agree that every professional should be integrating AI into their daily workflow. But they must do so skeptically. And it is nowhere near the same as having trained human associates. AIs are cheaper, sure, until they screw up and you are the one left cleaning up the mess.
In my 45 years of legal practice I have had the privilege of working with many excellent associates. They significantly exceed today’s AIs in many respects, so I must respectfully disagree with Shumer’s quote of an anonymous partner. There are many things that AI will never be able to do that all good professionals now do without thinking. The Human Edge: How AI Can Assist But Never Replace. I prefer humans with AIs – the hybrid approach – over AIs alone, even though, unlike humans, AI associates are always pleasant and tend to agree with everything you say. Lessons for Legal Profession from the Latest Viral Meme: ‘Ask an AI What It Would Do If It Became Human For a Day?’ (Jan. 2026).
My testing of AI since 2023 has focused on the legal reasoning ability of AI, as opposed to general reasoning. For a full explanation of the difference, see Breaking New Ground: Evaluating the Top AI Reasoning Models of 2025. I have also spent hundreds of hours in hands-on independent testing of AI legal reasoning abilities. See, e.g., Bar Battle of the Bots, parts one, two, three and four. These articles reported multiple tests of OpenAI and Google models in 2025, including tests using actual Bar exam questions, which they again failed. I have not seen substantial improvements in AI since then.
It is, in my experience, a poor trade to use an AI alone instead of an associate-AI team, and without extensive supervision, an invitation to sanctions and malpractice.
III. Benchmark Curves Are Not Civilization
Shumer relies heavily on task-duration benchmarks and exponential trend lines. The implication is clear: if models can complete longer and longer tasks autonomously, then large-scale displacement is imminent.
The problem is that benchmark extrapolation is not societal destiny. In law, evidence does not decide the case. People do.
Most current autonomy benchmarks are domain-constrained. They focus heavily on software engineering and other structured digital tasks. Coding is not law. It is not medicine. It is not fiduciary duty. It is not governance.
Even when capability expands inside a benchmark, that does not mean institutions will move at the same speed. Courts, regulators, insurers, boards of directors, and compliance departments slow, shape, and channel technology. That is not inertia; it is risk management.
And assistance in model development is not the same as autonomous recursive self-governance. Humans remain deeply embedded in training, validation, and deployment. “AI helping build AI” makes for a compelling headline. It does not mean an intelligence explosion has detached from human control. AI extends cognition, but it does not replace stewardship.
That is the part Shumer’s curve does not capture: acceleration of capability is real but this increases the need for adult supervision. It does not eliminate the human role. It intensifies it. Just as it always has.
IV. Why Fear Travels Faster Than Wisdom
The viral success of Shumer’s essay is not accidental. It was designed to activate powerful psychological mechanisms.
It invokes the COVID analogy, reminding readers how quickly life changed in early 2020. It frames the reader personally: “you’re next.” It emphasizes exponential growth, which humans are notoriously poor at intuitively processing. It adopts insider authority: “I live in this world; I see what you don’t.”
Fear spreads faster than nuance because we evolved that way. A possible threat demands immediate attention. Social media algorithms amplify high-emotion content. Urgency increases engagement velocity. None of this necessarily makes Shumer insincere, but it does explain why his article went viral. Acceleration narratives travel at computer speed. Wisdom still travels at human speed.
V. Incentives Shape Narratives
It is also important to understand context. Shumer is a very young builder who lives in the code. His perspective is shaped by the possibility of the technology. My perspective, and the perspective of governance, is shaped by the consequences of the technology. Startup culture rewards speed; legal culture rewards survivability. These are different risk environments.
Recognizing that difference is not an attack. It is transparency. His incentives don’t invalidate his argument, but they do shape his narrative.
VI. A Structural Irony
Here is another irony worth reflecting on. We are now in an era where AI systems assist in drafting almost all persuasive content. Many viral essays, legal briefs, and opinion pieces share a similar highly optimized narrative arc—a cadence and structure that Large Language Models excel at producing.
If an AI is optimizing for “popularity” – to become the next great flash meme – it will naturally drift toward alarmism, because alarmism travels faster than nuance. It is entirely plausible that AI systems are increasingly shaping the very rhetoric used to warn us about AI. That is not necessarily a deception, but it is a reminder: persuasion optimization is not the same as civilizational wisdom.
VII. The Category Mistake: Doing the Task Is Not Being the Lawyer
Here is the deeper mistake in many inevitability arguments. They confuse task performance with personhood.
Yes, AI completes tasks. Sometimes very well. It predicts the next word, the next clause, the next block of code. At scale and at speed. But practicing law is not just completing text.
Human reasoning is not happening in a vacuum. It happens inside a body that can lose a license. Inside a reputation built over decades. Inside an ethical framework enforced by courts and bar regulators. Inside institutions that impose consequences.
AI does not stand in a courtroom or sign pleadings. AI does not carry malpractice insurance.
Law makes this distinction painfully clear. AI can draft a brief in seconds. I use it for that to start a review and verify process. But drafting is not signing. When a lawyer signs a motion, that signature attaches a human name, a bar number, a reputation, and a career to every word on the page.
If the brief is reckless, the AI does not get sanctioned. If the citation is fabricated, the AI does not face discipline. If the argument crosses an ethical line, the AI does not stand before a grievance committee. A probabilistic system cannot be disbarred.
Automation can transform tasks. It cannot assume moral agency. That distinction matters. And it will continue to matter, no matter how fast the models improve.

VIII. Quantum Convergence Raises the Stakes
The need for adult supervision of accelerating technology becomes even more critical as we look at what is coming next. We are entering a new period where AI intersects with quantum computing. If AI is a race car, quantum is the nitrous oxide. You do not put a novice driver behind that wheel.
Quantum-scale compute raises national security questions, cryptographic vulnerabilities, and governance complexity. More powerful systems require more sophisticated oversight frameworks. Power without governance is destabilizing; power with governance is transformative. The question is not whether capability grows—it is whether wisdom keeps pace.
The greatest short-term danger is not AI superintelligence overthrowing society, whether enhanced by quantum or not. It is over-delegation. It is professionals putting systems on autopilot. It is institutions adopting tools without supervision, audit trails, and verification. The solution is not panic. It is disciplined integration. Trust but verify.
IX. What Responsible Adoption Looks Like
Use AI seriously. Experiment daily. Adopt paid tools where appropriate. Automate repetitive tasks. I agree with Shumer on this.
But at the same time: Maintain human review. Preserve accountability. Document workflows. Understand limits. Teach younger professionals hybrid reasoning working with AI, not dependency.
The future belongs to those who combine human judgment with machine capability. Not to those who surrender to inevitability narratives.
We have made this error before. We mistake acceleration for autonomy. We mistake tools for replacements. And each time, we rediscover that human responsibility does not disappear when machines improve. It intensifies.
X. Something Big Is Happening
Shumer is right that “something big is happening.” AI capability is advancing. Workflows are changing. New economic pressures are emerging. But history teaches us that technological acceleration does not eliminate the need for human beings. It heightens it.
This is where law and governance have to re-enter the conversation. Society should not allow its economic and moral direction to be set by the most amplified voices in tech, especially when those voices operate within incentive structures that reward urgency. We need engineers, not promoters. We need experience, not exuberance. We need wisdom, not just information.
Above all, we need adults in the room. Acceleration does not remove the human role. It demands judgment, accountability, and institutional memory.

Something big is happening. What happens next depends on whether we meet it with fear or with calm skepticism.
Ralph Losey Copyright 2026 — All Rights Reserved