Review of Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
January 8, 2026
Introduction
Over the holidays, I had an opportunity to finish reading Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell. I very much recommend it. Although it was published in 2019 and doesn't cover today's LLMs, it presents an excellent history of the artificial intelligence field and a clear-eyed view of what artificial intelligence can do and may never do.
Full Review
When I read Artificial Intelligence: A Guide for Thinking Humans, what stood out wasn’t any single technical insight—it was the discipline of the argument. Mitchell is doing something that’s surprisingly rare in AI writing: she refuses to confuse impressive outputs with actual understanding.
The book’s strength is its historical grounding. By walking through the symbolic AI era and the rise of neural networks, Mitchell shows that today’s debates aren’t new—they’re reruns. Symbolic AI assumed intelligence would emerge from rules and representations. Neural nets assume intelligence will emerge from scale and statistics. Both camps delivered real progress, and both ran into hard walls. That context matters, because it explains why modern systems feel powerful yet fragile at the same time.
The key takeaway—and the one that still holds up best—is her insistence that AIs are objects, not subjects. They don’t experience the world. They don’t have stakes. They don’t build models because they need them to survive or act. They optimize functions over data. That distinction cuts through a lot of current hype.
Mitchell is careful to separate performance from understanding. A system can translate text, recognize images, or generate fluent prose without having any grounded sense of what those things mean. If you’ve actually used these tools in production, this rings true immediately: they’re incredibly good at pattern completion and surprisingly bad at knowing when they don’t know. They won’t push back on bad assumptions. They won’t notice missing context unless you force it in.
What I appreciate most is that Mitchell avoids both extremes. She’s not dismissive—modern AI is genuinely useful. But she’s also not impressed by surface fluency. The book pushes back hard on the idea that scaling alone gets you to general intelligence, common sense, or reasoning about the real world. Text about the world is not the same thing as interacting with it.
That framing ages well. Even now, with far more capable language models, the same limitations show up: brittleness, shallow abstraction, and a lack of causal understanding. These systems don’t understand—they approximate. That doesn’t make them worthless; it makes them tools that need careful boundaries, human judgment, and verification.
If there’s one lesson worth keeping, it’s this: don’t anthropomorphize your tools. Treat AI like engineered machinery with uneven strengths, not like an emerging mind. Mitchell’s book doesn’t tell you what to believe about the future of AI—it gives you a way to think clearly about it. And in a field full of hype cycles, that’s unusually valuable.