Adam Haman, Why A.I. Will Never Rule The World | Hn 181, interviewing Jobst Landgrebe and Barry Smith, re their book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear 2d ed. (Routledge 2025). Fascinating conversation.
I enjoyed this bit (at 33:10):
Haman: I have one simple question because it relates, and it has to do with certainty. Would both of you say you’re 100% certain about the claims in your book about the limits of artificial intelligence? Not 99%, not “to the best of my knowledge,” but 100%?
Smith: So I’ve found a new slogan which I’m trying to practice using for questions like this. So it doesn’t matter whether we believe in this 100%. What matters is that you only ever achieve greatness through bravery.
Haman: [laughs]
Smith: And so we it would be rather boring to say, well, “maybe we’re wrong, but it seems like a good thing to explore.” But what we say is that it’s impossible.
And now I’d like to come back to you and make a challenge. So you’re the first person I’ve ever met, I think, who knows something about poker. Now, we know that the AI world is really good at finding ways of beating human beings for games like chess and go, and many video games also. But they’re all closed systems. So, that you can use mathematics to find winning strategies and you can train machines to deploy those strategies. And they’ve done it for poker, but only for poker without visual interaction. Is that correct?
Haman: As far as I know, they haven’t even bothered trying to get the computer to read their opponent. They’re just using …
Smith: Yep, right. So, then, this would be a very good example of a phenomenon which a brave person might say is impossible to replicate on a machine. So, I then say it because I’m brave. It will be impossible to create a machine that can beat people at poker where their visual element is preserved.
See also:
Current AI Models have 3 Unfixable Problems Sabine Hossenfelder
Grok: Sabine Hossenfelder argues that current generative AI models (large language models, diffusion-based image/video generators, etc.) have three fundamental, essentially unfixable limitations that prevent them from ever reaching artificial general intelligence (AGI).
- They are purpose-bound by design – These models are trained to detect statistical patterns in specific data types (text tokens, image patches, video frame relations). They lack the ability to perform abstract, general-purpose reasoning that can be applied to any domain or task. Scaling them up will not magically create true abstraction.
- Hallucinations are manageable but inevitable – LLMs don’t retrieve facts; they generate the most statistically plausible continuation. When the training data contains little or no information on a topic, they confidently produce nonsense. Hossenfelder notes that recent OpenAI research suggests rewarding models for saying “I don’t know” when confidence is low could largely mitigate the practical problem, even if it never eliminates hallucinations entirely. She considers this limitation acceptable rather than fatal.
- Prompt injection is fundamentally unsolvable – Because LLMs treat all input (system instructions and user prompt) the same way—token sequences—they cannot reliably distinguish instructions from data. Adversarial prompts can always override intended behavior (“ignore previous instructions…”). Workarounds (formatting rules, external filters, better system prompts) are fragile and will never make the models fully trustworthy for critical tasks.
Hossenfelder also emphasizes a related deep flaw: current models interpolate well within their training distribution but cannot reliably extrapolate or handle truly out-of-distribution scenarios (evident in bizarre failures of video generators and the inability of LLMs to produce genuinely novel ideas). In her view, these architectural limitations mean today’s deep-learning paradigm is a dead end for AGI. Companies like OpenAI and Anthropic that have bet everything on scaling LLMs will face serious trouble when the expected massive revenues fail to materialize. True general intelligence will require entirely new approaches—most likely some form of abstract world models, neurosymbolic reasoning, or a “logic language” that can represent concepts independently of specific modalities. Until then, she quips, the fastest path to human-level machine intelligence may be humans simply getting dumber.












