≡ Menu

Barry Smith on Artificial Intelligence

Adam Haman, Why A.I. Will Never Rule The World | Hn 181, interviewing Jobst Landgrebe and Barry Smith, re their book Why Machines Will Never Rule the World: Artificial Intelligence Without Fear 2d ed. (Routledge 2025). Fascinating conversation.

See also:

Current AI Models have 3 Unfixable Problems Sabine Hossenfelder

Grok: Sabine Hossenfelder argues that current generative AI models (large language models, diffusion-based image/video generators, etc.) have three fundamental, essentially unfixable limitations that prevent them from ever reaching artificial general intelligence (AGI).

  1. They are purpose-bound by design – These models are trained to detect statistical patterns in specific data types (text tokens, image patches, video frame relations). They lack the ability to perform abstract, general-purpose reasoning that can be applied to any domain or task. Scaling them up will not magically create true abstraction.
  2. Hallucinations are manageable but inevitable – LLMs don’t retrieve facts; they generate the most statistically plausible continuation. When the training data contains little or no information on a topic, they confidently produce nonsense. Hossenfelder notes that recent OpenAI research suggests rewarding models for saying “I don’t know” when confidence is low could largely mitigate the practical problem, even if it never eliminates hallucinations entirely. She considers this limitation acceptable rather than fatal.
  3. Prompt injection is fundamentally unsolvable – Because LLMs treat all input (system instructions and user prompt) the same way—token sequences—they cannot reliably distinguish instructions from data. Adversarial prompts can always override intended behavior (“ignore previous instructions…”). Workarounds (formatting rules, external filters, better system prompts) are fragile and will never make the models fully trustworthy for critical tasks.

Hossenfelder also emphasizes a related deep flaw: current models interpolate well within their training distribution but cannot reliably extrapolate or handle truly out-of-distribution scenarios (evident in bizarre failures of video generators and the inability of LLMs to produce genuinely novel ideas). In her view, these architectural limitations mean today’s deep-learning paradigm is a dead end for AGI. Companies like OpenAI and Anthropic that have bet everything on scaling LLMs will face serious trouble when the expected massive revenues fail to materialize. True general intelligence will require entirely new approaches—most likely some form of abstract world models, neurosymbolic reasoning, or a “logic language” that can represent concepts independently of specific modalities. Until then, she quips, the fastest path to human-level machine intelligence may be humans simply getting dumber.

Share
{ 0 comments… add one }

Leave a Reply

© 2012-2025 StephanKinsella.com CC0 To the extent possible under law, Stephan Kinsella has waived all copyright and related or neighboring rights to material on this Site, unless indicated otherwise. In the event the CC0 license is unenforceable a  Creative Commons License Creative Commons Attribution 3.0 License is hereby granted.

-- Copyright notice by Blog Copyright