Current AI Models Have 3 Unfixable Problems (Sabine Hossenfelder)
Grok: Sabine Hossenfelder argues that current generative AI models (large language models, diffusion-based image/video generators, etc.) have three fundamental, essentially unfixable limitations that prevent them from ever reaching artificial general intelligence (AGI).
They are purpose-bound by design – These models are trained to detect statistical patterns in specific data types (text tokens, image patches, video frame relations). They lack the ability to perform abstract, general-purpose reasoning that can be applied to any domain or task. Scaling them up will not magically create true abstraction.
Hallucinations are manageable but inevitable – LLMs don’t retrieve facts; they generate the most statistically plausible continuation. When the training data contains little or no information on a topic, they confidently produce nonsense. Hossenfelder notes that recent OpenAI research suggests rewarding models for saying “I don’t know” when confidence is low could largely mitigate the practical problem, even if it never eliminates hallucinations entirely. She considers this limitation acceptable rather than fatal.
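The abstention idea can be sketched in a few lines. This is my own toy illustration, not OpenAI's actual training method: treat the model's output as a set of candidate continuations with probabilities, and answer "I don't know" whenever no candidate clears a confidence threshold.

```python
# Toy sketch (illustration only, not OpenAI's method): abstain when the
# top continuation's model-assigned probability is below a threshold.

def answer(candidates: dict[str, float], threshold: float = 0.6) -> str:
    """candidates maps possible answers to model-assigned probabilities."""
    best, prob = max(candidates.items(), key=lambda kv: kv[1])
    return best if prob >= threshold else "I don't know"

# Well-covered topic: one continuation dominates, so the model answers.
print(answer({"Paris": 0.95, "Lyon": 0.03}))                # Paris
# Sparse topic: probability mass is spread thin, so the model abstains.
print(answer({"Smith": 0.30, "Jones": 0.28, "Lee": 0.25}))  # I don't know
```

The point of the sketch: rewarding the abstaining branch during training shifts behavior away from confidently generated nonsense, but it cannot remove the underlying fact that the model is only ranking plausible continuations.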
Prompt injection is fundamentally unsolvable – Because LLMs treat all input (system instructions and user prompt) the same way—token sequences—they cannot reliably distinguish instructions from data. Adversarial prompts can always override intended behavior (“ignore previous instructions…”). Workarounds (formatting rules, external filters, better system prompts) are fragile and will never make the models fully trustworthy for critical tasks.
Hossenfelder also emphasizes a related deep flaw: current models interpolate well within their training distribution but cannot reliably extrapolate or handle truly out-of-distribution scenarios (evident in bizarre failures of video generators and the inability of LLMs to produce genuinely novel ideas). In her view, these architectural limitations mean today’s deep-learning paradigm is a dead end for AGI. Companies like OpenAI and Anthropic that have bet everything on scaling LLMs will face serious trouble when the expected massive revenues fail to materialize. True general intelligence will require entirely new approaches—most likely some form of abstract world models, neurosymbolic reasoning, or a “logic language” that can represent concepts independently of specific modalities. Until then, she quips, the fastest path to human-level machine intelligence may be humans simply getting dumber.
I am aware of three such collections: autobiographies of people notable enough for inclusion in such a volume but not major enough to warrant their own biographies (as, e.g., Rothbard and Mises do).
Walter Block, ed., I Chose Liberty: Autobiographies of Contemporary Libertarians (Mises Institute, 2010). The only one online (my entry is in this one, “How I Became A Libertarian”). 1 Thanks Jeffrey Tucker and the Mises Institute’s once-great open publishing policy. 2 (After Tucker and Doug French left Mises in 2011 or so, the Mises Institute has unfortunately switched from CC-BY to the useless and restrictive CC BY-NC-ND, which is worse than nothing.) See what open publishing does? It preserves knowledge.
As I note here, the latest PFS book has just been published by Sebastian Wang and his Hampden Press, co-published with the Property and Freedom Society.
Among libertarians I am known most for my intellectual property (IP) and general libertarian theorizing, as in my books Legal Foundations of a Free Society (Papinian Press, 2023), Against Intellectual Property (Mises Institute, 2008) and other publications. In my libertarian writing and theorizing I have tried to blend my practical and theoretical legal knowledge (of IP law, oil & gas law, international law, Roman/Louisiana/civil law, and common law) with libertarian and Austrian economics scholarship and insights.
I viewed this “libertarian legal” writing as my hobby or avocation, although I devoted a lot of time to this research and writing, and in retirement it is what occupies much of my time and attention. In my vocation, 1 I also practiced law for over thirty years, initially in the fields of oil and gas and international law, then specializing in IP and patent law, and general commercial law as general counsel of a high-tech company. [continue reading…]
Objectivists and other statists like to retreat to emotivism and irrelevant issues like manners when debating with libertarian anarchists. The low-IQ Jan Helfeld tried this tack when I debated him years ago. He was upset that I would not follow his ridiculous debate rules, to which I retorted that he favors taxing me, so he is worse. It led to some pretty funny exchanges. See below. [continue reading…]
It may be true that lovers of liberty, originally steeped in society’s preferred form of social democracy, must travel along the spectrum of the state via small (“minimal”) government before reaching the conclusion that the state must go. But logically, this is not the case. To cure cancer, it is not necessary to reduce the size of a tumor bit by bit. The cure is to remove it. Similarly, if a rock upsets the flow of a stream, the solution is not to change the size or shape of the rock, to make it more streamlined, but to simply remove it. [continue reading…]
I just watched (because I’m a masochist) a video over 3 hours in length by a fellow with the handle “LiquidZulu” (LZ). He used that time to blast Dave Smith for being unsound on libertarian theory and “afraid to debate him” or something. [continue reading…]
Walter E. Block, “Rejoinder to Kinsella on ownership and the voluntary slave contract,” Management Education Science Technology Journal (MESTE) 11, no. 1 (Jan. 2023): 1-8 [pdf]
I stumbled across some pages I had scanned from my notebook for my final semester or so of my first degree, my BSEE at LSU, Fall 1986 and Spring 1987 semesters. My courses included:
Real Time Computer Systems EE 4770 (Dr. Klinkachorn, Docka Klink)
Digital Integrated Circuits EE 4250 (Burke Huner)
Introductory Sociology SOCL 2001
History of Contemporary America HIST 4065 (Culbert) (with my friend Ben Favrot, or “Fartov”.)
I liked to doodle a lot and was at the time fascinated with Douglas Hofstadter’s “ambigrams”: words written so that they read the same in mirror image or upside down. (Metamagical Themas; Ambigram (Wikipedia); My Life in Ambigrammia; Ambigrammia.) Nicknames and pet names like Faggot Lip, Smoochball, and so on. Many of my EE buddies were in these classes–Ben Favrot (“Fish”), Chris LeBlanc (“Duck Butter”), Damon Smith, Sal Bernadas, Jimmy1, Jimmy2, “Booger” Wayne LeBlanc, “Pretty” Wayne Speeg, Fat Wayne, and so on. Culbert is the one who had me read Charles Murray’s Losing Ground, Oswald’s Game (which persuaded me Oswald acted alone), and others.