
Language Models Misunderstood: Science Says Thought Isn't Built on Words

Artificial Intelligence · Large Language Models · Cognitive Science · Human Intelligence · Neuroscience · AI Hype
November 25, 2025
Viqus Verdict: 8
Reality Check
Media Hype: 6/10
Real Impact: 8/10

Article Summary

Benjamin Riley’s article offers a critical analysis of the hype surrounding large language models (LLMs) and their supposed path to artificial general intelligence (AGI). The core argument is that LLMs such as ChatGPT and Gemini are fundamentally models of language, designed primarily to predict the next word in a sequence from massive datasets of text. Recent scientific research by a team including Evelina Fedorenko, Steven Piantadosi, and Edward Gibson, however, suggests that human thinking does not rely on language.

The article highlights evidence from neuroscience, specifically fMRI studies showing distinct patterns of brain activity during cognitive tasks that do not engage linguistic processing. It also cites cases of individuals with severe language impairments who retain intact cognitive abilities: they can solve complex problems, understand other people, and reason. From this, the piece draws a crucial distinction between language as a *tool* for communication and language as the *foundation* of thought.

Riley argues that the unquestioning belief in LLMs’ intelligence rests on a flawed assumption: that sophisticated language modeling equates to genuine cognitive understanding. He stresses the importance of separating communication from cognition, and contends that the industry’s reliance on scaling data and computing power (specifically Nvidia chips) as the route to AGI is scientifically misguided. The piece closes with a pointed observation: take away language from a human, and thought remains; take away language from an LLM, and it simply ceases to exist.

Key Points

  • Language models are primarily tools for communication, not inherently intelligent systems.
  • Human thinking is fundamentally independent of language, as demonstrated by neuroscience research and individuals with severe language impairments.
  • The industry’s belief in scaling data and computing power as a solution to AGI is scientifically flawed.

Why It Matters

This analysis matters for professionals in AI and tech, and for the broader public. It challenges a widespread and arguably premature narrative about AI development. By exposing the limitations of current LLMs and pointing to established scientific evidence that language is not the foundation of human intelligence, the article pushes the conversation toward a more realistic and informed view of what artificial intelligence can and cannot do. It also prompts a reevaluation of the massive investments in scaling these models, suggesting the field would benefit from a deeper understanding of human cognition.
