LLMs don't understand anything

Language models beat humans on many reasoning benchmarks, and 'understanding' is itself a philosophically contested notion.

Description

The claim that language models don't 'understand' anything runs into two difficulties: the models now outperform humans on many reasoning benchmarks, and 'understanding' itself is a philosophically contested concept with no agreed operational definition.

Origin of the Concern

The concern stems from claims that language models are merely statistical pattern matchers: systems that predict the next token from co-occurrence statistics, without any real grasp of what the tokens mean. The 'stochastic parrots' critique (Bender et al., 2021) is the best-known statement of this view.
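
To make the 'statistical pattern matcher' characterization concrete, here is a minimal bigram model in Python. It is an illustrative sketch, not how any production LLM works: it predicts the next token purely from co-occurrence counts, which is the bare mechanism the critique evokes (real models share the next-token objective but at vastly larger scale and with learned representations rather than raw counts).

```python
from __future__ import annotations

import random
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict[str, Counter]:
    """Count, for each token, which tokens follow it in the corpus."""
    tokens = corpus.split()
    counts: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def next_token(counts: dict[str, Counter], token: str) -> str | None:
    """Sample a continuation in proportion to observed frequency."""
    followers = counts.get(token)
    if not followers:
        return None
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights)[0]

# Toy corpus: the model "learns" only which words tend to follow which.
counts = train_bigrams("the cat sat on the mat and the cat ate the fish")
print(next_token(counts, "the"))  # e.g. "cat" -- frequency, not comprehension
```

Whether scaling this objective up produces something deserving the word 'understanding' is precisely the point in dispute.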

What's Misunderstood

The blanket claim conflates behavior with mechanism. Modern language models outperform humans on many reasoning benchmarks, yet those benchmarks grade answers against a key rather than inspecting internal states; meanwhile 'understanding' itself is philosophically contested and lacks a clear definition, so the claim is difficult to state in a falsifiable form.
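
One way to see why benchmark scores cannot settle the question by themselves: a benchmark harness is just an answer-key comparison, as in the sketch below. Here `model_answer` and the test items are hypothetical stand-ins, not any real benchmark's API or data; whatever process produces the matching strings, the score is identical.

```python
def model_answer(question: str, choices: list[str]) -> str:
    """Stand-in for querying a model; a real harness would call an API."""
    return choices[0]  # placeholder policy

items = [
    {
        "question": "If all bloops are razzies and all razzies are "
                    "lazzies, are all bloops lazzies?",
        "choices": ["yes", "no"],
        "answer": "yes",
    },
]

# The score is computed purely from matches against an answer key: it
# certifies the model's behavior, not any particular internal state.
correct = sum(
    model_answer(item["question"], item["choices"]) == item["answer"]
    for item in items
)
print(f"accuracy: {correct / len(items):.2%}")
```

This is why high benchmark performance undercuts the strong 'no understanding' claim without proving its opposite: the measurement is behavioral on both sides.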

Empirical Context

This is a theoretical issue with no single definitive paper. It draws on work by researchers such as David Chalmers and François Chollet, who explore the philosophical dimensions of machine understanding; Chollet's ARC (Abstraction and Reasoning Corpus) benchmark was designed specifically to test the kind of abstraction that 'understanding' is often taken to require.