AI Problems Index
Misunderstood
Not Really an AI Problem

LLMs hallucinate, so they're useless

RAG + hallucination-aware tuning sharply cut error rates.

Description

While language models do sometimes generate incorrect information ('hallucinations'), techniques like Retrieval-Augmented Generation (RAG) and hallucination-aware tuning have significantly reduced error rates.
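To make the technique concrete, here is a minimal, illustrative Python sketch of the RAG loop: retrieve relevant passages, build a prompt grounded in them, then generate. The corpus, the lexical score function, and the call_llm stub are placeholders invented for this sketch rather than any specific system's API; real deployments typically use dense-vector retrieval and an actual model endpoint.

```python
# Minimal RAG sketch: instead of answering from the model's parametric memory
# alone, retrieve supporting passages and include them in the prompt so the
# answer can be grounded in (and checked against) that context.
# CORPUS, score(), and call_llm() are illustrative placeholders.

from collections import Counter

CORPUS = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Retrieval-Augmented Generation pairs a retriever with a text generator.",
    "Hallucination-aware tuning penalizes unsupported statements during training.",
]

def score(query: str, passage: str) -> float:
    """Toy lexical-overlap relevance score (real systems use dense embeddings)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(CORPUS, key=lambda passage: score(query, passage), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a hosted LLM API)."""
    return f"[model output for a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "When was the Eiffel Tower completed?"
    print(call_llm(build_prompt(question, retrieve(question))))
```

The reliability gain comes mainly from the prompt-construction step: requiring the model to answer from retrieved sources, and to abstain when those sources are insufficient, reduces the room for unsupported claims.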

Origin of the Concern

The concern stems from observations that large language models sometimes generate incorrect or fabricated information.

What's Misunderstood

The leap from "LLMs sometimes hallucinate" to "LLMs are useless" overlooks mitigation: modern techniques such as Retrieval-Augmented Generation (RAG) and hallucination-aware tuning have dramatically reduced error rates, making these systems increasingly reliable.

Empirical Context

Song et al. (2024) demonstrate that RAG and hallucination-aware tuning significantly reduce error rates in language models.