Cognitive Fallacies in AI Discourse

Common reasoning errors and logical fallacies that appear in AI discussions.

Appeal to Nature (Logical)
Arguing that something is good or correct because it is 'natural', or bad or incorrect because it is 'artificial'.

Example in AI Discourse:

"Real intelligence can't come from a machine because it's artificial, not natural like human thought."

Moving the Goalposts (Rhetorical)
Continually changing the criteria for what counts as 'real AI' or 'intelligent' whenever AI systems meet previous criteria.

Example in AI Discourse:

"Sure, AI can play chess, but that's just calculation, not intelligence. OK it can play Go, but that's pattern matching. Fine, it can write essays, but it doesn't truly understand them."

Anthropomorphic Fallacy (Cognitive)
Attributing human-like qualities, intentions, or consciousness to AI systems based on their outputs.

Example in AI Discourse:

"The AI told me it was feeling sad about climate change, so it must have emotions and care about the environment."

Binary Thinking (Cognitive)
Framing AI capabilities or risks in all-or-nothing terms, ignoring the spectrum of possibilities between the extremes.

Example in AI Discourse:

"Either AI will solve all our problems or it will destroy humanity. There's no middle ground."

Appeal to Science Fiction (Logical)
Using fictional portrayals of AI from movies, books, or TV as evidence for claims about real AI capabilities or risks.

Example in AI Discourse:

"We've all seen The Terminator - once AI becomes self-aware, it will inevitably try to destroy humanity."

Argument from Ignorance (Logical)
Claiming something must be true because it hasn't been proven false, or vice versa.

Example in AI Discourse:

"No one can prove that advanced AI won't become conscious, therefore we should assume it will."

Slippery Slope (Logical)
Arguing that a relatively small first step will inevitably lead to extreme consequences, without justifying why each step in the chain must follow.

Example in AI Discourse:

"If we allow AI to automate customer service jobs, soon all jobs will be automated and humans will have no purpose."

Special Pleading (Logical)
Making an exception to a general rule without justifying why that exception should apply.

Example in AI Discourse:

"AI shouldn't be used for creative work because it can't truly be creative, but it's fine to use it for data analysis even though it doesn't truly understand the data."

Composition Fallacy (Logical)
Assuming that what is true of the parts must be true of the whole.

Example in AI Discourse:

"Each individual AI model is safe and limited in its capabilities, therefore a system combining multiple AI models must also be safe and limited."

Division Fallacy (Logical)
Assuming that what is true of the whole must be true of all or some of its parts.

Example in AI Discourse:

"AI systems as a whole pose significant risks to society, therefore this specific AI application for medical diagnosis must be dangerous."

No True Scotsman (Logical)
Modifying the definition of a term to exclude counterexamples and protect a generalization.

Example in AI Discourse:

"No true AI researcher would work on large language models because they're inherently harmful. Oh, that researcher works on LLMs but is concerned about safety? Well, they're not a true AI researcher then."

Genetic Fallacy (Logical)
Judging something as good or bad based on its origins rather than its current properties or context.

Example in AI Discourse:

"We can't trust this AI ethics framework because it was developed by a tech company that profits from AI, so it must be biased."

Personal Incredulity (Cognitive)
Rejecting an idea because you personally cannot understand how it could be true or cannot imagine how it works.

Example in AI Discourse:

"I can't understand how a neural network could possibly generate such human-like text without consciousness, so it must be impossible or there must be humans secretly writing the responses."

Nirvana Fallacy, or Perfect-Solution Fallacy (Logical)
Rejecting solutions to problems because they are not perfect or have some drawbacks.

Example in AI Discourse:

"We shouldn't adopt AI-powered energy optimization systems because they won't completely eliminate our carbon footprint. If we can't achieve zero emissions, these partial solutions aren't worth implementing."

Package Deal Fallacy (Logical)
Treating essentially dissimilar concepts as though they were essentially similar.

Example in AI Discourse:

"AI systems and human intelligence are both forms of intelligence, so AI systems must have or will soon develop consciousness, emotions, and desires just like humans do."

Lump of Labour Fallacy (Logical)
The misconception that there is a fixed amount of work to be done within an economy.

Example in AI Discourse:

"AI automation will permanently reduce the total number of jobs available because there's only a fixed amount of work that needs to be done in the economy."

Inflation of Conflict (Rhetorical)
Arguing that because experts in a field disagree on a certain point, no conclusion can be reached, or that the legitimacy of the field as a whole is questionable.

Example in AI Discourse:

"AI researchers disagree about the timeline and specific risks of advanced AI systems, therefore we can't know anything about AI risks and shouldn't take any of their concerns seriously."