ELIZA Effect
The tendency to unconsciously attribute human-like understanding and emotions to computer programs.
Also known as: Eliza effect, ELIZA illusion
Category: Psychology & Mental Models
Tags: ai, psychology, cognitive-biases, human-computer-interaction
Explanation
The ELIZA effect is the tendency for people to unconsciously assume that computer programs possess human-like understanding, empathy, and intelligence, even when they know the system is just software. Named after ELIZA, the 1966 chatbot created by MIT computer scientist Joseph Weizenbaum, the effect was first observed when users of ELIZA's DOCTOR script (a simple pattern-matching program that mimicked a Rogerian therapist) began forming emotional attachments to the program and attributing genuine understanding to its responses.
Weizenbaum was disturbed to find that even people who understood ELIZA was merely matching patterns and reflecting statements back would still confide in it, become emotionally invested in conversations, and resist the idea that the program did not truly understand them. His secretary reportedly asked him to leave the room so she could have a private conversation with the program.
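The pattern-matching and reflection ELIZA relied on can be sketched in a few lines. The rules below are illustrative stand-ins, not Weizenbaum's original DOCTOR script: each rule pairs a regular expression with a response template, and captured fragments are echoed back with first- and second-person words swapped.

```python
import re

# Pronoun swaps so the echoed fragment sounds addressed to the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# (pattern, response template) pairs in the spirit of the DOCTOR script.
# These rules are illustrative examples, not the historical script.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # catch-all keeps the conversation moving
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Return the first matching rule's template, filled with the reflected capture."""
    text = statement.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I need a break"))          # -> Why do you need a break?
print(respond("I am feeling anxious"))    # -> How long have you been feeling anxious?
```

The program has no model of meaning at all: it matches surface patterns and reflects the user's own words back, yet the resulting replies are exactly the kind that users experienced as understanding.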
The psychological mechanisms behind the ELIZA effect include: anthropomorphism (our evolved tendency to attribute human traits to non-human entities), the conversational contract (we assume conversation partners understand us), cognitive dissonance (knowing it is software but experiencing it as understanding), and projection (filling in gaps with our own expectations of what a good listener would think).
In the age of large language models, the ELIZA effect has become dramatically more potent. Modern AI systems produce far more convincing and contextually appropriate responses than ELIZA ever could, making the attribution of understanding even more compelling. This has practical consequences: users may over-trust AI outputs, share sensitive information inappropriately, form unhealthy emotional dependencies, or mistake fluent text generation for genuine comprehension.
Awareness of the ELIZA effect is essential for responsible AI use. When interacting with AI, remembering that fluency is not understanding, that agreement is not validation, and that conversation is not connection helps maintain appropriate boundaries. The effect also has design implications: AI systems that encourage emotional attachment without safeguards risk exploiting a fundamental human cognitive tendency.