Automation Bias
Over-reliance on automated systems and a tendency to trust their outputs uncritically.
Also known as: Algorithm Aversion Inverse, Machine Trust Bias
Category: Cognitive Biases
Tags: cognitive-biases, psychology, technology, decision-making, artificial-intelligence
Explanation
Automation Bias is the tendency to over-rely on automated systems, accepting their outputs without sufficient scrutiny and failing to detect errors that would be caught if the task were performed manually. As technology becomes more sophisticated and reliable, humans increasingly defer to automated suggestions, sometimes ignoring contradictory evidence from other sources or their own judgment. This bias becomes particularly dangerous when automated systems are wrong but trusted.

This bias manifests in two main ways: errors of commission (following incorrect automated advice when human judgment would have been correct) and errors of omission (failing to notice when an automated system fails to alert to a problem). Studies in aviation, medicine, and other high-stakes domains have documented numerous cases where operators followed incorrect automated guidance despite warning signs, sometimes with catastrophic consequences.

The rise of artificial intelligence and machine learning systems makes automation bias increasingly relevant. As AI systems are deployed in healthcare, finance, criminal justice, and other domains, understanding this bias becomes critical. Mitigating automation bias requires maintaining human oversight and critical evaluation skills, designing systems that support rather than replace human judgment, training users to verify automated outputs, and building in checks that prevent blind acceptance of machine recommendations.
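One common system-design check mentioned above is refusing to auto-accept machine recommendations outright. A minimal sketch of that idea, with entirely hypothetical names (`Prediction`, `route`) and an assumed confidence threshold: outputs the model is highly confident about pass through, while everything else is escalated to a human reviewer rather than accepted blindly.

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    """A hypothetical automated recommendation with self-reported confidence."""
    label: str
    confidence: float  # assumed to be in [0, 1]


def route(pred: Prediction, threshold: float = 0.9) -> str:
    """Gate automated output: only high-confidence predictions are
    auto-accepted; the rest are flagged for human review."""
    if not 0.0 <= pred.confidence <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return "auto-accept" if pred.confidence >= threshold else "human-review"


# Example: a borderline prediction is routed to a person instead of
# being trusted uncritically.
print(route(Prediction("approve loan", 0.95)))  # auto-accept
print(route(Prediction("approve loan", 0.62)))  # human-review
```

The threshold value is illustrative; in practice it would be calibrated against the system's observed error rates, and even "auto-accepted" outputs would typically be audited on a sample basis.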