It would be inconvenient if something bad happened, so it can't happen
Scott Alexander's review of 'If Anyone Builds It, Everyone Dies' shows how our reasoning malfunctions when we face scary but uncertain threats whose acceptance would demand changes to how we live. We suddenly become strict scientists demanding airtight proof, applying standards we ignore everywhere else (what he calls 'insane moon epistemology'). This bias helps explain why we sleepwalk into foreseeable disasters while nitpicking warnings about inconvenient possibilities.
In this post, Scott reviews 'If Anyone Builds It, Everyone Dies' by Eliezer Yudkowsky and Nate Soares. The review itself is thoughtful, as is everything Scott writes, but what really caught my attention was the section on the logical fallacies that shape how people think about AI risk. Scott neatly unpacks the psychological flaws that distort our perception of risk, showing how the epistemic standards people apply to AI conveniently shift when the same people consider other existential issues, such as climate change or the global decline in fertility. It's a sharp, insightful piece that's well worth reading.