This is an insightful review by Scott Alexander of the book If Anyone Builds It, Everyone Dies. While the review itself is interesting, what caught my attention was the section on how most people think about long-term risks like AI danger. It stood out to me because of Bhuvan’s Law: “Any discourse about artificial intelligence is indistinguishable from talking out of your ass.”
Everybody has an opinion on AI; no two people mean the same thing when they say AI, and most of the discourse is a precious waste of words and, more importantly, of human calories that would be more valuable in the bodies of people in impoverished countries. The ease with which people talk about things they have no bloody clue about never ceases to amaze me.
That’s why Scott’s dismantling of the logical inconsistencies, fallacies, and downright stupid reasoning around AI risk stood out to me. Unlike most people, even those working in AI, he’s done his homework before talking about it.
Our brains completely malfunction when someone tells us something scary that would require us to actually change our lives. Scott calls it “insane moon epistemology”—basically, when faced with a threat that’s both uncertain AND would be a huge pain to deal with, we suddenly become the world’s strictest scientists. We demand mathematical proof for everything, we nitpick methodology, and we point out that “this has never happened before” as if that settles it.
But we don’t apply these same standards to anything else.
These mental gymnastics follow predictable patterns. The “nothing can happen for the first time” argument treats the absence of historical precedent as proof of impossibility—until the same person worries about novel threats like social media addiction or gene editing gone wrong. The “one failed prediction disproves all predictions” gambit cherry-picks examples of forecasting failures while ignoring the countless accurate warnings that prevented disasters.
Perhaps most common is the “real danger is X” mental yogasana. When presented with concern about superintelligent AI, someone objects that the real danger is near-term AI bias. When warned about climate tipping points, they insist the real threat is ocean acidification. They mistake showing that X is dangerous for showing that the original concern isn’t, even when both could easily be true.
Most of us handle speculative risks with what Scott calls the “shrug strategy”: watchful waiting until a threat proves itself clearly enough to justify action. This approach has prevented countless overreactions to false alarms. The population bomb panic of the 1970s, limited as it was, still led to forced sterilizations; a truly global panic could have been far worse.
But the strategy also enables sleepwalking into real disasters. COVID-19 offers the clearest recent example. Even when the virus had escaped containment in China, even when exponential spread was mathematically inevitable, experts insisted concerns were “speculative” and warned against focusing on hypotheticals when “real” problems like xenophobia needed attention.
The deeper issue isn’t just cognitive bias; it’s that most people can sustain concern for only one or two major issues at a time. Scott suggests we each have limited “crusade slots,” and once they’re occupied by climate change or AI safety or political polarization, those slots resist new entries. Having already committed to one transformative concern, we unconsciously raise the bar for others.
Read the full review twice. It’s worth it.