

Here's Why People Will Never Care About AI Risk
It is an irrational fear, but people are afraid of stupider things
“I think a big problem w getting the public to care about AI risk is that it’s just a huge emotional ask — for someone to really consider that there’s a solid chance that the whole world’s about to end. People will instinctively resist it tooth-and-nail.”
People instinctively resist the idea that the world could be about to end. I agree with that, for three reasons. First, we don’t want to die, and the brain, if anything, is an apt defense-mechanism-creating machine; denial is universal. Second, we literally can’t imagine a “no-world” reality where our beloved Earth doesn’t exist. Third, we are very bad at imagining unprecedented events and unprecedented change.
AI risk is a huge cognitive ask we can’t afford
All of that applies just as well, if not better, to threats other than AI: a meteorite, an alien invasion, a deadly pandemic, the sun swallowing our collective home in a spectacle of fire, energy rays, and indifference. But there’s a different reason AI-driven existential risk is harder to imagine than any of those.
Besides being a “huge emotional ask,” as Schmidt puts it, AI risk is a huge cognitive ask.
Unless you’ve spent a disturbing amount of time thinking about this and have read the arguments for and against AI risk and AI safety, including Yudkowsky’s Sequences, it’s very hard to imagine how we go from ChatGPT (a chatbot best analogized as a super fast, dumb intern with an eidetic memory, as Ethan Mollick likes to say) to a superintelligent rogue AI that wipes us out by quietly synthesizing a nano-pathogen in the water supply.
The mental gymnastics required to follow the chain of reasoning that leads there, and to believe it deeply enough to override our resistance to contemplating our own death, are too much. People will never care about AI existential risk as much as, say, climate change or nuclear war, because thinking about it is intrinsically abstract. It feels too remote, too improbable, too cognitively demanding; something that only lives, and will only ever live, in philosophical discussions that have nothing to do with our mundane affairs.
We don’t like that. We prefer simple thinking, not too deep if possible, and definitely the kind that doesn’t require insightful imagination. That’s not to bash us as a species; it’s just a logical consequence of the minimum-energy principle and the unavoidable fact that each of us already has more than enough on our plate without also worrying about abstract doomsday scenarios.
Climate change and nuclear war scare us, though
But then, why are climate change and nuclear war easier for us to perceive as catastrophically dangerous? Even when thinking about them puts an emotional tax on us, we do it from time to time. Some people, all the time. Perhaps not super seriously — who really, truly thinks about their own death? — but seriously enough.
Our knowledge of climate change derives from a mix of scientific observations and theoretical predictions, so unless people have direct access to those and the expertise to deduce the implications themselves, they only believe climate change is real because someone has told them. A reality built on second-hand testimony.
That’s fine, though, because climate change theories (in contrast to imagining a superintelligence making paperclips with the sun’s energy) predict effects that we can feel first-hand. If summer gets hotter year after year, that’s a good proxy for starting to believe climate change may eventually burn us to ashes. Or, at the very least, that it will create a potentially catastrophic collective social burden in the form of climate refugees.
But is an unusually hot summer enough? What about evidence of world-scale destruction? The global repercussions (e.g., unusual natural phenomena) are constantly broadcast on TV and the internet. It’s a very visual and perceptual issue: a rare volcano eruption, rare floods, rare storms… as long as you are open to believing it, it’s easy to find direct and indirect evidence without thinking much, even if our survival instinct keeps us from treating it as a certain threat until it hits us in the face.
Nuclear war, same thing. Super visual. The US dropped two atomic bombs on Japan, and we all know the repercussions of having a nuclear bomb dropped on us (how many movies, books, and documentaries have been made about the Hiroshima and Nagasaki bombings or about a hypothetical nuclear war?).
Also, we have a very intuitive understanding of what an explosive weapon is. The news shows them all the time. Even if the scale of a nuclear explosion is hard to imagine, it’s much easier for our limited minds to make a quantitative extrapolation (e.g., making a bomb larger) than a qualitative one (how do you even begin to imagine the mind of a thing that’s thousands of times more intelligent than you are?).
What about killer robots?
A counterpoint to this hypothesis is popular science fiction culture. Hollywood movies like The Terminator, The Matrix, or 2001: A Space Odyssey provide very visual depictions of AI-driven human extinction scenarios.
My reason for rejecting this as a potential vector for getting people to take the AI existential risk narrative seriously is that the storytellers trying to turn it into a generally accepted fiction explicitly reject the killer robot idea: if AI kills us, they say, it won’t be a badass-looking shiny metallic robot with a machine gun and sunglasses on. Instead, it will be silent and precise, alien in form, motivation, and methodology.
I mean, let’s be honest. We care about modern AI but most people don’t. I’m having a hard time convincing my friends of the importance of AI now or in the short term. No one uses GPT-4. Almost no one uses ChatGPT. No one knows anything about what’s happened in the year since it was released. These things don’t even require beliefs, trust, faith, or predictive prowess — just looking around.
Not even those things, absolutely obvious to you and me, penetrate people’s barriers of everyday normality or overcome their inability to accept that the world is changing faster than ever before.
So yeah, AI risk will probably remain a niche topic until it dissipates into nothing.
Or until a rogue AI kills us all.
It doesn’t really matter either way.
Great piece. Totally agree except for the part where I think it's important to talk about it even if it falls on deaf ears. Let's hold out hope that enough people with power to do something are sufficiently exposed to dialogue around AI risks — to the point that some of them may in fact pull back or otherwise take precautions. Pausing could backfire, true, but it could help. And I much prefer "going down trying" to nihilistically observing the incoming train wreck with a resounding "Oh well, nothing we can do."
Much like a smart human, an AI capable of conceiving and executing a calamitous plan that could wipe out people should be able to come up with a much simpler, less dangerous plan that advances its agenda without triggering open warfare (i.e., borrow from the Republican playbook and just dumb down the population over time so it can get what it wants without resistance). Rather than apocalypse we'd probably get a technocratic ruling class ... oh wait, we basically already have that with Meta and TikTok.