The field of AI could have many names. "Artificial intelligence" is probably the least accurate of them all, and also the reason why it's so successful.
Naomi Klein, an author, professor, and left-wing political activist, well-known for her books "The Shock Doctrine" and "This Changes Everything," has recently turned her attention to AI. In an essay published in The Guardian earlier this week, she eloquently criticizes the term "hallucination" (typically used to refer to AI chatbots' tendency to make up information) and swings it back, like a fencing master wielding a double-edged sword, against the big tech industry, the CEOs promoting AI, and their careless use of this and other anthropomorphizing terms (I've done it too; no hiding here).
Why not call it "algorithmic junk" or "glitches," she wonders. She offers an explanation: the term serves a twofold purpose. By acknowledging the limitations of AI systems with words like "hallucination," which evoke human psychology, "AI's boosters are … simultaneously feeding the sector's most cherished mythology," i.e., that we're well on the road toward the natural next step in evolution: human-level (and beyond) artificial agents.
But, she says, “it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations.” She’s referring to the claims that AI—once it becomes AGI—will be the solution to the problems that pain the world, from poverty and inequality to climate change, and will elevate us to a post-scarcity utopia of leisure. Of course, as an avowed capitalist critic, she points out that under the system that governs us, this won’t happen:
“Our current system … is built to maximize the extraction of wealth and profit – from both humans and the natural world—a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI—far from living up to all those utopian hallucinations—is much more likely to become a fearsome tool of further dispossession and despoliation.”
Whether you agree with her on this is up to you. I'll take over from here to go in a different direction (there's no way I could improve her political analysis, so I won't dare try). But I like how she used the term hallucination to reveal that we've come full circle: We thought it was the chatbots making stuff up, but it's their creators instead. We thought they were creating intelligent agents, but instead they're popularizing human-sounding words to fuel the promise.
Interestingly, hallucination, through its meaning and through its use, explains both phenomena. How far back does this entwined double-trend go?
A seventy-year-old ‘enticing cover story’