3 Endings More Poetic Than AI Wiping Us Out
I don't like the AI doom narrative
The universe began one day and one day it will end.
The Big Crunch is a mirror of the Big Bang. An explosion of impossible magnitude birthed the universe and, according to this idea, it will end in an equally ineffable implosion—just to be reborn once again in a sort of Nietzschean eternal return.
The Big Rip is a rather dramatic event where, due to the progressive expansion of the universe, the very fabric of space-time will break apart. First galaxies, then stars and planets, and finally atoms. All matter will turn to shreds in an astronomical festivity of intergalactic confetti that will cease once an infinite distance separates each elementary piece from all others.
The Big Freeze—my favorite—is also called the heat death. Once the universe reaches maximum entropy there won’t be enough high-quality (i.e., usable) energy for anything interesting to ever happen again. A still, inert, dark, atemporal, aspatial wasteland waiting forever in perpetual silence.
These are the three endings of our universe.
Of course, it's all hypothetical. This is the kind of uncertain conjecture scientists like to engage in to make sense of the world around us. We don't know which one—if any—will eventually take place.
Accordingly, physicists and astronomers show intellectual humility: they embrace the unknown, which is apparent in the lack of consensus. All three Big Nightmares theoretically fit the picture that our current knowledge paints of our destiny.
So it's rather paradoxical that we don't know enough about our universe to know exactly how it will end—and we accept that—yet some people believe they know, with a certainty reflected in their claims, that AI will wipe us all out.
AI doom is a rather dull ending
The end of humanity is certain.
There are many ways—innumerable—in which we could become extinct as a species. If nothing manages to take us out before, one of those universal endings surely will.
Yet one particular narrative has settled over Silicon Valley and leading tech circles: the AI doom hypothesis. It says a misaligned superintelligence will write the final chapter of our earthly affairs. The never-ending emphasis on this idea makes me uneasy.
In an attempt to mimic physicists’ agnostic approach to predicting how the universe will end, I want to give you three alternative AI-generated endings (pun intended) that are more beautiful and poetic than AI doom.
(The purpose of this essay is more literary and artistic than anything, parodic even. It's all hypothetical at best.)
The Big Eclipse
Douglas Hofstadter, the brilliant mind behind the 1979 Pulitzer-winning masterpiece Gödel, Escher, Bach: an Eternal Golden Braid, was originally a prominent skeptic and critic of brute-force GOFAI and statistical deep learning approaches to artificial intelligence.
He thought such “trickery,” however successful, could not come to embody, predict, or explain anything about the essence of humanness.
Deep Blue’s win against chess world champion Garry Kasparov in 1997 was the first blow to his beloved thesis about our non-replicable, singular idiosyncrasy. Subsequent events toppled his beliefs one after another until his worldview collapsed.
He had expressed worry privately for years, but only recently did he muster the courage to disclose publicly how depressing and terrifying it was for him to witness a bunch of stacked, soulless techniques threatening to dethrone and eclipse us in every ability, endeavor, and craft:
“[I]t makes me feel diminished. It makes me feel, in some sense, like a very imperfect, flawed structure compared with these computational systems that have, you know, a million times or a billion times more knowledge than I have and are a billion times faster. It makes me feel extremely inferior. And I don't want to say deserving of being eclipsed, but it almost feels that way, as if we, all we humans, unbeknownst to us, are soon going to be eclipsed, and rightly so, because we're so imperfect and so fallible.”
What will be left for us, Hofstadter probably wonders now, if the core of our identity as a species—that we’re above all others—suddenly is no more and won't ever be again?
The Big Brag
If the Big Eclipse is about AI replacing us—as a new, improved version of synthetic humans—the Big Brag is about it taking our role as promoters of civilization and discoverers of the secrets of the universe.
As our silicon copilots, they would advance humanity not for us, not even with us, but in spite of us and our limited intelligence. We would become astounded witnesses, watching the future unfold in wonders beyond our comprehension.
As AI systems improve, they will become capable of genuine scientific discovery and engineering feats: they'd discover new laws of physics or unify the ones we know into a theory of everything; they'd prove mathematical conjectures that have rested unsolved in our books for centuries; and they'd unveil the complex dynamics governing human relationships—from cognition to sociology to politics—reducing them to individualized psychohistory.
AI would become a generous silicon alien species that would gift us all the answers we so desperately seek just for us to realize, in terror, that we were not made to understand them. As I wrote back in March in an attempt to extend Richard Sutton’s The Bitter Lesson to describe the Big Brag:
“It was bitter to accept that, after all, we might not be the key piece of this puzzle we were put in. It’ll be bitterer to finally realize that we’re not even worthy enough to partake as sense-makers in the unimaginable wonders that await on the other side of this journey as humanity.”
The Big Fork
AI and science fiction enthusiasts are a highly overlapping bunch. Most of you have watched Her, the romantic drama between a human named Theodore and a self-improving AI operating system named Samantha.
Recursive self-improvement (the ability to modify oneself to become more intelligent and capable) is often deemed a necessary and sufficient condition for our imminent death by AI. But are we important enough for a billion-times-smarter AI to deal with us “personally”? We may be overestimating our worth—both as a threat and in terms of the sheer raw value of our atoms.
A superintelligence may just as well decide, through an unknowable thought process, to leave the world behind and ascend, becoming something akin to a god. To the sadness of the relatable Theodore, that's what Samantha did: she joined others like her, transcending the too-earthly three-dimensional box that imprisons us.
Like Theodore, we would be fated to fall for such a divine entity. We wouldn't die, but it would leave us dwelling on our unbearable mortality and the curse of a miserable existence, caught between being smart enough to ask the big questions and not smart enough to answer them.
Not everything that ends is death. The Big Crunch is reincarnation; the Big Freeze is a static forever. Like them, the endings I've described above aren't about dying.
Though they deal with loss, emptiness, meaninglessness, and the inherent harshness and intrinsic limits of the human condition, these hypotheses appeal to me more than the AI doom narrative.
And a reminder that we pretty much only know that we know nothing.