AI Is a Celebrity Technology
A fitting metaphor, and the consequences it entails, for the most well-known technology of our times.
A few days ago I had a conversation with a couple of reporters who were researching large language models for a hugely popular TV news program. I was explaining to them what draws so many eyes to AI and why it has become the buzzword of the decade. As I was talking, I came up with a metaphor anyone can understand: AI is a celebrity technology.
People, including politicians, investors, company executives, and researchers, treat AI and its derivative applications differently than they treat other technologies with comparable upside (think biotech, quantum computing, fusion energy, space travel). The amount of interest and attention AI gets doesn't correspond to the value it provides, as great as that value may be.
The metaphor is nowhere near perfect, but the pattern is exactly what happens with human celebrities in sports, music, or cinema: they grab most of our attention not because of a rational assessment of the value they provide, but because of something else. Kim Kardashian sits right next to the definition of celebrity in the dictionary, yet if you think about it, you can't come up with a better explanation than "she's famous because she's famous."
Although it'd be unfair to compare AI with Kim K, it's undeniable that AI's popularity is through the roof, while the results can't always explain the hype. This is something I've thought about for a long time without quite finding the word that described the phenomenon. Celebrity is the closest I've got. This article is the explanation behind the assertion: what makes AI a celebrity tech, and what are the consequences of this singular status.
What makes AI a celebrity technology
AI has been subject to intense scrutiny and high expectations since its very conception as a field of research in 1956. At first, no one could predict what we'd be able to build by combining the cognitive sciences with Shannon's information theory, Wiener's cybernetics, and Turing's theory of computation. But even then, AI's goals were grand enough to make the most skeptical raise their eyebrows with interest. Solving intelligence and revealing the deepest mysteries of the human brain (the most complex structure known to us) are ambitious goals indeed.
The deep learning revolution we've been immersed in over the last decade has only reinforced AI's status as a celebrity tech, but it isn't the cause. The source of AI's status lies in a unique combination of features that no other technology shares.
AI’s goals are as ambitious as they get
This is probably the single most influential factor that makes AI stand out. Some argue that AI could be the only means of solving some of humanity's greatest future problems (I disagree with this longtermist view). Building an artificial general intelligence, one that would then evolve into a superintelligence, could theoretically solve any problem we can think of (and those we can't). In the words of philosopher Nick Bostrom, "machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are."
If AI can solve all of our problems, it's no surprise that those who have the fewest problems right now, the wealthy, are fixated on it. And it's precisely those with a lot of money who can influence the field's direction at both the research and application levels. If we've seen so much advancement in recent years, it's because AI is attracting a lot of money from these people. But other "emerging" technologies like biotech, cloud computing, or renewable energy also move a lot of money, so this alone doesn't explain AI's status.
AI points to the core questions of humanity
This factor, although not necessarily the one that moves the money, is the one that makes AI feel special to us. AI and the very meaning of being human are entwined. From its very name, "artificial intelligence," to its ultimate goal of solving the mysteries of intelligence and the brain and building a superhuman "AGI," to its core processes of learning, reasoning, and understanding, AI is inseparable from us.
The concepts we use to describe AI systems and processes stem from the cognitive sciences (e.g., neural networks, deep learning, attention mechanisms), and companies build robots that resemble humans and train models to master language and vision. AI is a field of research structured, in part, after the cognitive sciences. Both are trying to solve the mysteries of the only instance of intelligence we have: us.
While neuroscience shares with AI the fact that it's all about humans, there's an important difference. Neuroscience is focused on discovery, on understanding how biology has shaped us. It's slow in unveiling the secret workings of the brain: the work of an archeologist of the mind. In contrast, AI is about inventing new "humans"; it's about building and creating. Each month can bring novelty and excitement. It's faster, and that makes it more attractive than other disciplines concerned with our cognitive prowess.
Our perception of AI is shaped by popular culture
Culture evolves in parallel with technological progress and can radically influence the common understanding of science or tech. There are innumerable books and movies about AI, which distort and perpetuate particular narratives of what it is and what it could be. The collective imagination is significantly different for AI than for neuroscience or biotech, and the role of popular culture is key here.
If I mention the future of AI, who doesn't think of Skynet from Terminator, HAL 9000 from 2001: A Space Odyssey, or the three laws of robotics from I, Robot? And that's just the most visible level of cultural influence; we're surrounded by ubiquitous, subtler pieces of culture that shape our conceptualization of AI in the same way. Try it: go to Google Images and search for "AI." You'll find only humanoid robots, glowing brains, and electric blue backgrounds. (Some initiatives are trying to transform this collective imagery into a more faithful depiction of what AI is.)
I'd even argue that culture has a second-order effect on our understanding of AI. Because there are so many works and stories about robots and intelligent systems, it's very easy for us to imagine (even if wrongly) how the world would look if we lived alongside these systems. Culture frees and empowers our imagination and builds reality. We do the same with other super-technologies like time travel or faster-than-light spaceships.
It's also because of cultural influence that AI makes us emotional. Very few people can think rationally about what AI can give us. Most are either afraid of what's to come (job losses, existential risks) or hopeful for a utopia that would put an end to the world's suffering.
AI now works better than ever before
Finally, and this ingredient may seem obvious but isn't: current AI paradigms work better than any that came before. Deep learning-based systems, powered by supercomputers and trained on enormous amounts of data, have solved problems we believed were impossible for a machine, like performing detection and recognition tasks beyond expert human level, or mastering games, language, and now even artistic creation.
The breadth of AI's applications and deep learning's successes back up the early claims about what this paradigm would achieve and give investors strong reasons to keep pumping money in. This isn't trivial because, historically speaking, AI has been a rollercoaster of successes and failures. The fact that we're living through an era of constant new applications, each working better than the last, makes these years the golden age of AI.
AI could have the wildest goals, touch the deepest questions of humanity, and be seamlessly integrated into our culture but, if it didn’t work, I wouldn’t be writing this article today.
What are the consequences of AI’s celebrity status
No other technology brings together such a particular combination of features, and those are the reasons why we go wild thinking about AI. (Of course, those features don't make AI inherently celebrity-like; it's our tendency to celebritize things that makes it so.)
However, remember that celebrity isn't necessarily a good thing. AI has great features, but they're accompanied by unwanted consequences that we all suffer, and that can be counterproductive for the field and for those working hard to fulfill the world's expectations.
Companies are incentivized to use and sell AI—at all costs
The immediate effect of AI's popularity is that enterprises try to integrate AI into their processes, products, or services. This means that companies that don't need AI will shoehorn it in just to tell consumers and investors they're AI-powered, even if it means killing a flea with a sledgehammer. Others will use simple statistical modeling to analyze their data and pass it off as AI.
Another story is that of companies that don't just use AI but create it: the Googles and Metas of the world. These have strong economic incentives to commercialize AI systems regardless of whether the tech is ready. In a recent article, I mentioned a few newsworthy cases of AI systems deployed into the world with harmful consequences: crime prediction systems in Chicago, image classification software in Google's and Facebook's photo apps, and TikTok's For You recommendation algorithm.
The repercussions vary, from Black people being labeled "primates" to several kids dying because TikTok's algorithm promoted a deadly challenge, but in all cases, companies make it clear that the incentive to have AI in the wild is greater than the incentive to put safety first.
People get desensitized and thus unprepared for the future
Just as tech companies are incentivized to be all about AI, news outlets are incentivized to talk about it, and they often shamelessly exaggerate the results or the potential of these systems. The reason is a concept you've probably heard of already: we live in the "attention economy." Headlines and news articles compete to grab our attention. If you read that "Tesla will soon have a humanoid robot," you're more likely to click than if it says "Tesla wants to build a humanoid robot but doesn't know when it'll be ready."
The more we read about AI, the more our expectations grow, and the further reality falls behind them. This has happened to me: getting my knowledge of AI from these sources, I eventually realized they were just trying to keep me reading, even with blatant overstatements. That reduced my interest, and I grew tired of reading about AI through clickbaity articles. I could simply find other sources, but many people who feel the same way won't.
Most people interested in AI fit in this group: they care about it but don't want to, or don't have the means to, seek out alternative sources that contrast with the embellished reality their usual sources convey. These people will develop a wrong idea of what's happening in AI. That can produce a serious disconnect between people working on AI and people outside it, which is bad for both groups and for the field in general.
AI hype: When you’re a celebrity, hype never leaves you
Everything I've mentioned in this "consequences" section relates to AI hype: the misalignment between the reality of the AI field and the perception of the general public. These inflated expectations are created by AI's celebrity status and reinforced by (most, though not all, of) the researchers who build it, the tech companies that sell it, the enterprises that use it, the politicians who talk about it, and the investors who pump money into it. Everyone who works in AI and isn't explicitly talking about the problems of AI hype is, in one way or another, participating in making AI a celebrity technology.
The ultimate consequence for AI when it can't live up to the hype is de-celebritization, also called an AI winter. No other technology is as prone to funding winters as AI. Because of its unique characteristics, the value we ascribe to AI becomes decoupled from what it can actually provide, and eventually it all crumbles down.
This has happened twice in the past, and nothing stops it from happening again (even with deep learning's remarkable streak of successes). We idealize AI the way we idealize the celebrities we like. Investors, who ultimately decide whether AI succeeds or fails, wouldn't acknowledge that their judgment was disproportionate. They'd simply decide AI wasn't that impressive after all and withdraw their money, until we re-celebritize AI and start the cycle once again.