Generative AI: My Enhancement, Your Replacement
A response to Noah Smith and roon's post, "Generative AI: autocomplete for everything."
Today we're up for a challenge.
A couple of weeks ago I read this thought-provoking article co-written by Noah Smith and roon on AI and automation: “Generative AI: autocomplete for everything.”
The thesis they defend, rather eloquently, is that generative AI isn't going to destroy jobs or replace humans to the degree less optimistic people think. They claim AI-driven automation shouldn't be so fear-inducing because the risks and harms it entails are vastly overstated: “AI is far more likely to complement and empower human workers than to impoverish them or displace them onto the welfare rolls.”
They acknowledge this isn’t a black-and-white issue and recognize that “there will certainly be some people who lose out,” but they don’t believe there’s anything special about this new wave of AI. As they see it, “Generative AI … will largely behave like the productivity-enhancing, labor-saving tools of past waves of innovation.”
In case it isn't clear from my previous posts on the topic, let me say it plainly: I disagree (not with everything, but with a major part).
Yet, I found their arguments compelling and their stance worth discussing. That's why I decided to write this article (I couldn't publish it earlier because ChatGPT kept me busy!).
I'll explain my stance on the topic using their argumentation as a starting point. I don't claim to be right on this—I think there's enough uncertainty to make room for debate—but they're missing key aspects of AI and automation that are worth mentioning—aspects that, as I see it, radically transform the picture they draw.
This article-tandem is a great opportunity to hear both sides of the story, with arguments and counterarguments (similar to what I did here with Gwern's arguments on AI's potential to pollute the internet), which you wouldn't be able to do if I simply laid out my opinion.
Of course, my conclusions are as biased as theirs, so you’re the ultimate judge. I'd love to see you continue the debate in the comments!
Clarifications on my approach
Before I start, let me make two clarifications.
First, Smith and roon focus their arguments on generative AI (hence the headline). Although they use robots and other non-AI automation technologies to exemplify, they frame the issue as a white-collar problem that targets especially artists, writers, and coders. Also, their predictions refer to the near-term future—who knows what the world will look like in 50-100 years. I accept both premises for my arguments, too.
Second, their blog post is divided roughly into two parts. In the first two sections, they lay out the arguments to defend their thesis. They use the last few sections to illustrate what the world (and the workplace) would look like if their predictions turned out to be right. I will focus exclusively on the first part: I’ll counterargue their claims and develop my own, so there’s no need to also explore the second-order implications of their predictions.
The Algorithmic Bridge is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
AI takes over tasks, not jobs
Smith and roon start their argumentation with a claim: People’s fear of replacement is “unwarranted”. AI is “far more likely” to act as an enhancer than as a replacer. It’ll “empower human workers” instead of “impoverish[ing]” them. This prediction is the main thesis of their article. I’ll try my best to explain where and how this claim breaks apart.
You probably realize that this claim stands in sharp contrast with the widespread sentiment that AI automation is dangerous for our jobs. The issue with the typical arguments behind this sentiment, they say, is that they view the labor market at the level of jobs instead of tasks. This is their core argument: “AI doesn’t take over jobs, it takes over tasks.” And, if that’s the case, it’s unreasonable to believe it has the capacity to replace us, which would make people’s fears unfounded.
I agree with their core argument. If you think about it, it’s quite intuitive. For instance, ChatGPT doesn’t decide what I write. I do. Decision-making and writing are distinct tasks in my job as a writer. When I decide to use ChatGPT, it enhances me. So far so good.
We only talk about “AI taking our jobs” because it’s the usual expression. However, this framing adds a subtle component of absolute danger that isn’t there—there are a lot of things any human can do that AI can’t.
What I don’t accept is their defense that the task-vs-job reframe is a sufficiently strong argument to reject the possibility that AI could act as a replacer (in contrast to as an enhancer). Let’s go step by step.
To reinforce their central argument, Smith and roon resort to historical evidence: “If AI causes mass unemployment among the general populace, it will be the first time in history that any technology has ever done that.” Technological innovations, generative AI included, don’t substitute for humans in their jobs; they simply change our way of doing things.
Let’s agree here. Mass unemployment is unlikely to happen (unless we consider a longer time window into the future so that AGI-related concerns about complete human unemployment start to gain relevance).
Generative AI—even considering its wild speed of development—isn’t that different from any other previous innovation wave (let’s accept this assumption although I have my reservations). If mass unemployment has never happened before, it’s hard to argue it’ll happen this time, so let’s give them this one.
However, I disagree with the relationship they implicitly assume exists between the idea of AI taking over tasks and people’s fear of replacement. Their argument goes like this: AI is no different than other innovations (it takes over tasks, not jobs). Previous innovations haven’t caused mass unemployment, so it won’t happen this time. People care about something that won’t happen, therefore their fear is “unwarranted”.
Here’s my take: People don’t really care about mass unemployment. They care insofar as the expression implies that they’re also inevitably impacted—they care about whether they, specifically, will lose their jobs. The validity of that argument (i.e. there has never been tech innovation-driven mass unemployment in history so it’s unlikely to happen now) is irrelevant because that’s not what people fear.
Indeed, not everyone will lose their jobs to AI. But people’s fear points to real danger—even if not everyone, some people will lose their jobs to it.
(Later, I’ll lay out my central counterargument: why AI taking over tasks doesn’t rule out the possibility of indirect human replacement, which explains why the danger is real and hence the fear is justified.)
That’s their fear. Smith and roon dismiss it as “unwarranted” on the basis that there are no previous instances of mass obsolescence. Yet, when examined closely, this historical argument clearly doesn’t apply to the discussion because it doesn’t account for partial unemployment.
They double down with a subsequent claim: “[P]retty much everyone who wants a job still has a job.” Again, true. The percentage of unemployment doesn’t decline as technology advances—it stays stable within the normal fluctuations. If anything, “evidence shows that adoption of … automation technology … is associated with an increase in employment at the company and industry level.”
This seems to rule out the idea of partial unemployment—the argument I used to counter their previous claim—and dismisses people’s complaints (“did you lose your job? Just find another one!”). But it’s tricky: The devil is in the details.
An eventual increase in employment opportunities doesn’t necessarily entail immediate access to them for those who suffer replacement. In the long term, innovation creates more jobs (just look around), but the people who live through it and endure the immediate consequences may not be the best suited for the newly created tasks. It’s often the case that they don’t benefit from innovation but have to survive in spite of it.
Related to this, having access to a job doesn’t equal having access to a good job. If you’re a taxi driver and self-driving cars replace you, you can always go to McDonald’s (nothing against fast-food workers!) but that’s a weak argument in favor of automation.
If you measure the costs of AI automation by the number of available jobs, you may find that yes, it doesn’t seem that impactful after all. However, if you measure the subjective decrease in life quality for those who've been displaced you’ll get a very different result.
People’s fear isn’t just “I’ll lose a job.” Most know they’ll be able to find something else. Their fear is, more accurately: “I’ll lose my job.” A job isn’t just a way to earn a living (although in most cases that’s its main purpose); it also serves as a carrier of meaning in our lives. Not anything goes.
(Of course, many people don’t work where they’d like to anyway. That’s a larger sociopolitical and economic problem that touches all corners of the human condition and is way beyond the scope of this essay.)
My enhancement, your replacement
As I said, I agree with the idea that AI takes over tasks instead of jobs. The problem is that it only tells half the story.
If I think of my experience with generative AI as a writer, I can see where they’re coming from. I’ve used ChatGPT, GPT-3, Lex, and other AI writing tools and none of those can do my job for me.
I’ve used these tools to help me come up with ideas, co-write a couple of pieces, and serve as self-exploratory tools. None of that comes close to even hinting at being a replacement for me. They enhance, in one way or another, part of my job. They replace me on some tasks.
One-to-one, no instance of generative AI (not now nor in the near future) can replace the job a white-collar person does. Being a team of one, I’m irreplaceable by any sort of current—or even realistically imaginable—AI. Generative AI is definitely an enhancer for me.
However—and here’s where I fundamentally disagree with Smith and roon—I believe that task enhancement for one person can, down the line, imply job replacement for another. And I believe that, as generative AI improves, this will become quite common.
The “AI takes over tasks, not jobs” idea is great if we don’t explore the consequences further. If we do, a darker perspective unfolds.
Smith says that “dystopia is when robots take half your jobs. Utopia is when robots take half your job.” As catchy as that is, I believe a more truthful phrase—hopefully equally catchy—is: my enhancement, your replacement.
To explain what I believe, let me tell you a story.
You’ve been a tech writer all your life.
Recently, you got a job at your dream magazine (let’s call it “TYN”). TYN—well-known for its tendency to adopt new AI tools to empower its writers—has employed a team of 10 tech writers for years. They focus on the hottest space. Right now, that’s generative AI.
One day, the news arrives: OpenAI, the popular AI company, has decided to commercialize GPT-5, an amazing AI language model that writes prose like an elite journalist.
TYN, hungry to leverage GPT-5, decides to contract OpenAI’s services. GPT-5 provides such an edge that your team can now produce 200 articles per month instead of 20—a 10x improvement!
You aren’t afraid, as you recall reading somewhere that AI takes over tasks, not jobs. You’re sure: This is just an enhancement. It’s my empowerment.
It may write your articles, but only you can tell if the topic is appropriate, the conclusions are in line with the editorial, and whether your boss will be happy with the result.
Free from any worry, you begin a synergistic relationship with your autocomplete AI companion. This is the future, you think.
The days pass by and a growing realization takes over TYN execs: Why do we need 10x more articles when demand hasn’t grown anywhere near as much as our capacity to create new content? If anything, such a flood of published pieces will decrease their value.
They recognize the potential of the new AI tool, but despite such a productivity boost, they aren’t saving any money. The decision is clear.
The next day, an email hits your inbox: “You’ve been laid off,” it reads.
How? GPT-5 gave me superpowers. I was invincible! You’re startled.
“We’ve made a decision: From now on, GPT-5 will write all the articles and one of your colleagues will edit, review, and polish them. Thanks to this measure, TYN’s costs have decreased tenfold.”
Then it struck you: GPT-5 replaced no one, true, it’s simply an enhancement. Yet, together they—AI and human—replaced me.
Me and my other eight colleagues…
“But don’t be afraid. You can always find another job. Remember, mass unemployment isn’t real!”
This sarcastic fictional story, which paints a seemingly distant future, may not sound so crazy to some people.
AI may not be able to replace a human one-to-one, but that’s an unrealistic view of the problem. The example I describe in the story is much more realistic: People work in groups so a mix of people and AI tools could certainly replace a larger group of just people.
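The logic of the TYN story can be sketched in a few lines of Python. The numbers are the story's (10 writers, a 10x productivity boost, flat demand); the `writers_needed` function is my own illustrative shorthand, not anything from Smith and roon's model:

```python
def writers_needed(demand_articles: int, articles_per_writer: int) -> int:
    """Smallest headcount whose combined output meets demand."""
    # Ceiling division: you can't hire a fraction of a writer.
    return -(-demand_articles // articles_per_writer)

# Before GPT-5: 10 writers at 2 articles/month each meet a demand of 20.
before = writers_needed(demand_articles=20, articles_per_writer=2)

# After GPT-5: each human can now shepherd 10x the output,
# but demand for articles hasn't moved.
after = writers_needed(demand_articles=20, articles_per_writer=20)

print(before, after)  # prints "10 1"
```

The point of the sketch is that nobody in it is replaced one-to-one: each remaining human is strictly enhanced, yet the headcount needed to meet unchanged demand collapses from 10 to 1.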
This applies not just to writers, but also editors, artists, designers, coders, and so many other white-collar workers.
Smith and roon write: “Daron Acemoglu and Pascual Restrepo … find that … new production technologies like AI or robots can have several different effects. They can make workers more productive at their existing tasks. They can shift human labor toward different tasks. And they can create new tasks for people to do. Whether workers get harmed or helped depends on which of these effects dominates.”
Generative AI, like previous innovation waves, will create a similar set of effects. The above story shows how a change in perspective reveals quite a different phenomenon: It’s not going to be as easy and good for everyone as Smith and roon claim.
As Smith and roon assert, mass unemployment won’t happen (in the short term), and most people will still have job options because automation will directly take over tasks, not jobs. All that is true.
However, none of those statements is sufficiently soothing.
First, not everyone will lose their jobs, but some people will be displaced (not just at the task level but at the job level) because of those task-taking AI enhancers: my enhancement may be your replacement.
Second, even if innovation creates new tasks—and thus new jobs—displaced people will have to pay a high cost to relocate (e.g., accepting a lower income, making a temporary sacrifice to go back to school, and enduring psychological damage), which entails a drastic reduction in life quality.
And finally, who is going to harvest the fruits of the productivity increases driven by generative AI? Certainly not workers—displaced or not.
So, however optimistic they are about generative AI-driven automation, it’s quite misleading to say that AI will act as an enhancer when it can’t possibly enhance each and every one of us.
I think the framing of “if demand growth for X is outpaced by productivity growth in producing X, unemployment will increase” makes a lot of sense.
I would be very curious to see what happens with employment/contracting at stock photo companies now that the marginal cost is going to zero... the global demand for stock photos is not infinite!
As far as I can see, Alberto and Noah agree with each other. Both agree that there isn’t going to be any wholesale replacement—like what happened when agriculture was mechanized—but also that there will be local changes that will seriously inconvenience some people.
What interests me about this technology is that it wasn't invented with any specific application in mind. That is unusual, at least in my experience. ChatGPT was invented because it seemed like a cool tool, and now we are busy trying to figure out where it fits in society. I am sure we will succeed in doing that, but we are not there yet—so far as I know, there is no corner of the economy where generative AI is used routinely.
That defines for me what the news of interest is. I really want to know where this technology finds purchase. I suspect that pornography is a good candidate but there will be others. I would be grateful for any news of these developments.