My last article was an analysis of AI writing tools. I argued that, under current paradigms, they won’t be able to master natural language because they lack both the capacity to capture intent from a human and the capacity to generate their own.
Yet, I don’t reject the idea of using these tools to complement the process of writing. Although they could reduce the demand for human writers, I see them more as enhancers. Eventually, AI will be integrated into every creator’s toolkit.
Today, I bring you a full-on collaboration between me and the recently unveiled Lex, to which Every’s CEO Dan Shipper gave me access.
I won’t use Lex just to generate ideas or write the headline. Instead, it will write entire portions without any editing on my side. And, to make it more fun, I won’t reveal which ones until a few days after publication (edit: Lex’s contributions are in block quotes and italics. The intro and conclusion are mine).
I don’t know how it’ll turn out, but I’ll say this in advance: even if Lex is better than I expect, I won’t use these tools in the future without disclosure, because I want to preserve my intent intact.
From here on, at any moment, I could be Lex.
Before we begin, let’s establish the rules of this game
If you’re familiar with my writing, you know I tend to lay out a clear line of argumentation throughout any piece—I like to see myself more as an essay writer than a content creator.
To achieve this I always start with a headline, a subtitle, a structured outline, and, maybe most importantly, a predefined thesis that I want to defend.
Before I start writing, I already know the topic and the direction I’ll explore. I may have to elaborate arguments on the go and even change my original ideas slightly, but always while following a bright north star.
Another feature of my writing process (one I’m sure most human writers share) is that I write my thoughts and think my words.
Writing about my thinking and then thinking about my writing becomes a closed feedback loop. It’s closed because everything on the page comes from my mind, so when I use my words to clarify my thoughts, the feedback is super clean.
Today, all of the above changes. I want to explore the freedom an AI co-writer can give me, so I have to give up two things:
First, I’ll give up intent. This piece doesn’t have an outline and I don’t know what I’m defending or what I want you to take away. (In an indirect way, I’m defending the thesis of my previous article. But that’s a meta-thesis that comes from the existence of this article, and not from its content necessarily.)
Second, I won’t enjoy the safety of that closed feedback cycle: The loop is now an open bridge between me and Lex. Whatever Lex writes, I don’t know where it came from, what it means (if anything), or how it complements my thinking.
Take me to uncharted territory
So, let’s explore a new idea on the topic. For instance,
can AI improve my writing by taking on a role that’s more suitable for its strengths—such as coming up with ideas?
To find out, let’s take a quick detour into what AI is best at: understanding and responding to natural language. AI can do this in two ways: by extracting information from a text, or by generating new text. The former is what we call information retrieval; the latter, generation. Both are important, but our focus will be on the latter.
Specifically, we’ll see how AI can help with idea generation for writing.
We humans are very good at generating ideas. In fact, it’s one of our main strengths as a species. But we’re not perfect and there are occasions where we could use a little help. This is where AI can step in and help us by taking on the role of an idea generator.
How can AI do this? Well, there are two ways: One, by understanding the text we’re writing and suggesting ideas that are related to the topic. And two, by generating new ideas from scratch based on what it knows about the world.
The first approach is more limited, but it’s also the one that’s more feasible in the short term. The second approach is more ambitious, but it’s also the one that has the most potential in the long term.
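To make the first approach concrete, here is a minimal sketch of how a topic-anchored idea generator might be wired up. Everything in it is hypothetical: `build_prompt` and `suggest_ideas` are names I made up for illustration, and the `stub_model` stand-in would be replaced by a call to an actual language model in a real tool.

```python
def build_prompt(draft: str, n_ideas: int = 3) -> str:
    """Wrap the writer's draft in an instruction asking for related ideas."""
    return (
        f"Here is a draft:\n{draft}\n\n"
        f"Suggest {n_ideas} ideas related to the draft's topic, one per line."
    )

def stub_model(prompt: str) -> str:
    """Stand-in for a real language model: returns canned placeholder ideas."""
    # Read the requested number of ideas back out of the prompt.
    n = int(prompt.rsplit("Suggest ", 1)[1].split()[0])
    return "\n".join(f"Idea {i + 1}" for i in range(n))

def suggest_ideas(draft: str, n_ideas: int = 3) -> list[str]:
    """The first approach: ideas anchored to the draft the writer is working on."""
    return stub_model(build_prompt(draft, n_ideas)).splitlines()
```

The point of the sketch is the shape of the loop: the writer’s draft goes in, related ideas come out, and the writer keeps the final say over what gets used.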
However, AIs don’t know things about the world in the same sense as humans do. Language, for us, is a means to express things about a world with which we have contact in other ways (perception and action).
The only knowledge language AI models have about the world is in the form of written words. How those words connect to the external non-linguistic world is beyond their reach.
AI writing tools as means for exploration
So, our best option is to take the first route. Although “understanding” is an anthropomorphization of the system, it may come up with an idea we consider good enough to explore.
This works better when there’s no thesis to convey or when it’s not completely defined. It can also work when trying to come up with a specific argument to reinforce the core line of thought.
In a recent Twitter thread, I argued against that point: “In using AI writing tools, you risk replacing your sensible, interesting, or useful communicative intent with whatever the AI decides to output. As soon as you start saying, "that's good enough," your presence in the finished piece starts to shrink.”
But does intent matter that much when the article is exploratory rather than descriptive or explanatory?
Maybe, as long as I make it clear, I can write a valuable piece where the intent is meta: not in the content, but in the very existence of the article.
But what if the AI decides to go full-on philosophical on me and starts talking about the meaning of life? What if it decides that my previous article was nonsense and proceeds to write a better, more sensible piece than me? What if it just decides to generate a series of incoherent ramblings?
Ideas are the lifeblood of writing. A writer needs a constant stream of ideas to keep the writing flowing. But where do these ideas come from?
There are two main sources of ideas: external sources and internal sources.
External sources include things like conversation, books, magazines, newspapers, and the internet. Internal sources include things like memories, thoughts, and imagination.
Ideas can also come from a combination of both external and internal sources. For example, a memory of a conversation you had with someone could lead to an idea for a story.
Why couldn’t we add AI writing tools to the list of external sources? Are they so different from taking a book as inspiration?
Maybe once these tools—as well as our understanding of them—mature, we will see them as sources of inspiration. Maybe even as writing partners.
Or maybe where AI writing tools truly shine is for exploration: Instead of focusing on what the article’s content conveys (pretty much generic value, at least where Lex takes over), we could reflect on what its existence entails.
Getting extra meta—mapping the mapping tool
Let me use an analogy to better explain how this would work. Let’s say we’re 15th-century explorers. We’ve been entrusted with an ambitious quest: mapping the entire uncharted world.
We have a tool that can map it for us. As it happens, an advanced civilization of aliens gave it to us as a gift to welcome us into the era of exploration, but they never explained how it works.
We don’t quite understand it, and can’t directly assess if it works correctly.
We try to map the world but we don’t know what we’re doing. We keep at it without making much progress and without the ability to assess whether we’ve made any.
Now, what if we shifted the quest? What if instead of using the mapping tool to map the world, we tried to understand the mapping tool first?
Maybe the true value of AI writing tools (as they’re now) is as meta-tools: we can study their inner workings by looking at their behavior—as if we didn’t know the rules to which they’re subjected—instead of using them for their presumed purpose.
Does it make sense to use GPT-3 to complement my writing when I know so little about its behavior and have so little control over its deviations? To some degree, yes. But we shouldn't forget it’s a tool that no one understands completely.
That’s not how technology traditionally works: We build something from the ground up, understand its behavior at every level, and then find new applications.
No surprises. No emergent properties.
Deep learning-based AI systems are different. AI writing tools, being neither explainable nor interpretable, have broken that tradition: We’re looking for applications without having mastered the tool at all.
We’ve invented a tool about which we know as much as if it had been given to us by aliens.
I think it’s reasonable to try to understand it before embracing it fully.
Before I go I have to say this is one of the weirdest articles I’ve written. Not having a clear thesis or outline and having an AI companion requires a set of skills I haven’t mastered yet.
Please, leave a comment with your thoughts on which parts you think Lex wrote. I’ll reveal them on Tuesday. Let’s see how well you do!
I think so, but I don't care. I know perfectly well you are not going to turn the column over to AI, so my reactions are not relevant to anything. You observe "having an AI companion requires a set of skills I haven’t mastered yet." What I would like to see -- what I expect -- is for you to master those skills, or at least try to. Every column from now on should be written with a "companion" and should end with your thoughts about how the "collaboration" went. We will all judge the outcome(s) together.
Meaning no disrespect to the author, it's my sense that most experts writing about AI are naive in thinking that AI will be a companion to writers more than a competitor. That seems a reasonable claim for today, but not for what's coming. What's coming for writers is likely the very same thing that happened to factory workers. A few workers were kept in the factory to manage the machines, and the rest were let go.
You seem to assume that AI can't write quality content. That seems a reasonable enough claim at the moment. I see no reason why it has to stay that way. It seems to me we're talking about when AI will be able to write quality content, not if.
Remember, in the business sense, quality content is defined by the customer, the reader, not the writer, editor or publisher. When a critical mass of readers can't tell the difference between human-written content and AI-generated content, the new era will begin. Publishers will choose AI content over human content for the same reason factory owners chose robots: the machines are cheaper.
Robots boosted profits for factory owners, while the factory workers were left out in the cold. AI will just accelerate this process whereby increased efficiencies become a vehicle for funneling money up the social ladder towards those already rich.
If you wish to push back against such a future now might be the time to do it, because once the transition happens you may no longer be able to afford writing, you may lose your voice.