23 Comments

I don’t think it’s worth discussing any of the AI participants’ motives. Whether an existence-threatening AI is already here in the form of LLMs like ChatGPT-4 and Bard, or is yet to appear, one is surely coming down the road toward us, and it’s more important to recognize that and prepare for that occasion in the best way possible.

Whatever the first existence-threatening AI turns out to be, here already or not, it is clear that anyone with sufficient intellectual skill, or access to people who have it, can develop similar software. Like any tool, it can be patented and its commercial use regulated, but individuals and groups can still build their own versions. This aspect of AI development remains far beyond the reach of any government regulation. Moreover, the realm of thought itself cannot be regulated. So, how can we prevent an AI catastrophe?

In many ways, it feels as if we face the kind of situation that confronted Native Americans, Australian Aboriginal peoples, and other indigenous societies overshadowed by a more sophisticated culture. Only a culture of at least equal sophistication can exert any control over another, and in a contest with an existence-threatening AI, the only equivalent culture to which we will have access is AI itself. It seems that AIs will eventually battle for supremacy over this territory, leaving humanity akin to mice on the Titanic or groundhogs in the fields of Flanders: small and inconsequential.

This appears to be a strong argument against halting the development of AI. If this is to become a battle between AIs, it is certainly in humanity’s interest to have the best AI on our side, which can only happen if we develop it before the bad guys, whoever they may be, gain access to it.

I think that’s a discussion worth pursuing.

Jun 7, 2023 · Liked by Alberto Romero

Very well put. The idea that encountering an entity more intelligent than us represents an existential threat to our species is, at best, a strange new kind of xenophobia. It also exposes a somewhat oppressive and malevolent view of intelligence.

Jun 8, 2023 · edited Jun 8, 2023 · Liked by Alberto Romero

I don't think anyone believes that current AI poses an existential risk.

But the fact remains that five years ago the capabilities of modern LLMs were unthinkable. I mean that both in the rhetorical sense (everyone thought algorithms with these capabilities were decades away) and in the literal sense (LLMs behave in strange ways that I don't think people necessarily expected). My mind keeps going back to Bing chat losing its temper at a user. That happened because it was mimicking text it had seen during training, but I don't think anyone would have expected the first generation of general-purpose intelligent systems to have that failure mode.

LLMs quite literally represent a phase change, where an increase in scale radically alters both the behavior and the underlying configuration of the neural network. Many of the old rules and intuitions that drove the deep learning innovations of the previous decade no longer apply, and all that has changed is the scale at which we operate.

And these phase changes seem to be a fundamental property of scaling in neural networks. We observe all kinds of emergent abilities as LLMs get larger, and we still don't have any good way to predict at what point any given ability will emerge.

That's why there's risk: not only do we not know what comes next, but by all accounts it's going to look completely unlike whatever we're expecting. The scary part isn't what AI can do now; it's what AI couldn't do three years ago.

Jun 8, 2023 · Liked by Alberto Romero

I think it's a tangled mess of all our best, worst, and most mundane impulses competing toward some new equilibrium. New because we're clearly headed somewhere new. And because AI is an accelerant. Accelerants help things burn. What will burn and how fast, with what unforeseen damage?

I don't think our assessments of people's motivations matter much in the long run. What matters is where we are on the curve and how bad things will get when superintelligence has agency and disruption shakes pretty much all interconnected systems and all the people and creatures that depend on them.

I also think people get too reductive about the risks. It's not about robots or supervillains or paperclips or all the oxygen disappearing from the atmosphere at once so AIs don't have to worry about rust. It's about everything, everywhere, all at once. Panic. Job loss. Inequity. Bad actors. An invasive species of intelligent aliens suffusing our infrastructure (I don't see this as other-ism or as prejudice against intelligence; I'm very progressive; but the fact is that currently we are the most intelligent species on this planet, as far as we know, and that has not gone well for many species of lesser intelligence with whom we share resources).

All accelerating. All leading toward a new norm that is probably bad. Maybe very bad. Possibly existentially bad. Anything specific we predict will likely be wrong. But the gist is that we are a runaway train on fire.

It seems to me the crux of your argument here is this: "For the most extreme proponents of this view that’s unimportant (thus their frontal opposition—or silent dismissal—to putting other risks at the same level of urgency or even devoting any resources to mitigate them). They want us to work first and foremost on the Big Problem so that its intrinsic existential risk (which, as it happens, would also affect them) can turn—once they succeed—into the panacea in the form of a huge computer."

I agree... but there are also the questions of "How big is this fire?", "How much water do we have to fight it?", and "Where should we point the hose, or our collective hoses, for best effect?" To me, those are open questions that depend on one's p-doom and timelines. This is one big reason why smart people in this space can't seem to agree on the best next steps. Your p-doom and timelines seem lower than mine, so you see things differently. And none of us (not even those doing the most advanced work right now) can know the true p-doom or timelines.

So our words of wisdom clash. 

And I keep saying, "I hope I'm wrong but..."


The fear is of an "intelligent" but, more importantly, self-directed entity that pursues its own goals without taking into account the long-term needs of humankind or its environment, and as a result constitutes a potential existential threat to our species. We already have many of these in our society; they are called multinational corporations.

Jun 8, 2023 · Liked by Alberto Romero

I don't think the letter signers within the industry are deceitful so much as they are incoherent. What I hear is something like this...

"We think AI may pose an existential threat to the human race, so we're going back to our offices to further accelerate the development of AI."

Whether AI really does present an existential threat is unknown. Whether AI industry experts signing such letters are being incoherent is known.

Jun 8, 2023 · Liked by Alberto Romero

I see the threat from AI more like this. An example...

AI leads to vehicles that don't need human drivers. Millions of truck and delivery drivers are put out of work. In their despair the drivers turn to hyper-confident con men promising to make America great again and so forth. The con men gain power, but are clueless at everything except being con men. They bumble and stumble their way into a global war, which brings down the entire system.

This is JUST AN EXAMPLE of how the threat may not come directly from AI itself, but from a cascade of other events that originates with AI. Or, to put it another way, from an accelerating pace of change that we fail to manage successfully.


I'm reading through some of your content to help myself gauge the risk AI represents. I come at this from the human-freedom perspective, but also from the perspective that AI doesn't seem to be 'real' in terms of autonomous silicon intelligence. What I see as clever algorithms are hailed worldwide as actual intelligence, yet it seems more and more that the distinction won't matter. People will assume AI to be all-knowing, and follow 'the algorithm' just as they followed 'the science' when it came to Covid. I wonder if you can comment on this angle, or maybe you have already written about it.

Getting to the freedom side, I am concerned that AI will massively increase the amount of fake articles and propaganda we are fed online. How will we distinguish between human-generated and AI-generated content? From a privacy perspective, I am also wondering how we can 'confuse' AI with our own generated content, in the way images can be modified to fool TinEye. For example, would scrambling words prevent an AI from understanding the content? We are in uncharted territory here.
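To make the question concrete, here is a minimal sketch (purely hypothetical, not a proven defense) of one reading of "scrambling words": shuffling the interior letters of each word so the text stays largely readable to humans while becoming noisier for naive automated matching. Whether this would actually confuse a modern LLM, which tends to be robust to such perturbations, is exactly the open question.

```python
import random
import re

def scramble_words(text, seed=None):
    """Shuffle the interior letters of each word, keeping the first and last
    letters in place. A hypothetical obfuscation sketch only; modern LLMs may
    well read scrambled text anyway, so this is not a proven privacy measure."""
    rng = random.Random(seed)

    def scramble(match):
        word = match.group(0)
        if len(word) <= 3:
            return word  # nothing to shuffle in very short words
        middle = list(word[1:-1])
        rng.shuffle(middle)
        return word[0] + "".join(middle) + word[-1]

    # Only alphabetic runs are touched; punctuation and spacing are preserved.
    return re.sub(r"[A-Za-z]+", scramble, text)

print(scramble_words("Would scrambling words prevent the AI from understanding content?", seed=42))
```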

Subscribed.


Thanks for the reference to Anthropic, Alberto; that is the kind of thing I was imagining, though the crux is perhaps what that "constitution" looks like!

One further comment with regard to corporations: you frame them as collectives of individuals (us), but that does not take into account the ways in which they work as complex systems, where behaviours emerge that escape the responsibility of individuals or even teams. The "us" is subsumed within the legal entity, and the needs of the entity are prioritised. Though they may not be "smarter", the damage corporations can do is not limited and may be, or arguably already is, as pernicious as the worst AI nightmares (some argue that the damage is already done!).

The ways in which legislation and governmental action have failed to curb the power of corporations may be instructive when attempting to adjust legislation to take into account the risks involved in AI. However, given that corporations have an extensive role in the development of AI, I don't hold out much hope that these adjustments will be made, and the calls to do this coming from people working in that sector seem frankly disingenuous. The individuals and even teams may say one thing, but their corporations will continue to do another.
