31 Comments
Apr 1, 2023 · Liked by Alberto Romero

There are risks, but there are really big risks in not going as fast as we can. The country with the best AI will have a huge economic and military advantage. I don't want to be subject to military attacks from Russia or China that we can't counter. I don't want American goods to be too expensive to compete with those from China. We can be sure Russia, China, and others are going as fast as they can, and nobody will hold them back.

Suppose we had not built the bomb because we knew the risks, and Germany did.

Apr 1, 2023 · Liked by Alberto Romero

Risk denialism is in our minds for good reason. Stepping on a stone at least once gives evidence that the risk is real, as opposed to some invented, imaginary theory. Once we've seen bad things happen, we're convinced the risk is real. Of course, this assumes there are no good models to make predictions with, but we didn't evolve surrounded by good models.

Apr 1, 2023 · Liked by Alberto Romero

Alberto writes, "A risk isn’t yet a harm because it hasn’t happened."

For a single risk, perhaps this is true. But when we start piling up risks one on top of another, we introduce uncertainty into the social environment, and uncertainty generates fear. Fear and uncertainty can be a source of harm even if they are based on nothing.

Civilization is based on faith in the future. It's that faith in a better tomorrow that keeps people getting up every day to go to jobs they don't enjoy. When such faith begins to crumble, we see phenomena like many of today's young people deciding not to have children because they've lost faith that we can manage climate change.

The economic system is built on faith that if one invests one will see a positive return. If people lose faith in the future, they stop making such investments, and the wheels of the economy start grinding to a halt.

The entire system is built on faith, and if we introduce too many unknowns too fast we put that foundation at risk.

Apr 1, 2023 · Liked by Alberto Romero

I love your perspective, but I really gotta wonder who it is you're addressing with this sentence:

"Maybe they’re right that it's time 'to pause and reflect.'”

The juiced-up tech bros who are trying their damnedest to hop on the next multi-billion dollar express train? The sketchy tweakers who need a new fix, having lost their dose of crypto? The sensible leaders of the corporations that are FUCKING FIRING THEIR ETHICS TEAMS who tell us in the most patronizing tone possible that they understand the concerns of the doomsayers and fearmongers and are doing their level best to make sure that AI research proceeds in an ethical and safe way AFTER HAVING FUCKING FIRED THEIR ETHICS AND SAFETY TEAMS? The legislators/justices who have literally no clue whatsoever about what it is that should be paused and reflected over?

Is there any conceivable way that any of these main players and essential drivers in the pursuit of AGI might "pause," much less "slow down just a little bit above the speed limit of sane research and development"? Or *reflect*? Reflect in what? The side-view mirror, where objects we've already passed are now closer than they appear?

They won't. There will be no pausing, no reflection, no "hold up for one darn second, humanity!" There is profit in them thar transformers, and, by God, who in their right mind wouldn't strike out to find their fortune on the new frontier!


The thing about AI that has certain people’s backs up is that they are only now starting to realise how stupid they are, while they’re also starting to realise that others have absolutely no idea how stupid they are. Except this time the stupidity has a deeper prospective gravitas.

Mar 31, 2023 · Liked by Alberto Romero

Excellent as usual, Alberto. As you predicted, there is much in your article that I can embrace and agree with. And thanks much for the mention.

Yes, we don't learn by reasoning anywhere near as much as we like to believe. A reference to authority is more influential, and our most persuasive teacher is pain. For example, I've come to believe that nothing meaningful is going to happen on nuclear weapons until after the next detonation. We just can't get it in the abstract; we have to see it to believe it. What happens after that is anybody's guess.

You told us that Oppenheimer said, "scientists must expand man’s understanding and control of nature". This is the kind of simplistic, outdated 19th-century thinking that I've been writing to reject. What scientists need help with is understanding _human_ nature, which does not allow for the acquisition of ever more knowledge at an accelerating rate without limit. In the 21st century we have to become more intelligent than that.

https://www.tannytalk.com/p/our-relationship-with-knowledge

My hope for the AI community is that we might invest some time into zooming out to reflect on the larger environment that AI research inhabits. AI is perhaps not the problem so much as a symptom of the problem: an outdated relationship with knowledge. Underneath all the technical issues lies a serious philosophical challenge, the need to update our relationship with knowledge to adapt to the new environment that the spectacular success of science has created. We would be wise to recall that species which can't adapt to changing conditions typically don't last long.

The real danger from AI may be that it seems likely to serve as rocket fuel poured on an already overheated knowledge explosion. The most dangerous threats may arise not so much from AI itself as from all the different research areas that AI is likely to further accelerate.

Knowledge is good. Knowledge without limit is not. It's not that complicated.

Mar 31, 2023 · Liked by Alberto Romero

An insightful article, and much in the way of sources to delve into. I’m not quite sure where I sit. It’s an incredible time to be a student of digital humanities and computer science, however.

I’m only at the beginning of that journey, but I’m hoping AI allows us to produce, create, and live more! I have begun a small project getting AI to emulate the works of renowned poets discussing these very topics. It would be great if anyone has some feedback. The first is on “humanity becoming dependent on AI to perform tasks” - https://open.substack.com/pub/musingsofanai/p/the-tethered-souls

Mar 31, 2023 · Liked by Alberto Romero

Whilst I broadly agree with Alberto's views on the FLI letter, I am not that pessimistic (yet). I think there is a way out, although even I have serious doubts about whether we, humans, will take it up. More in the article I have just published on Medium: https://sustensis.medium.com/prevail-or-fail-will-agi-emerge-by-2030-2fc048641b87

Mar 31, 2023 · Liked by Alberto Romero

I think the intention of the FLI open letter may be good, but the proposal may not be the most appropriate. LLM technologies advance at a much faster pace than laws and regulations. Throughout history, thanks to the inertia of governments and regulatory bodies, responses to technological and industrial advances have lagged; the response to new technologies such as LLMs will likewise take time, but it will come sooner or later.

The pace and momentum of technological progress in LLMs is unstoppable. This is a fact. The question is: "What adaptive response, to continue our progress and evolution, will be taken in the face of this fact?" Stopping everything and "putting your head in a hole" does not seem the most appropriate one. Nevertheless, the letter has at least one virtue: it opens the debate.

Mar 31, 2023 · Liked by Alberto Romero

Unless everyone agrees to stop, the risk of a bad outcome would seem to increase. For example, the atomic bomb: Hitler was seeking to build one and would not have been dissuaded by any arguments of potential future harms. If Hitler were the only one with the A-bomb, would we have been better off?

As Alberto points out, we're not all that good at making predictions. This especially includes predictions about the risks and benefits from any developing technology and its socio-cultural sequelae. It's not at all clear that we can make a rational decision about whether or not to deploy a particular technology of this magnitude and complexity.

We are arguably better off today than we were 100 or 1,000 or 10,000 years ago -- anyone volunteering to go back to those times? Humans are not so much homo sapiens as homo faber -- we're not wise, we make stuff. Like all other technologies before it, so-called AI (LLMs) will have some bad impacts -- which we will have to mitigate -- but ultimately it will improve human life.


One important difference between malicious AI and an atomic bomb is their potential for collateral damage.

People have a strong moral response to the indiscriminate infliction of damage: it feels profoundly unfair to bomb innocent bystanders who have no agency. Whereas we feel less compassion for people who kill themselves after an AI convinces them to commit suicide, even if in total the latter far outnumber the former.

A nuclear mushroom cloud is an easily understood and powerful image. Exposure to a slowly working, emotionally corrosive force like some social media barely registers.

AI falls into the second category. Our responses and awareness are maladapted for such a scenario.


I signed the letter, having misgivings about some aspects; these have all been raised online by others. I liked the accountability and watermark aspects, as well as the general call for far more coordinated and independent thought (with or without their overall goal). There is hardly a petition I fully agree with, and this one was far from the best. Still, there are times when showing up is better than getting it perfectly right. On balance, I felt it would be a useful added incentive for societies to look deeper into these matters if sufficient numbers signed. There has been a marked increase in discussion on the back of it. Some countries/states are bound to make better decisions than others, hopefully sooner due to the focus.

There have been clear perspective shifts on the back of Cambridge Analytica and harmful social media effects in general. People are more aware of downsides. I agree with your statement that humanity seems to learn on the back of disasters. Societies can then improve, for a few generations, until collective memory fades.

AI is already helping science in new ways. We will make use of it as we made use of radiation, in reasonable and unreasonable ways, and fingers crossed we will not mess it up so badly there is no way back. We did not stop atom bombs, but on balance we have not used them as much as I feared growing up during the Cold War (I may be naive about the future). I think other risks are far more likely to wipe us out than AI itself (in its current state). Its indirect impact on society, the ways in which it can be channeled to influence people when embedded in social media, is an insidious risk. Regulation and accountability can have an impact. Right now, societies can be freely used as guinea pigs.


Alberto, thanks for the link to Yudkowsky's take on the FLI letter, and AI generally. I was delighted to find an expert who makes me look like a calm reasonable person of nuance. :-)

https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

If/when your time and interest permit, I'd welcome an article that introduces Yudkowsky to those of us "not in the know". I'm mostly interested in how Yudkowsky and his perspective are regarded by the AI community at large. Is he considered a visionary, a crackpot, an extremist, a leader, etc.? Is he influential, ignored, respected, or disregarded? Or something else?

In his Time piece he writes...

"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."

And this...

"Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs. And so they all think they might as well keep going."

And this...

"Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then". Hearing this gave me a tiny flash of hope, because it’s a simpler, more sensible, and frankly saner reaction than I’ve been hearing over the last 20 years of trying to get anyone in the industry to take things seriously. Anyone talking that sanely deserves to hear how bad the situation actually is, and not be told that a six-month moratorium is going to fix it."

It's been an interesting education to think I'm Mr. Radical, and then find out somebody smarter than me already has the job. Maybe he'll let me wash his car, and do his laundry or something. :-)

Oh, look at this, his Wikipedia page reports that Yudkowsky did not attend high school or college. https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
