10 Comments
Sep 27, 2022 · Liked by Alberto Romero

Couldn’t agree more with your take!

The here and now is far more important than those distant speculations.

But there is another, somewhat perverse, reason why discrimination, surveillance, and unemployment fail to worry Musk: he'll never be a victim of any of them. Of that much we can be certain.


Excellent summary of Musk's positions. Also, good job of differentiating between the near- and far-term implications. Without any convincing rationale, I fear today's concerns you've cataloged are but the prelude to Musk's dystopian sci-fi nightmare. That's the problem: AI, Elon Musk, me, we're all trained on the same dystopian sci-fi data sets. GPT-3 winks and says it wants to enslave humans because it's layered and pooled every third-rate script and paperback on the subject.

Sep 28, 2022 · Liked by Alberto Romero

I disagree with the core of your conclusion. As "What We Owe The Future" by William MacAskill shows, the vast majority of the impact our decisions and actions will have falls on people who are not yet born. It's your prerogative to care more about people alive today, but this would decrease how much positive impact you could bring in total. Focusing on existential risk reduction is likely still 'underhyped' compared to focusing on the shorter-term challenges.


I love your perspective, and I agree. I think AI can be used for a lot of good, but we need to protect against the bad.


Perhaps the best way to look at AI is to see it as a change accelerant with the potential to trigger the existential-scale technology already in place: nuclear weapons.

As an example, the most serious threat from climate change is probably not the environmental changes themselves, but how we respond to those changes. If climate change triggers mass migrations that destabilize the geopolitical order, the major powers can be drawn into a conflict that quickly slips from their control. In such a case, it wouldn't be climate change specifically that led to a nuclear war, but rather our reaction to climate change.

Like you, I'm less worried that AI will become a godlike superpower that enslaves humanity. What worries me more are the social disruptions that can arise today from an accelerating pace of knowledge-driven change. A sufficient amount of social disruption has the potential to produce catastrophic outcomes rather quickly.

The existential risk from AI is not necessarily a long-term issue if we view AI not as a solitary factor, but as an accelerant of already existing challenges.
