26 Comments

Alberto, great post! I coincidentally wrote on very similar lines as yours with my Substack post yesterday: https://trustedtech.substack.com/p/trusted-ai-005-our-average-future

I used Google Translate (and machine translation more broadly) as my example of a technology that was also hyped as world-changing and utopia-bringing and ended up...not.

May 11, 2023 · Liked by Alberto Romero

My understanding is that the 1956 conference took it upon itself to decide explicitly what the new field was going to be called. The two candidates were "Artificial Intelligence", which Marvin Minsky argued for, and "Automatic Programming". I have no idea who championed that alternative, but Minsky won the day and we have been dealing with the consequences ever since.

May 11, 2023 · edited May 11, 2023 · Liked by Alberto Romero

Experts strive to reach their audience with their worldviews and information, and in this endeavor, they must navigate a media landscape that rewards soundbites over substance. Moreover, they must also secure a continued presence in mainstream media, a task often contingent on the splash their appearances make rather than the depth of their insights.

How can the most knowledgeable and well-intentioned experts develop nuanced perspectives amid the attention-grabbing and oversimplification tactics employed by media, algorithms, and self-proclaimed, less scrupulous "experts"?

The responsibility rests not only with the media or algorithms, but also significantly with us, as consumers of this information. We must cultivate a demand for more nuanced and in-depth analysis, signaling to these platforms that there is an audience for such content.

The challenge is not simply about modifying the behavior of the experts, media or algorithms but about reshaping the entire ecosystem of information dissemination. This involves fostering a culture that values in-depth analysis over sensationalism and empowers experts to engage directly with the public--as you do here, Alberto.

It's a tall order, but one that could significantly improve our collective understanding of complex and world-bending issues like AI.

May 11, 2023 · Liked by Alberto Romero

A factor that I think weighs pretty heavily is that there is a sector of the population that worships -- and I do mean worships -- what they call and see as "intelligence". "Intelligent people" were the people they respected and deferred to and tried to model themselves after. (Or perhaps what I mean is that they saw the people who got all the goodies as being "intelligent".) "Intelligence" was the way they ranked humans, very much including themselves. If they had an IQ of 125, they felt superior to people with IQs of 100 and inferior to those who boasted an IQ of 150. People with high IQs were just more likely to be "right" about just about everything than people with lower ones. A world in which the average IQ was higher than theirs would be a nightmare: they would go through life feeling inferior to everyone. Reading about machines that are "artificially intelligent" triggers all these responses.


Emotions have always been the enemy.

May 10, 2023 · Liked by Alberto Romero

Excellent piece, Alberto!! Will re-read tomorrow. I just came back from a giant big-box home reno store with almost no staff except for two security guards to direct you to self-checkout and look at your purchased items and receipt. Two things we will surely live with under AI: surveillance capitalism, and rapid change driven by the need for ROI on the huge investments AI requires. Those investments will pay off with automation that deskills, detasks, and leads inevitably to job losses on a huge scale. But it won't be utopia or a horror show; that's just a way of framing the debate to drive clicks.

May 10, 2023 · Liked by Alberto Romero

Hi again Alberto...

A quick first impression, then I will reread.

I find this part to be, well, apologies, completely wrong...

"Laypeople lack the criteria, knowledge, and background to decide over matters like the present and future of AI. In taking these conversations to the public town square and debates about the questions they pose to social media we are irremediably undermining AI as a scientific endeavor."

Scientists are the least objective observers of how much science we should do, and how fast we should do it. AI industry experts are the least objective observers of where we should go with AI. And neither of these parties has any special expert knowledge of the crucial factor, the human condition.

Scientists and AI engineers are intelligent, well-educated people whose careers are focused on very narrow, highly specialized technical subjects. We should respect them for what they're good at, but not look to them as some kind of clergy who have answers to all the biggest questions, such as what direction this civilization should take.

On questions of that scale, scientists and AI engineers are just like the rest of us: intelligent, educated people who are entitled to have and express their opinions. As we listen to their opinions, please keep this in mind: the incomes of scientists and AI engineers depend on us doing more science and more AI.

Should a reader like evidence in support of the above claims, here you go:

The biggest, most immediate threat we face today is not AI but nuclear weapons, a subject overwhelmingly ignored by the intellectual elite class as a group, with only a relatively tiny number of exceptions.

If we as a society, all the way up to the highest levels, don't possess the ability to focus our attention on a known, proven threat which could erase the modern world in the next hour, then there's no credible argument for our being ready for yet another significant threat, whatever its exact nature may turn out to be.


Alberto writes, "...if we choose well, we can achieve the end of suffering instead of causing the end of the world?"

If our goal is to achieve the end of suffering (or at least radically reduce it), one answer is staring us in the face. The overwhelming majority of violence, at every level of society, all over the world, for thousands of years, has been committed by men. I would argue that it is this phenomenon which poses the most serious threat to AI.


I've said this before, so I'll be brief. The threat from AI may arise less from AI itself than from AI's role as another source of fuel poured on the knowledge explosion. So far at least, I've not seen any discussion of this angle in any AI commentary. It may exist; I just haven't found it yet.
