31 Comments
May 20, 2023 · Liked by Alberto Romero

Thanks for the great essay. I would respectfully suggest that what you're feeling is the anxiety of the liberal humanist, the center-left intellectual who fundamentally agrees with the emancipatory goals of the far left but regards their methods as counterproductive. As someone who wrote his doctoral dissertation on Rawls 25 years ago, and was sneered out of academia as a tool of the patriarchy and insufficiently radical, I share your concerns.

What I most admire about your piece is your refusal to succumb to contempt. This is the fatal flaw of the radical critique. Theorists of the far left construct a Manichean world where there are only two groups of people, and it's their job to sort them (workers vs. parasites, patriarchy vs. feminists, woke vs. benighted). It is a politics fundamentally fueled by contempt. That is why following the MLK Jr. or Camus playbook for liberalism is so difficult: it constructs a world of reasonable pluralism and requires you to view with respect those you profoundly disagree with.

You see the same familiar brush strokes across the blank canvas of every wave of technological innovation: VR and the metaverse, crypto and NFTs, and now AI and LLMs. The same familiar heroes and villains, the same I-speak-for-the-voiceless rhetoric, the same snide dismissals. Unfortunately, as you correctly point out, those obligatory opening moves against generative AI smack of intellectual dishonesty. Lum's tweets acknowledge that. I admire you for defending your nuanced position: your fundamental sympathy for the project as a whole, combined with your rejection of a politics that lacks the conceptual tools to truly grapple with what's going on. Keep up the great work.

I did a test a few days ago with Bard and GPT, using a well-respected translation of the Bible. I asked both LLMs questions about Bible quotes and both systems got them wrong. It's like they're getting worse. And that should be the most basic text to have stored in the system.

I don’t get it.

Following these debates on Twitter and Mastodon, I think there are really a couple of key voices to which this critique especially applies. And I share your dismay. I too agree with their important interventions, but the tone and focus of their discourse over the last few months have made me want to back away slowly. And that's a huge loss, because their approach has divided the community, arguably more over tone than substance. To be fair, I can understand how frustrating it must be to hear Google's Pichai talk about the need for AI ethicists after one has been fired by Google as an AI ethicist, or to hear "stochastic parrot" attributed to Sam Altman instead of oneself. Agreed that Mitchell is a good exception.

May 18, 2023 · Liked by Alberto Romero

Fantastic read! Being interested in AI myself, I agree with something you wrote in the intro to this series: "Friends and family know very little about AI and how it influences our daily lives."

I wonder how topics around AI ethics and its impact on society can reach family dinner conversations with the same ease and fluency with which I can share fun, everyday ChatGPT examples (a birthday poem, an email to customer support, a trip itinerary, etc.).

South Park had a fun episode about ChatGPT [1], bringing it to life with school and dating storylines. And I imagine many people saw the Pope at Burning Man [2]. But we would need many more everyday examples pointing to possible risks and challenges to come close to the incredible benefits that are, indeed, "obvious to anyone who has signed up for a ChatGPT account."

Eh, without claiming to be an AI ethicist, here's one example: an AI-powered advice column that tries to humanize a few examples of potential AI impact on our daily lives - from AI bias in recruiting to secret affairs with ChatGPT: https://dearai.substack.com/p/ai-powered-relationship-advice-for-the-ai-age

Thank you for this article and this series!

[1] https://southpark.cc.com/episodes/8byci4/south-park-deep-learning-season-26-ep-4

[2] https://www.nytimes.com/2023/04/08/technology/ai-photos-pope-francis.html

This expresses just how I’ve been feeling about their arguments. I want to support AI ethics but the constant criticism of generative AI... it’s just as you say. Thanks for writing this.

May 17, 2023 · Liked by Alberto Romero

This is spot on, very well argued, 100% on the money!!!

May 17, 2023 · Liked by Alberto Romero

As usual, I will argue that we should leap right over all these AI-specific debates and focus on a wider view.

Pretend for a moment that we decisively resolved every single concern about AI. It wouldn't matter. The knowledge explosion would keep rolling along, faster and faster, producing ever more powerful forces at an accelerating rate. AI is not the end of the 21st century; it's the beginning.

Forget about all the details of the present moment. Clear your mind. Sweep all that off the table. Focus on the big picture bottom line.

1) IF human beings are of limited ability (like every other creature on the planet)...

2) THEN a process whose goal is to develop ever greater powers at an ever-accelerating rate will inevitably exceed those limits sooner or later.

Don't focus on the particular products rolling off the end of the knowledge explosion assembly line.

Focus on the assembly line itself. If we don't learn how to take control of the assembly line, it will inevitably produce forces that we can't manage.

Would you buy a car that only had a gas pedal, but no brakes?

Currently the swing is in the other direction: so many specialists claim that AI is life-threatening and will come back to haunt us. A lone voice of reason is LeCun. Most alarmists must be aware their claims are off the wall given current progress. Paraphrasing a recent claim: "there is a chance AI will annihilate us, and the chance is close to zero." This vacuous claim made the news channels gobble up the first part. It makes me wonder whether the alarmist claims serve to distract from the criticism (which has also swung too far, as discussed in this article). If AI is seen as life-threatening, that quiets criticism. It makes AI seem real and fully in place (currently it captures a slice of intelligence: detecting patterns). A position of power is created, as those who claim to see massive danger ahead are the most likely port of call for people looking to do something about it.

It is a great play if viewed as chess, but it hollows out trust in science. The seesaw between the two camps is tiresome, and it is sad to see it occur in a field I love. I respect your level-headed contributions.

Hey Alberto, hope you are doing great!

I am Soumya from ByteBrief (bytebrief.co).

We love your newsletter and we have good news for you!

We're also running a beehiiv newsletter about AI with 19K+ readers, and we are inviting writers to share their best knowledge on any topic they love in our next issue, completely dedicated to you!

If it is tech/AI, our audience will be more than happy to read your work in our newsletter.

We could explore cross-promotion opportunities if you're interested.

Or if you have any other proposal, we can proceed with that too.

Contact us: hello@bytebrief.co

Thanks

Thank you very much for this newsletter and the previous ones; this information is worth its weight in gold for those of us who don't know much about AI. With your newsletters I am beginning to understand the advantages and disadvantages of AI, and the risks humanity could face if we don't handle it responsibly. Thank you.

The over-hyped state of the field is as distasteful as the over-criticism. The beauty of the contributions does get lost. I agree that this is a shame. It's nice that you point out who is more level-headed amidst all this. I look forward to reading more by M. Mitchell.

You would have earned an A for this article without needing to specify that you are a « white male ».

Could part of the issue be that those of us in the technical class are so focused on a singular set of current beliefs, with a specific definition of race at the center of every ethical harm, that we've rendered ourselves unable to deal with the broader conditions that may be causing the negative outcomes we're trying to solve for?

Data-driven AI ultimately has no interest in our biases and beliefs of the current moment. And if you ask it the right questions, it's perfectly happy to reflect back answers that reveal fundamental truths about the deeper limits and flaws of humanity, truths that may be too unpleasant or inconvenient to accept.

But until we accept them, we can never begin to come up with a genuine way to move past them. Instead we'll try (and fail) to hobble AI so it can no longer reveal them.
