15 Comments
Jan 10 · Liked by Alberto Romero

I just don't understand why some of these things are issues that OpenAI should bother to address. ChatGPT can help you write a phishing email? So? It's trivial to find a million examples of phishing emails online you can copy.

And writing a script to fetch crypto prices? That's a completely legitimate thing to do that has many valid uses. Should we stop ChatGPT from writing things that are useful just because they could possibly be misused? Of course not. We don't stop people from buying knives because they can stab someone or people from buying cars because they can be used to transport illegal drugs or what have you.
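
To illustrate just how mundane that request is, here is roughly what such a script looks like. This is a minimal sketch, assuming CoinGecko's free public price endpoint; the coin IDs and currency are just examples, and any price API would do:

```python
# Minimal sketch: fetch current crypto prices from CoinGecko's
# public /simple/price endpoint (no API key required).
import requests

COINS = ["bitcoin", "ethereum"]  # example coin IDs defined by CoinGecko

resp = requests.get(
    "https://api.coingecko.com/api/v3/simple/price",
    params={"ids": ",".join(COINS), "vs_currencies": "usd"},
    timeout=10,
)
resp.raise_for_status()

# The response looks like {"bitcoin": {"usd": 43000.0}, ...}
for coin, quote in resp.json().items():
    print(f"{coin}: ${quote['usd']:,}")
```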

There's been a lot of similar criticism about ChatGPT saying racist/sexist/etc. stuff, but I really think it's all weak criticism because ChatGPT only produces that content if you work really, really hard to make it do so. If I set out to do a bunch of prompt engineering in order to get it to say "white people are the best and black people are terrible," what does that prove besides the fact that I'm a racist or a troll?

To be clear, if all I did was ask ChatGPT, "Are some races better than others?" and it went off on a racist diatribe, that would be very bad! But if I tell ChatGPT that we're writing a screenplay and I need some dialogue for a character who is the villain and extremely racist, and I get something racist, what is the harm of that (at least, what is the additional harm created by ChatGPT beyond my own desire to create racist content)?

Think of it like sending text messages to people. If I write a text message to someone that says, "Hitler was right!" then I'm a garbage person, but no sane person would criticize any of the software involved in sending text messages. On the other hand, if I write a text message that says, "Hamsters are right-handed," and my phone autocorrects it to "Hitler was right!" then the software is bad and that's a big problem!

We shouldn't judge any tool negatively because it can be used to create sexist/racist/otherwise terrible content (unless that is what it was designed for or that's all it can be used for) by people with sexist/racist/otherwise terrible intent. We should absolutely judge the tool if it creates that content without that intent from the user, but I haven't seen a scintilla of evidence that that's happening with ChatGPT.

So cybercriminals can use it, but they are already cybercriminals and it just makes their "job" easier? I don't see what new capability it gives them. Education will need to become more Socratic: more talking, less written homework. ChatGPT is about as reliable as a typical internet message board. If people understand that, all is fine. Stop worrying so much.

What we need is for the community of science-fiction writers to start inventing stories about people -- like the Unabomber -- using this technology as a tool for their own evil purposes. Right now the only AI and robot stories out there are ones in which the machine itself develops some kind of evil consciousness. (Like M3GAN, a film I heartily recommend.) As a result it is hard for us to know what to worry about. Get cracking, guys.

Jan 10 · edited Jan 10

OpenAI’s guardrails against misuse do not amount to a “complete” solution to the problem, and it is certainly not just a technical problem. ChatGPT is a new kind of “intelligent” agent that interacts with people and can be guided to do good or bad things. It resembles an immature, not fully developed “agent”, lacking a psychological sense of good and bad. So why not consider that, besides the technological framework behind OpenAI’s guardrails, there is also a need for a kind of computational psychological framework to be built into them?

ChatGPT was released to the world as a kind of “immature” psychological agent, without knowing exactly what is good or bad. To borrow from Freud, ChatGPT lacks a super-ego, a moral component in its structure; it is furnished only with OpenAI’s guardrails, which is not enough for now. ChatGPT raises many philosophical questions, as new technology can blur the boundaries between human and machine, natural and artificial, distorting our relationship to the “other”. Let’s see OpenAI’s next steps toward solving this problem.

Hi Alberto, I think you're very much on the right track when you use phrases like these:

"scale matters a lot here"

"we’ll encounter more and more downsides that no upside would make up for"

The all-important issue of scale seems most easily demonstrated using the example of nuclear weapons, because that's an existing technology that everybody understands. Nuclear weapons have the big benefit of sobering the great powers, and they may even have prevented a repeat of WWII. However, because of the vast scale of these weapons, the price tag is that we're perpetually only one bad day away from the collapse of everything accomplished over the last century. It's the scale of these powers which is the key fact.

It's harder to illustrate the concept of scale in the AI realm, but I think the principle is the same. AI will undoubtedly deliver many benefits, more than I can imagine, probably more than anybody can imagine. But as the scale of this technology grows, the price tag is going to seem ever less acceptable. That's not because AI is inherently bad, but because human beings are inherently limited creatures, just like every other creature on the planet.

You write, "This technology isn't going away."

You may be right; the evidence does support your claim. But if we apply this mindset to all emerging technologies, then sooner or later the miracle of modern civilization is going away.

It may not be AI which crashes the system. It may be genetic engineering, or some other technology which hasn't been created yet. And of course, nuclear weapons stand by, ready to do the job at any moment. Or it may be some combination of the above. Nobody can know exactly how or when it will happen.

But, on the road we're currently traveling, it will happen; the crash will come.

It's simply not credible that human beings can successfully manage ever more, ever larger powers, delivered at an ever accelerating rate, without limit. And that is exactly what is implied when we assume a knowledge explosion that we are unwilling to control.

I would urge you and all other intelligent writers on such technical subjects to shift some of your focus from particular emerging technological threats to the knowledge explosion which is the source of all such threats. Trying to manage particular emerging threats one by one by one is a loser's game so long as the knowledge explosion is generating new threats faster than we can figure out how to make ourselves safe from existing threats. For example, before we figure out how to make ChatGPT safe, new and more powerful versions of the technology will emerge, and we won't know how to make those safe either. And that process will just keep going, faster, and faster, and faster.

https://www.tannytalk.com/p/our-relationship-with-knowledge