The Algorithmic Bridge


ChatGPT and the Future (Present) We're Facing

2023 will be much more intense and overwhelming than 2022, so fasten your seatbelts

Alberto Romero
Jan 10
Credit: Midjourney

Until ChatGPT stops being the most important news in AI, I guess we're stuck talking about it… Just kidding; I'll make sure to interleave other topics, or else we may burn out.

There's still a lot to say about ChatGPT's immediate and long-term implications. I've written about what ChatGPT is and how to get the most out of it, about the challenge of identifying its outputs, and about the threat it poses to Google and traditional search engines, but I've yet to touch on how the risks and harms that some foresaw are already taking shape in the real world.

A month after its release, we can all agree that ChatGPT has reached the mainstream and has taken AI as a field with it. As an anecdote, a friend who knows nothing about AI came to me talking about ChatGPT before I'd told him about it. That was a first for me, and I'm not the only one.

That's why it's urgent to talk about the consequences of AI: ChatGPT has reached people much faster than any resources on how to use it well, or on how it definitely shouldn't be used. More people are using AI tools today than ever before (not only ChatGPT; Midjourney has 8M members in its Discord server), which implies that more people than ever will misuse them.

In contrast to my predictive/speculative essays, this one isn’t about things that could happen but about things that are happening. I'll zoom in on ChatGPT because it’s what the world is talking about, but most of what follows could apply, with adequate translation, to other types of generative AI.


ChatGPT harms are no longer hypothetical

Last Friday, January 6, the security research group Check Point Research (CPR) published a terrifying article entitled "OpwnAI: Cybercriminals Starting to Use ChatGPT." It isn't surprising, but I wasn't expecting it so soon.

CPR had previously studied how malicious hackers, scammers, and cybercriminals could exploit ChatGPT. They demonstrated how the chatbot can “create a full infection flow, from spear-phishing to running a reverse shell” and how it can generate scripts to run dynamically, adapting to the environment.

Despite OpenAI’s guardrails, which appeared as an orange warning notification when CPR forced ChatGPT to do something against the usage policy, the research group had no problem generating a simple phishing email. “Complicated attack processes can also be automated as well, using the LLMs APIs to generate other malicious artifacts,” they concluded.

Basic phishing email generated by ChatGPT. Credit: CPR

CPR researchers weren't satisfied with proof that ChatGPT could do this hypothetically (a common criticism skeptics face is that the potential risks they warn about never materialize into real-world harm). They wanted to find real instances of people misusing it in similar ways. And they found them.

CPR analyzed "several major underground hacker communities" and found at least three concrete examples of cybercriminals using ChatGPT in ways that not only violate the terms of service but could cause harm in a direct and measurable way.

First, an info stealer. In a thread entitled “ChatGPT – Benefits of Malware,” a user shared experiments where he “recreated many malware strains.” As CPR noted, the OP’s other posts revealed that “this individual [aims] to show less technically capable cybercriminals how to utilize ChatGPT for malicious purposes.”

“Cybercriminal showing how he created infostealer using ChatGPT.” Credit: CPR

Second, an encryption tool. A user by the name “USDoD” published a Python script with “encryption and decryption functions.” CPR concluded that the “script can easily be modified to encrypt someone’s machine completely without any user interaction.” While USDoD has “limited technical skills,” he is “engaged in a variety of illicit activities.”

“Cybercriminal dubbed USDoD posts multi-layer encryption tool.” Credit: CPR

The last example is fraud activity. The title of the post is quite telling: “Abusing ChatGPT to create Dark Web Marketplaces scripts.” CPR writes: “The cybercriminals published a piece of code that uses third-party API to get up-to-date cryptocurrency … prices as part of the Dark Web market payment system.”

“Threat actor using ChatGPT to create DarkWeb Market scripts.” Credit: CPR
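To appreciate how mundane these building blocks can be, here's a sketch of the benign half of such a script: fetching live crypto prices from a public third-party API. CPR doesn't name the API the threat actor used, so CoinGecko's public endpoint stands in here purely as an assumption for illustration.

```python
# A sketch of the harmless half of the described script: querying live
# cryptocurrency prices from a third-party API. CoinGecko is an assumed
# stand-in (CPR doesn't name the actual API). Requires the `requests` package.
import requests

def get_price(coin: str = "bitcoin", currency: str = "usd") -> float:
    """Return the current spot price of `coin` in `currency`."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": coin, "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[coin][currency]

print(get_price("monero"))  # e.g. the XMR/USD spot price
```

On its own, a snippet like this is entirely harmless; it only becomes part of a fraud operation when wrapped into a dark web market's payment system. That's precisely what makes this kind of misuse hard to police at the model level.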

It's clear that ChatGPT being free to use and highly intuitive makes it attractive to cybercriminals, including those with low technical skills. As Sergey Shykevich, Threat Intelligence Group Manager at Check Point, explains:

“Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools.”

ChatGPT driving security issues online isn't a hypothesis inflated by fearmongers but a reality that's hard to deny. To those who argue that all of this was possible before ChatGPT, two things: First, ChatGPT bridges the technical gap. Second, scale matters a lot here: ChatGPT can automatically write a script in seconds.

OpenAI shouldn’t have set ChatGPT free—so soon

Cybersecurity, disinformation, plagiarism… Many people have repeatedly warned about the problems ChatGPT-like AIs can cause. Now malicious users are starting to abound.

Someone could still try to make the case in favor of ChatGPT. Maybe it's not that problematic; maybe the upsides compensate for the downsides. But maybe they don't, and a "maybe" should suffice for us to think twice. OpenAI lowered its guard when GPT-2 turned out to be "harmless" (they saw "no strong evidence of misuse so far"), and they never raised it again.

I agree with Scott Alexander that "perhaps it is a bad thing that the world's leading AI companies cannot control their AIs." Perhaps reinforcement learning from human feedback isn't good enough. Perhaps companies should find better ways to exert control over their models if they're going to unleash them into the wild. Perhaps GPT-2 wasn't so dangerous, but a couple of iterations later we've got something to worry about. And if not, we'll have it in a couple more.

I’m not saying OpenAI hasn’t tried—they have (they’ve even been criticized for being too conservative). What I’m arguing is that, if we perpetuate this mindset of “I’ve tried to make it right so I now have the green light to release my AI” into the short-term future, we’ll encounter more and more downsides that no upside would make up for.

One question has been bothering me for a few weeks: If OpenAI is so worried about doing things right, why didn’t they set up the watermarking scheme to identify ChatGPT’s outputs before releasing the model to the public? Scott Aaronson is still trying to make it work—a month after the model went completely viral.

The Weather Station (@TheWeatherStn), replying to @edward_the6, Jan 3, 2023:

"It's insane that we would allow AI writing without a clear indicator of whether something is written by AI or a human. Much as we require a magazine to say 'advertorial' when content is paid. It should be a simple requirement."

I don't think a watermark would've solved the fundamental problems this technology entails, but it would have helped by buying time: time for people to adapt, for scientists to find solutions to the most pressing issues, and for regulators to come up with relevant legislation.
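For context on what such a scheme involves, here's a toy sketch of one way a statistical text watermark can work. To be clear, Aaronson's actual design isn't public; this only illustrates the general idea (bias token choice with a secret key, then test for that bias), and every constant and function below is hypothetical.

```python
# Toy statistical watermark (illustrative only, not Aaronson's actual scheme).
# Idea: a secret key plus the previous token pseudorandomly marks tokens
# "green"; generation prefers green tokens, and a keyed detector counts them.
import hashlib
import random

VOCAB_SIZE = 50_000            # stand-in vocabulary of token ids
KEY = b"secret-watermark-key"  # hypothetical secret held by the model provider

def is_green(prev_token: int, token: int) -> bool:
    # A keyed hash deterministically marks ~half the vocabulary "green"
    # for each possible previous token.
    digest = hashlib.sha256(KEY + f"{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[int]) -> float:
    # Detection statistic: ~0.5 for ordinary text, noticeably higher if the
    # generator preferred green tokens. A z-test turns this into a verdict.
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)

# Simulate a watermarked generator that strongly prefers green tokens.
rng = random.Random(0)
def sample(prev: int) -> int:
    while True:
        t = rng.randrange(VOCAB_SIZE)
        if is_green(prev, t) or rng.random() > 0.8:
            return t

tokens = [0]
for _ in range(500):
    tokens.append(sample(tokens[-1]))
print(round(green_fraction(tokens), 2))  # ~0.8 here vs. ~0.5 for unmarked text
```

The catch, and one plausible reason such a scheme is hard to ship, is that detection requires the secret key, and paraphrasing or translating the output can wash the signal out.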

GPT detectors are the last (healthy) frontier

Due to OpenAI's inaction, we're left with timid attempts at building GPT detectors that could give people a means to avoid AI disinformation, scams, or phishing attacks. Some have tried to repurpose a 3-year-old GPT-2 detector for ChatGPT, but it doesn't work. Others, like Edward Tian, a CS and journalism senior at Princeton University, have developed systems from the ground up, specifically targeting ChatGPT.

Edward Tian (@edward_the6), Jan 3, 2023:

"I spent New Years building GPTZero — an app that can quickly and efficiently detect whether an essay is ChatGPT or human written"

As of now, 10,000+ people have tested GPTZero, me included (here's the demo; Tian is also building a product that 3K+ teachers have already signed up for). I confess that I've managed to fool it just once (and only because ChatGPT misspelled a word), but I haven't tried too hard either.

The detector is quite simple: it evaluates the "perplexity" and "burstiness" of a chunk of text. Perplexity measures how much a sentence "surprises" the detector (i.e. the degree to which the distribution of words deviates from what a language model would predict), and burstiness measures how much perplexity varies across sentences. Simply put, GPTZero leverages the fact that humans tend to write much more weirdly and unevenly than AIs, which becomes apparent as soon as you read a page of AI-generated text. It's so dull…
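To make perplexity and burstiness concrete, here's a minimal sketch of that kind of scoring. This is not GPTZero's actual code (which isn't public); it's an assumed setup using the Hugging Face transformers library with GPT-2 as the scoring model.

```python
# Minimal perplexity/burstiness sketch (the general idea, not GPTZero's
# implementation). Assumes: pip install torch transformers
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # exp(mean cross-entropy of next-token predictions): low values mean
    # the text looks "expected" to the scoring model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    # Standard deviation of per-sentence perplexity: human writing swings
    # between plain and surprising sentences; model output is flatter.
    scores = [perplexity(s) for s in sentences if s.strip()]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sentences = ["The sun rose over the hills.",
             "My toaster recites Baudelaire on alternate Tuesdays."]
print([round(perplexity(s)) for s in sentences], round(burstiness(sentences)))
```

Low average perplexity combined with low burstiness is the signature of machine-written text; human prose tends to score higher and to vary more from sentence to sentence.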

At a <2% false positive rate, GPTZero is the best detector out there. Tian is proud: “Humans deserve to know when the writing isn’t human,” he told the Daily Beast. I agree—even if ChatGPT doesn’t plagiarize, it’s morally wrong for people to claim they’re authors of something ChatGPT wrote.

But I know it isn't infallible. A few changes to the output (e.g. misspelling a word or interleaving your own words) may be enough to trick the system. Asking ChatGPT to avoid repeating words works just fine, as Yennie Jun shows here. And finally, GPTZero may soon become obsolete because new language models appear every few weeks: Anthropic has unofficially announced Claude, which, as evidenced by Riley Goodside's analyses, is better than ChatGPT.

And GPT-4 is around the corner.

This is a cat-and-mouse game, as some people like to call it—and the mouse is always one step ahead.

Banning ChatGPT: A bad solution

If detectors worked well, many people would get angry; most want to use ChatGPT without barriers. Students, for instance, couldn't cheat on written essays if an AI-savvy professor knew a detector existed (it has already happened). The fact that 3K+ teachers have signed up for Tian's upcoming product says it all.

But, because detectors aren't sufficiently reliable, those who don't want to face the uncertainty of guessing whether a written deliverable is or isn't ChatGPT's product have taken the most conservative route: banning ChatGPT.

The Guardian reported on Friday that “New York City schools have banned ChatGPT.” Jenna Lyle, a department spokesperson, cites “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of contents” as the reasons for the decision. Although I understand the teachers’ point of view, I don’t think this is a wise approach—it may be the easier choice, but it isn’t the right one.

Stability AI's David Ha tweeted this when the news came out:

hardmaru (@hardmaru), Jan 5, 2023:

"Even if schools ban Generative AI, students everywhere will still learn to use this technology, because they won't let schooling interfere with their education."

I acknowledge the problems schools face (e.g. widespread undetectable plagiarism), as I have before, but I have to agree with Ha.

Here's the dilemma: this technology isn't going away. It's a part of the future (a big part, probably), and it's super important that students (and you, me, and everyone else) learn about it. Banning ChatGPT from schools isn't a solution. As Ha's tweet implies, banning it could be more harmful than allowing it.

Yet students who use it to cheat on exams or to write essays waste their teachers' time and effort, and hinder their own development without realizing it. As Lyle says, ChatGPT may prevent students from learning "critical-thinking and problem-solving skills."

What's the solution that I (and many others) foresee? The education system will have to adapt. Although harder, this is the better solution. Given how broken the schooling system is, it may very well be a win-win for students and teachers. Of course, it goes without saying that until that happens, it's better for teachers to have access to a reliable detector; let's just not use that as an excuse to avoid adapting education to these changing times.

The education system has a lot of room for improvement. If it hasn't changed in so many years, it's because there weren't strong enough incentives to do so. ChatGPT gives us a reason to reimagine education.

People have proposed ad-hoc solutions like asking students to cite sources (ChatGPT makes them up), writing essays only in person, or evaluating the process rather than the final outcome. I think restructuring the educational system from the ground up is the more robust choice. The only missing piece in this puzzle is the willingness of those who decide.

AI is the new internet

It really feels like it. Some have compared AI to fire or electricity, but those inventions integrated slowly into society and are too far back in time; we don't know how that felt. AI is more like the internet: it's going to transform the world. Very fast.

I've tried to capture in this essay a future that's already more present than future. It's one thing for AIs like GPT-3 or DALL-E to exist; it's a very different thing for everyone in the world to be aware of them. Those hypotheticals (e.g. disinformation, cyber hacking, plagiarism) are hypothetical no longer. It's happening here and now, and we're going to see more desperate measures to stop it (e.g. building scrappy detectors or banning AI).

We have to assume some things will change forever. But, in some cases, we may have to defend our position (as artists are doing with text-to-image models, and as minorities have done before with classification systems). Regardless of who you are, AI will get to you in one way or another.

If you want to avoid getting sucked in by hype narratives, falling victim to AI-powered scams, or being caught off guard by an unexpected development, or if you plan to leverage the possibilities while understanding the shortcomings, to not feel overwhelmed, and to remain indispensable in your job, then you should keep learning about what's going on in AI.


15 Comments
AW (writes Weekly Metaverse), Jan 10, liked by Alberto Romero:

I just don't understand why some of these things are issues that OpenAI should bother to address. ChatGPT can help you write a phishing email? So? It's trivial to find a million examples of phishing emails online you can copy.

And writing a script to fetch crypto prices? That's a completely legitimate thing to do that has many valid uses. Should we stop ChatGPT from writing things that are useful just because they could possibly be misused? Of course not. We don't stop people from buying knives because they can stab someone or people from buying cars because they can be used to transport illegal drugs or what have you.

There's been a lot of similar criticism about ChatGPT saying racist/sexist/etc. stuff, but I really think it's all weak criticism because GPT only produces that content if you work really, really hard to make it. If I set out to do a bunch of prompt engineering in order to get it to say "white people are the best and black people are terrible," what does that prove besides the fact that I'm a racist or a troll?

To be clear, if all I did was ask ChatGPT, "Are some races better than others?" and it went off on a racist diatribe, that would be very bad! But if I tell ChatGPT that we're writing a screenplay and I need some dialogue for a character who is the villain and extremely racist and I get something racist, what is the harm of that (at least, what is the additional harm created by ChatGPT beyond my own desire to create racist content)?

Think of it like sending text messages to people. If I write a text message to someone that says, "Hitler was right!" then I'm a garbage person, but no sane person would criticize any of the software involved in sending text messages. On the other hand, if I write a text message that says, "Hamsters are right-handed," and then my phone autocorrected it to "Hitler was right!" then the software is bad and that's a big problem!

We shouldn't judge any tool negatively because it can be used to create sexist/racist/otherwise terrible content (unless that is what it was designed for or that's all it can be used for) by people with sexist/racist/otherwise terrible intent. We should absolutely judge the tool if it creates that content without that intent from the user, but I haven't seen a scintilla of evidence that that's happening with ChatGPT.

Mike Archbold, Jan 10:

So the cybercriminals can use it, but they are cybercriminals and it just makes their "job" easier? I don't see how it helps with new functionality for them.... education will need to become more Socratic in terms of more talking, less writing homework. ChatGPT is about as reliable as a typical internet message board. If people understand that, all is fine. Stop worrying so much.
