Calling language models "bullshit generators" and people who want AI regulation "modern Luddites" is bad for everyone
This essay is a good take. I think one reason these AIs are so disconcerting is that the future evolution of AI poses both potentially unbounded downside risk *and* potentially unbounded upside. There's no consensus on which outcome is more likely, and there won't be for a while, which is itself unsettling. It's easier to put things--and people--in a box, so the immediate reaction may be to try to do exactly that.
There aren't many risks that fall into this uncertain/unbounded bucket. Most risks are clearly asymmetric in one direction--either the downside or the upside outcome has the higher likelihood--and many are also bounded in magnitude on the upside, the downside, or both.
"Don’t fall for the easy argument" is good advice. Here is a less simple argument. The system can only accommodate a finite amount of knowledge, but the number of questions it can generate answers to is unbounded. Therefore, most of the possible answers cannot be grounded in the knowledge gained during training.
As to history, we can learn what we need to know about the future of AI by studying the readily available history of nuclear weapons. AI, like nukes, is a historic, game-changing technology first developed with the best of intentions, and it will evolve into some form of unacceptable threat.
We will obsess about the AI threat in the beginning, as we did in the 50s and 60s with nukes. And then, when no easy answers are found, we will settle into a pattern of denial and ignoring. We will comfort ourselves to sleep with the notion that "well, nothing too bad has happened so far" while the scale of the technology and the threat grows and grows, marching steadily toward some type of game-over event.
As to the label "Luddites", the irony here is so rich.
It is today's technologists who are clinging to the past, to a simplistic, outdated and increasingly dangerous 19th century "more is better" relationship with knowledge. They are so enamored of the science hero stories of previous centuries that they don't even know that they are clinging to the past.
It is today's technologists who are stubbornly refusing to make the shift from the knowledge scarcity environment of earlier centuries to the knowledge excess environment of today. It is today's technologists who refuse to learn the maturity skills which will be necessary for our survival as we go forward.
Today's AI technologists sincerely mean well, just as Robert Oppenheimer and those working with him on the Manhattan Project sincerely meant well. And like Oppenheimer and his team, they are ignorantly opening a Pandora's box that their successors will have no idea how to close.
The “stochastic parrot” analogy doesn’t sit well with me, because it puts the emphasis on calculated probabilities (stochastic means probabilistic) rather than on the parroting itself.
I’d rather go with the “autocomplete on steroids” metaphor…
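For what the metaphor is worth, the simplest possible "autocomplete" can be sketched in a few lines: just suggest whichever word most often followed the current word in some text. The toy corpus here is made up for illustration; real LLMs condition on far more context than the previous word, so this is only a caricature of the idea.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": suggest the most frequent follower of each word
# in a tiny made-up corpus. Illustrative only -- real language models
# condition on much longer context than a single previous word.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(autocomplete("the"))  # "cat" follows "the" twice, "mat" only once
```

"Steroids" here would mean scaling the same completion idea up from one-word lookback to billions of parameters of context.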
I *am* a human and a stochastic parrot / prediction model, though. Granted, only a GPT-2 equivalent, maybe.
I gave ChatGPT a not-too-complex topic (a single topic I have some knowledge of) and told it we would take turns, each generating only one word at a time.
Before submitting each of my words - I was screen-and-mic recording the session - I said aloud the word I predicted ChatGPT would generate next, after mine. Then I used OpenAI's Whisper to transcribe my spoken predictions and overlaid that transcription as subtitles on the screen recording.
Turns out I achieved >70% accuracy - including, but not limited to, near-deterministic cases dictated by grammatical rules.
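The bookkeeping behind an accuracy figure like that is just pairing each spoken prediction (from the Whisper transcript) with the word ChatGPT actually produced next, and counting matches. The word lists below are invented placeholders standing in for the real transcript:

```python
# Hypothetical stand-in data: spoken predictions vs. the words
# ChatGPT actually generated on its turns. The real experiment
# would read these from the Whisper transcript and the chat log.
predicted = ["cats", "are", "mammals", "that", "purr"]
actual    = ["cats", "are", "animals", "that", "purr"]

# Case-insensitive match per turn, then the fraction that agreed.
matches = sum(p.lower() == a.lower() for p, a in zip(predicted, actual))
accuracy = matches / len(actual)
print(f"{accuracy:.0%}")  # 4 of 5 placeholder predictions match -> 80%
```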
I may not be able to pass an inverse Turing test and trick a human into believing I am an AI, because I stray too easily into multi-modality, as befits my humanity / Natural General Intelligence. But I'd say I am quite satisfied with my performance as a stochastic parrot, drawing from mere implicit schemas of ChatGPT (since, not being an AI, I can't explicitly remember absolutely everything ChatGPT has ever responded to me).