A critical analysis of OpenAI's latest blog post, "Governance of superintelligence"
Imagine that I built a car that can go 800mph. And I confidently predict that next year's model will achieve 1200mph. This may all sound quite impressive, until we realize that my "genius" invention ignores the fact that pretty much nobody can control a car at those speeds. That's what I see happening here: highly skilled technicians with a very limited understanding of the human condition, or even that much interest in it.
AI or SI has nowhere to get its values but from us, and/or perhaps the larger world of nature. In either case, evolution rules the day, the strong dominate the weak, and survival of the fittest determines the outcome. If Altman's vision comes to pass, we humans will be neither the strongest nor the fittest. Altman's vision might be compared to a tribe of chimps who invent humans with the goal of using the humans to harvest more bananas. That story is unlikely to turn out the way the chimps had in mind.
Speaking of chimps, a compelling vision of the human condition can be found in the documentary Chimp Empire on Netflix. Perhaps the most credible way to predict the future is to study the past, and Chimp Empire gives us a four-hour close-up look at our very deep past. The relevance is that the similarity between our human behavior today and that of chimps is remarkable.
The point here is that the foundation of today's human behaviors was built millions of years before we were human. The fact that these ancient behaviors have survived to this day almost unchanged in any fundamental way reveals how deeply embedded they are in the human condition.
AI is not going to change any of these human behaviors; it will just amplify them. Some of that will be wonderful, and some of it horrific. When the horrific becomes large enough, it will erase the wonderful.
In April I was in San Francisco and heard Sam Altman speaking. It was then that it dawned on me how much he and the people in the same tech bubble appear to have lost touch with reality.
I couldn't find a better example of Heidegger's Gestell: tech has become so pervasive that they can no longer think outside of technical solutions. Every problem needs an AI solution.
But does it?
Great article, Alberto.
1. One gets the impression from their post that OpenAI haven't thought through the physical and other impacts of radically expanding economic growth.
• How is this to occur? Making more stuff? That obviously isn't a good idea, from an environmental POV.
• Increasing services? That raises more questions:
-- (i) will the exchange value for services that expand GDP be paid to a very small oligarchy of companies, like MS? How will the benefits of that growth be shared?
-- (ii) if SI will usurp the economy of services, what's left for most humans to do - how will their standards of living increase, and what sort of productive work will be available for them?
-- (iii) again, what will the environmental impact be? Most of the G7 countries started getting most of their GDP from services back in the 1950s or 1960s -- yet increases in the service sector obviously still entrain exponentially growing physical impacts.
• And is the idea that an SI will come up with solutions to all our environmental problems? Sounds a bit magical. And how will it physically effect those solutions? By enforcing its will on us humans? Also, an SI can't bring extinct species back to life, or restart ocean currents like the Atlantic Meridional Overturning Circulation -- but it does require a lot of power to stay running, and all the more so if it starts messing with the physical world.
2. Thinking about regulation and the IAEA model, your cat analogy is apt if you limit the regulatory authority to software development, training, etc. But couldn't part of this IAEA-type approach be direct regulation of the requisite semiconductor ICs as "controlled substances"? Require inventories of current stocks of GPUs and the like, and register all sales of them? And perhaps register sales of semiconductor fab equipment, and possibly some 3D printers? This would give enforcement authorities a handle on who truly has the potential to implement an extralegal SI, and perhaps a separate legal basis for derailing those villains. My cat may be smarter than me at finding spaces I can't reach, but even he can't build a secret fab.
Great analysis - I JUST published my own take on it too, wondering why nobody was talking about this :)
I am sorry to say this. I should be circumspect and reflective and respectful in the way you are, but he sounds like someone who has lost his mind. Or Elon Musk. Both/and.
I have only three coherent points:
1. “The (human) governance of superintelligence” is the most painfully humorous and incredibly naive concept I’ve ever heard of.
Let me try to frame this in a way that’s a hilarious metaphor. Let’s consider some of the most talented fliers on the planet: the crows. Now imagine that there was a large flock of crows, clearly above-average, who got to talking. Keep in mind that crows are some of the most intelligent birds on the planet, so I know it’s a bit of a reach, but just bear with me. Imagine that one of them is named Bob, and he sort of fancies himself the leader of this bunch, so he marshals them all together and makes this fabulous pitch.
He’d start out saying something like this: “You know guys, I’ve been thinking. We are a really talented bunch of flyers. We’ve been doing this since birth; we are super agile in the air and we never bump into each other, or hardly ever; we’re so talented at flying that we can land on powerlines and tree branches. We are just incredibly good at this aviation business. So I’ve been giving it a lot of thought, and I think we should take over and manage this organisation called United Airlines. It might be a bit of a challenge, but I think we can do it.”
Tim: “Do you really think we could pull that off??”
Bob: “Well, we’ve got a reasonable shot at it. It might be pretty involved and downright complicated, but we’re not so dumb. I mean, we’re talking to each other now, albeit in kind of a squawky pre-language way. So we’re pretty sophisticated, am I right?”
Steve: “So what would we have to do? How can we take over and actually manage this, what do you call it,…an airline?”
Bob: “Well, each of us would need to assume different roles in a corporate hierarchy. Somebody would have to be president and CEO. I think I’d be good at that, so I’ll take a turn at that if you don’t mind. But we also need a board of directors; a director of flight operations; a chief pilot; a head of maintenance; a head of HR…..”
Steve: “Whoa, whoa, whoa, what’s ‘HR’?”
Bob: “Well, it’s like ‘CR’. You know, like Crow Relations, except for these creatures called ‘humans’, who are really a lot more finicky, complicated and much more involved to work with. They’re going to want things like pay, sick leave, health insurance, rest and duty period delineation, hiring and firing policies, for starters…. And other crazy things like sexual harassment policies; we’re going to need those too.”
Steve and the rest of the crows: “…..huh??, …Wah??” (Generally looking more confused than a crow has ever looked).
Bob: “Oh, and then some of us are going to have to actually learn how to fly these rather large and complicated things called airplanes. Oh, and maintain them. And regulate them. But hey! We’re crows!”
So, if you haven’t figured it out by now, we’re the crows. We may have a sense that some of us are pretty smart, but that’s mostly half-assed aspiration. Truth be told, we don’t even begin to know what the fuck we’re doing, even abstractly, when it comes to “governing a superintelligence”. I mean, look, I’m a human type-rated 747 pilot, and I couldn’t even begin to tell you how to run an airline schedule. I wouldn’t even know where to start. The crows? Fugetaboutit. The humans governing superintelligence?? Fugetaboutit!!
2. “AI is the tech the world has always wanted” -Sam Altman
Well maybe not all of us, but a lot of us. Me included. I want it for very personal and selfish reasons. Namely, my grounded lack of confidence in humanity to move us forward as a civilisation in any way that’s not about fits and starts of greed and corrupted incentives. Let me illuminate further.
I have a chronic, degenerative, incurable health condition. It may not ultimately kill me, but it might. I seriously doubt humanity alone can get its shit together in any coordinated manner to help me get past this condition without the help of AI/AGI (and its rather blatant inroads toward assisting in the creation of a cure). Human-crafted capitalism has a habit of profiteering from the treatment of disease, not curing it. (There’s BIG money in treatments, not cures.) So yes, I want AI/AGI, or even SI, in my corner. I don’t trust humanity to do right by me, or even its capacity to do right by itself. I’m 63, so I don’t truthfully want humanity to get all timid if there’s a 10% chance of human extinction. I’m fine with a 90% chance of survival. I’ve personally been through worse. Does that make me selfish? Perhaps. That’s a human-nature quality. Does that make me timid? Fuck no! And I’m guessing a lot of humans feel the same way. Throw the damn dice already!
3. Realize that humanity is, in the big picture, just a boot-up species for superintelligence. We are the dinosaurs of our era, or the Intel 386 chips running DOS or Windows 3.1.
People get all worked up in their worry about AI goal misalignment, but it’s really the human bad behaviours that are completely out of alignment with long-term survival. Actual human conduct has already demonstrated that it is the real threat to all that lives. AI/AGI/SI will at least be coherent in goal attainment with a statistical likelihood that humans could never match. Humans will never be uniformly coherent about anything. That human quality puts serious drag on civilisation and its long-term prospects. Humans are generally the problem, just like religion (a human invention and maladaptive practice) tends to fuck up everything it touches.