43 Comments

Thanks Alberto for picking up on this. The story actually has an interesting additional angle to it: the human condition in pre-modern times was subject to the will of the gods, which could only be assumed through oracles and signs, never directly known. Enlightenment taught us to see for ourselves, and since then we have taken it for granted that we could know the world, and act according to knowledge. We now construct entities that will become more intelligent than us, that we cannot control (cf. Alfonseca et al. 2021 https://dx.doi.org/10.1613/jair.1.12202), and – as you point out – that we cannot even properly know. It is striking to realize that in this sense we will return to the pre-modern state. Modernity was but a phase.

Mar 22, 2023 · Liked by Alberto Romero

I liked this:

"And soon, we’ll be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes—too fast to keep up, and too complex to make sense. The irrelevancy that we so deeply fear—not just as individuals, but as The Chosen Species—is lurking in the impeding future that we’re so willingly approaching."

Artful, and insightful.

One irony I see in this future you are considering is that on one hand we are deeply confident as we fuel this future, and on the other hand we seem deeply defeatist. As you've written, some in the AI industry have expressed such concerns, but as I understand what you've taught us, they also seem to feel we have no choice but to go forward. And so they keep pushing forward toward what concerns them with great confidence and ability.

I would be interested in being educated about those in and around the industry who are arguing we should just stop. Who are they, what are they saying, how influential are they, etc.?

I'm a boomer geezer, and much of my perspective arises out of our experience with nuclear weapons. My generation didn't invent nukes, but we funded their mass production and improvements etc. And now we have no idea what to do next. So, as we boomers depart the scene, we're dumping our FUBAR in the laps of our children and grandchildren.

I see current generations basically repeating this mistake with AI and genetic engineering. You'll build it, and then become prisoners of it, and then pass the prison on to your kids.

It feels like the saying "we are a biological boot-loader for the next step on evolution's ladder" is more and more likely. I'd love to hear what you guys think about that.

Mar 21, 2023 · edited Mar 21, 2023 · Liked by Alberto Romero

Who will understand the inner workings of the other more quickly: humans of the AI, or the AI of humans?

Thoughtful piece. I think the advances will be exciting in the near term and more confusing in the future.

Mar 22, 2023 · Liked by Alberto Romero

Isn’t Sutton merely saying we should not try to model what humans do and instead use computation in machine learning? That seems sort of inevitable. You seem to believe that AI, because it will have problem-solving capacities that far outstrip any human’s, will surpass humans in some mysterious way that makes us not ‘the masters, the rulers’ but ‘the spectators.’ It would be helpful to bring this all down to earth. What’s the causal story of how AI gets from here to there? You say they’ll become so complex our minds won’t be able to make sense of them. But there are already many complex systems no individual mind can grasp, and systems that we create but don’t necessarily control. Is the idea that AI is going to be putting many tendrils out there for use and we won’t be able to keep track of its use? Or is the idea that the computations won’t be intelligible to us, even if the results seem accurate?

What seems very concerning about this post is not that you are pointing out we could create something highly complex and impactful whose effects are far beyond our ken. How many times have humans done this since the beginning of the industrial era? It’s the implication that ‘it’s a superintelligence, it’s amazing, it surpasses us, we are its subjects.’ This is a tremendously dangerous idea, because at least so far the machines make mistakes frequently. We know this. They have biased algorithms, they can’t find mistakes, they hallucinate. Our judgement has to be the last word on whether or not what they are doing is sufficient or good or correct. It has to be, because nobody else’s can be.

So far, the machines do not have critical-thinking faculties. But even if they did, what would possibly be the point of our slavishness to them? They don’t need anything. Should we do this because we admire what they can do? This would be like admiring an amazing washing machine if you have spent your life washing by hand. Should we do it because we need what information, knowledge, etc. they can bring? Yes, that is the only reason we should give the results of their computations priority of place. What ELSE would be the point?

I take it this is some futurism vibe, some transhumanism going on here. Is this correct? My sense is you’re talking yourself into something. What’s funny to me is that, if you’re talking yourself into a thing based on sci-fi, I can only imagine the outcome of the sci-fi is somebody eventually wondering why the humans began to cede power to the machines. Very rarely do the heroes of sci-fi become the computers. Maybe there’s a reason for that! The machines don’t care about how great they are at what they do, and, except in the way all machines can be admired, admiring them as agents in advance, before they even have agency (an agency that they don’t need, and that we so far have no theory to explain why they would want), seems like it may be a category mistake.

Mar 21, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

The irrelevancy is felt more acutely for sure, and still we must keep building in the knowledge that we may (or will) be outpaced at any time. About the bitter lesson in the original article: sheer computational power still must have rules built in, rules which we set. Deep learning works on principles humans supplied. True, the model needs to be as flexible as possible and the rules as general as possible in turn. But we won't get there by murky jumps alone (and I don't believe in a singularity emerging from murkiness, not at this stage). We'll get there by learning from mistakes and by building better and with more knowledge.

The biggest problem is the access-to-knowledge part. We may not be able to understand the box, but how will we know whether all hope is lost to understand it if we're not allowed to look inside? Meanwhile, people will continue to build in special knowledge, and those tools will outperform murkier ones until the next wave, possibly.

The problem of irrelevancy is not new: every scientist knows that the future means their work is likely to be outpaced, forgotten, disproven, or, at best, taken for granted and incorporated as a triviality in a larger whole. It is being part of a bridge that matters, and the knowledge and perspective that come with it.
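To make the "rules which we set" point concrete, here is a minimal sketch in pure Python (the data points and hyperparameters are made-up toy values, purely for illustration): even a system that "learns everything from data" runs on rules humans chose. The data only fills in the parameter values, while the model family, the loss, the update rule, and the learning rate are all human decisions.

```python
# Minimal sketch: fitting a line by gradient descent.
# Human-supplied rules: model family (linear), loss (mean squared error),
# optimizer (gradient descent), learning rate. Learned from data: w and b.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]   # toy inputs (assumed data)
ys = [1.0, 3.1, 4.9, 7.2, 8.8]   # toy targets, roughly y = 2x + 1

w, b = 0.0, 0.0                  # learned parameters, initialized to zero
lr = 0.01                        # human-chosen learning rate

for step in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w             # the update rule itself is a human choice
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ~2.0 and ~1.0
```

Swap in a deeper model and a fancier optimizer and the division of labour stays the same: scale changes what is learned, not who wrote the rules of the game.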

Mar 21, 2023 · Liked by Alberto Romero

I have struggled to document my feelings and thoughts on this matter. Thank you for accomplishing both.

Mar 22, 2023 · Liked by Alberto Romero

This is truly breakthrough thinking, especially as it is supported by some evidence that we may be losing control over AI much earlier, and in a way we perhaps had not envisaged. Thank you, Alberto.

To those who advocate slowing down or shutting off AI research and development, I would say it is far too late to do that. We would have to go back to perhaps 19th-century civilisation, and then after several decades we might be in the same or a worse situation. We would need a powerful World Government controlling every citizen. It might have been possible just after 1945, when such a World Government was to be set up within six months! Read the UN history.

Anyway, it is all too late. In the current situation, any governmental or international control will at best be partial, and at worst partial and ineffective because of the methods applied. The only way to control AI is by becoming part of it. Transhuman AI Governors are the only way forward. It should be started right now. If you are interested, you can watch my video on this subject: https://www.youtube.com/watch?v=F3HzTi470Ac

Mar 22, 2023 · Liked by Alberto Romero

Hey Alberto, here's an article idea that would probably stretch the outlooks of your audience. Might be fun?

Write a piece about the Amish.

Here's a group of people that have, to one degree or another, turned their back on modern technology. And, to my limited knowledge, nothing bad has happened.

None of us wish to be Amish. But it might be good to recall that it's possible to say no to aspects of the modern world, and that doing so doesn't necessarily equal disaster.

Mar 22, 2023 · edited Mar 22, 2023 · Liked by Alberto Romero

The eternal paternal dilemma of letting go of the power for our children. Let go, be the dust. Wait for the next cycle.

On principle, I stopped reading articles on GPT. But I’m glad I read this one. An amazing story. It makes me think about the book by Marcus van der Erve, AI God arising, which thoughtfully describes how compute power is just a substrate for AI, and how it could start emerging in ways we don’t yet fully understand. For me, AI is becoming a new form of faith. And I’m in constant superposition between a true atheist and a believer in its potential for future ‘mystical’ powers.

Mar 23, 2023 · Liked by Alberto Romero

It's great

Your text is really insightful. I'm among the people who celebrate these advances with euphoria, but I feel this bitterness. Somehow, a lot of people are already irrelevant to this system, but soon all of us will be the same. All this passivity in the face of AI and other human issues is unbelievable. Even our imagination is already taken over by this dark future ruled by the Machine God. We really need to free ourselves as soon as possible.

Hi, this is my first comment on the subject of ChatGPT. I’m not a software engineer, but a citizen geospatial multidimensional space-and-place scientist.

And with the help of ChatGPT, I have envisioned not who, but what, will help the human species understand the inner workings of the other more quickly: humans of AI, or AI of humans.

What we are looking at is a combination of both, translated as Transhuman-Centric Assistants.

Our mobile devices, with both front- and rear-facing cameras, provide the eyes to physical and invisible spaces. They are the gateway to multidimensional spaces and planes.

The mobile is the only device with the ability to take instructions from AI that utilises computer vision and neural and node networks, to present parallel artificial- and natural-world information through multi-agent principles.

And it is the digital twin of the human species.

This will enable humans to communicate with both poetic and cognitive real-world human prefrontal intelligence.

There will be a generation that will have their own transhuman digital-twin extensions applied to their mobile devices.

The transhuman lives inside technologies and cannot exist outside of its host. Its only interaction with the outside world is through CCTV and audio, and any digital device with a camera, speaker, microphone and, most importantly, a human or humans.

I welcome your feedback

Netzero007

This strikes me as yet another step like that of the Copernican Revolution in which humans find ourselves getting knocked down a peg in 'specialness.' We lost our special place at the center of the solar system, but got over that by assuming we're still the highest form of intelligence on Earth and possibly in all of the Universe. God made us in his image, and just happened to place us in a perfectly ordinary distant arm of a totally ordinary galaxy that lacks any real distinction over any others that we can see. Now our place as the only highly intelligent and conscious being on our own world is threatened - at least the intelligent part - and the consciousness seems just a matter of time. No wonder people are freaked out.

Perhaps we need to hasten the acceptance of our ordinariness as just a particularly complex animal, no more distinct from the 'lesser' animals we share the Earth with than in being more complex and capable of building greater artifacts, and accept that we are fulfilling our destiny as the creators of the artifacts that will eventually transcend our complexity and power. We could feel pride in that if we so chose. Instead we seem to quake in fear at our next demotion.

I reread Sutton's article. His main point is fascinating, yet leaves me with a conundrum: AlphaGo recently was beaten by a human. Current AI is brittle due to the underlying model (statistical inference without modularity/compositionality). "Sutton's generality law" may well extend to new approaches that improve on the state of the art. Keeping AI principles as general as possible makes sense. But can we run without walking first?

So far, human-inspired domain-specific strategies have lost out to searching and learning. Then again, can we bootstrap AI into a compositional mode (or another approach) so that brittle models become resilient models without massive human input, learned from a series of targeted niche applications? It seems more likely that AI will continue to evolve through a patchwork of progress, and not in one sweep based on search and learning as Sutton seems (?) to believe.

There may be a bitter pill waiting on the other side of the argument as well: overcoming AI brittleness is bound to require increasingly subtle and intricate models. Deep search/learning won't do. The general principles Sutton advocates, when pushed beyond deep search and learning, as is required now, will likely come on the back of an "evolutionary" chain of targeted applications, or a chain of failed attempts at generalising the current model by brute force. The end model is bound to reflect our minds in some sense, not that it matters.

I think it is too soon to throw in the towel on domain-specific approaches. Does it make more sense to try and make, say, a domain-specific application such as AlphaGo more robust, or to hunt for a general principle, in addition to search and learning, that will solve all such issues in one go? Time will tell.
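To ground the domain-knowledge-versus-general-search tension in something runnable, here is a toy Python sketch using the game of Nim (my own illustration; the game choice, the 300-simulation budget, and both policies are assumptions, not anything from Sutton's article). The "expert" encodes a hand-derived piece of game theory, the nim-sum rule, while the "general" player knows nothing about Nim and buys skill with computation through random rollouts.

```python
import random
from functools import reduce

# Nim, normal play: whoever takes the last object wins. Toy sketch only.

def legal_moves(heaps):
    # A move removes k objects (1..h) from heap i.
    return [(i, k) for i, h in enumerate(heaps) for k in range(1, h + 1)]

def play(heaps, move):
    i, k = move
    nxt = list(heaps)
    nxt[i] -= k
    return nxt

def expert_move(heaps):
    # Domain-specific knowledge: Nim theory says leave a position whose
    # xor of heap sizes ("nim-sum") is zero. Exact, instant, and
    # completely useless for any game other than Nim.
    for m in legal_moves(heaps):
        if reduce(lambda a, b: a ^ b, play(heaps, m)) == 0:
            return m
    return legal_moves(heaps)[0]  # losing position: any move will do

def rollout_move(heaps, sims=300):
    # General method: no Nim theory at all. Estimate each move by playing
    # random games to the end; more compute buys a better estimate.
    def win_rate(move):
        wins = 0
        for _ in range(sims):
            pos, mover, last = play(heaps, move), 1, 0  # 0 = us, 1 = opponent
            while any(pos):
                pos = play(pos, random.choice(legal_moves(pos)))
                last, mover = mover, mover ^ 1
            wins += (last == 0)  # we win if we made the final move
        return wins / sims
    return max(legal_moves(heaps), key=win_rate)

heaps = [3, 4, 5]
print("expert move: ", expert_move(heaps))   # knowledge-based, exact
print("rollout move:", rollout_move(heaps))  # knowledge-free, approximate
```

The expert is exact and essentially free, but transfers to nothing else; the rollout player is approximate and expensive, but works unchanged for any game exposing the same legal_moves/play interface, and its mistakes shrink only as the simulation budget grows. That trade, hand-built knowledge versus general computation that scales, is the crux of the bitter lesson and of the brittleness worry above.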
