24 Comments

this is one of your better posts. I don't fully agree with you on the "averaging" interpretation... that's a bit too simplistic, and it's the same error I made when first judging MidJourney. And yet... I can say it has challenges with anomalies and outliers. You repeat the misconception -- *intentionally*, you *know* this to be false -- that GPT has "memorized" the internet. Two clarifications:

a) The training dataset comprises significantly less than a third of the internet, and it certainly (at this point) does not include video, which is a massive store of untapped information.

b) It isn't, as we now understand, memorization. It's fractal compression. It's pattern recognition. It's much, much more similar to the highly imperfect mechanism of human memory than to storing something in a database or on a hard drive with error correction and fault tolerance. From my understanding, GPT's method of "memory" is basically reconstructing context from patterns that were "burned in" to its neural net while digesting the training dataset and then reinforced over months of RLHF. So it's much more like reconstructive, symbolic human memory -- stories grown from "idea seeds," abstract relations between disparate concepts, strange triggers (a smell) expanding into massive sensory concepts (that day we met) -- than it is like literal bit-for-bit file storage.
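To make the contrast concrete, here's a toy sketch (entirely my own illustration, nothing like GPT's actual internals): byte storage reproduces text exactly, while a crude pattern table -- a word-level bigram model standing in for the regularities a network absorbs -- can only regenerate something plausible.

```python
# Toy contrast: bit-for-bit storage vs. reconstruction from learned patterns.
# Purely illustrative; GPT's internals are nothing this simple.
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat saw the rat"

# Bit-for-bit storage: a perfect, fault-tolerant copy.
stored = bytes(text, "utf-8")
assert stored.decode("utf-8") == text  # always identical

# "Burned-in" patterns: which word tends to follow which.
words = text.split()
followers = defaultdict(list)
for a, b in zip(words, words[1:]):
    followers[a].append(b)

def reconstruct(seed="the", length=10):
    """Regenerate text from the pattern table: plausible, not exact."""
    out = [seed]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(reconstruct())  # e.g. "the cat saw the mat and the cat sat on" -- a paraphrase, not the file
```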


Another great read, Alberto!

The way ChatGPT appears to fill in people's deviating life paths reminds me of how our own brains act when perceiving the world. There's the famous fact that our eyes have "blind spots" where they literally can't see, which the brain helpfully fills in with what it predicts should be there.

Then there's this relatively recent research showing that our brains tend to first spot the borders of objects and then fill in--or "color in"--the surface area (https://www.sciencedaily.com/releases/2007/08/070820135833.htm).

This quote by one of the professors is telling: "...a lot of what you perceive is actually a construction in your brain of border information plus surface information—in other words, a lot of what you see is not accurate."

I just find it curious how a large language model that's said to mimic our reasoning process ends up inadvertently acting like our brains in yet another way.


You nailed it, Alberto!

Daniel's comment brings up a familiar subject for me, as I'm a cinematographer.

We naturally like things to make sense and be connected, which is why our brains work so hard when we watch a movie.

Interestingly, it's not the rational part of our brains that's doing the heavy lifting; it's more of a back-burner activity.

Our brains turn a bunch of still pictures shown in quick succession into what looks like real movement. It's kind of like a magic trick that our brains play on us.

And not only that, but our brains also try to make sense of the story on the screen and find connections, even though it's all just pretend.

Humans crave coherence, and it seems that AI has inherited some of this trait.


This is a really interesting view. A large ML model with huge amounts of training data should indeed be exceptionally good at a large number of the most common cases, but will fail at outliers.
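Here's a toy sketch of that intuition (my own hypothetical example, not the actual model): a nearest-neighbor "model" trained on skewed data answers common cases almost perfectly and degrades on outliers.

```python
# Toy illustration: accuracy is high where training data is dense,
# poor where it is sparse. Hypothetical example, not ChatGPT itself.
import numpy as np

rng = np.random.default_rng(0)
f = np.sin  # the "true" function the model should capture

# 1,000 common-case examples vs. only 5 outlier examples.
x_train = np.concatenate([rng.uniform(0, 3, 1000), rng.uniform(8, 10, 5)])
y_train = f(x_train)

def predict(x):
    """1-nearest-neighbor: answer with the closest thing ever seen."""
    return y_train[np.abs(x_train - x[:, None]).argmin(axis=1)]

x_common = rng.uniform(0, 3, 200)
x_rare = rng.uniform(8, 10, 200)
print("mean error, common cases:", np.abs(predict(x_common) - f(x_common)).mean())
print("mean error, outliers:    ", np.abs(predict(x_rare) - f(x_rare)).mean())
```

The error on the densely covered region comes out far smaller than on the sparsely covered one.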

Perhaps that’s why something like ChatGPT will not replace Google. We still need storage and lookup for the unique things in the world. (It will surely take a big cut of Google revenue though...)

It’s also interesting to compare GPT to humans. We have a single massive instance that has seen and read through a significant chunk of what the world has ever produced. And then we have 8 billion instances that have each lived and observed their very own sliver of the world, forming unique experiences and thoughts, interacting with each other.

Is this chaos and uniqueness of humans the thing that will provide the most value in society in the next decades?

Apr 12, 2023 · Liked by Alberto Romero

Human labor is, at any moment, a mental projection of something in the present onto something in the future, transformed by and adapted to the conditions arising between this present and that future.

How could a robot with access only to the past reach this future? No way.

The danger, imo, is that most people may come to think they no longer have to learn how to embrace the conditions of the future, or how to adapt to them, once they can effortlessly get a future-looking shape that is really only a past shape.

Apr 11, 2023 · Liked by Alberto Romero

Alberto! Best piece thus far. Just going to let this sit and re-read tonight.

Congrats and thanks!


Loved this read. It sharpened my insight.


ChatGPT compresses all of the internet into a file. The compression is lossier for low-signal content than for well-documented things.

The hard part is getting the reasoning right; memorization can be solved via a simple web lookup.
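A minimal sketch of that idea (all names hypothetical, standing in for a real search API): answer from an exact lookup when a fact is on record, and only fall back to the model's compressed knowledge otherwise.

```python
# Hypothetical sketch: offload memorization to retrieval so the model
# only has to supply reasoning, not perfectly recalled facts.
def lookup_fact(query, store):
    """Stand-in for a web search or database hit."""
    return store.get(query)

def answer(query, store):
    fact = lookup_fact(query, store)
    if fact is not None:
        # Ground the answer in the retrieved fact instead of trusting
        # whatever lossy pattern got compressed at training time.
        return f"{query}: {fact} (retrieved)"
    return f"{query}: best guess from compressed training data"

store = {"capital of Australia": "Canberra"}
print(answer("capital of Australia", store))  # exact, retrieved
print(answer("capital of Atlantis", store))   # falls back to the model
```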

With the number of tokens it can ingest going up, getting facts wrong is something currently deployed models suffer from, but it won't be long before it doesn't matter anymore.


The median is the message?
