Algorithms Know You. Soon, They Could Hack You
In the (frightening) words of bestselling author Yuval Noah Harari.
That headline isn’t intended to scare you. It’s intended to keep you aware of the direction we’re going. Knowledge can sometimes give you a kind of power you can’t have otherwise. This story promises the power of awareness.
Algorithms dictate our (digital) lives. Your favorite Netflix show, your weekly customized Spotify playlist, the last YouTube rabbit hole you fell into, Instagram stories, Facebook posts… All these apps—that you probably use daily—are powered by algorithms solely designed to keep you engaged. You already know that.
What you may not know is that we aren’t even close to the pinnacle of algorithm design. Companies can get much better at it. They can make algorithms much more engaging. Much more addictive. And the info they’d gather from you could be much more private and intimate. I’ll try to make sense of a very possible future that, although invisible to many, you’ll see coming after reading this.
When it knows you better than you know you
As I wrote recently, TikTok is rapidly becoming the undisputed king of social media. So much so that others have copied its interface design and are leaning ever more heavily on AI-recommended content. TikTok’s For You Page is a polished algorithm that builds on—and innovates from—all its predecessors. It’s capable of profiling your tastes and interests faster than any other, which not only makes it more addictive but also better at knowing you.
And I’m not talking just about explicit interests—any algorithm could know you like rock music if you follow a dozen rock playlists. Nor am I talking about implicit interests you may signal unconsciously, for instance by rewatching a make-up tutorial (as the WSJ proved in its 100-bot experiment).
No. TikTok’s algorithm goes beyond that. It can get to know you so well as to know things about you that not even you know.
That’s what happened to Mashable writer Jess Joho. She revealed in an eloquent article how TikTok—guided by her inner, latent desires—drove her from “straight TikTok” down the road to discovering what she could have known long ago, but didn’t: that she was bi. A notable victory for the algorithm. And, as impressive as it sounds, hers isn’t an isolated case.
It’s great that an algorithm can help people get rid of layers of societal bias and pressure, as it did with Joho, but if we dig deeper we find something else. Something scary. The algorithm led her to that realization not because it’s helpful, but because it’s optimized for engagement. The For You Page may as well have shown her content that made her miserable—as long as it made her stay there.
Powerful algorithms as mindless forgers of identity
In any case, regardless of whether it helps or hinders, once the algorithm knows you better than you know yourself—which certainly implies it also knows you better than those around you do—it possesses unprecedented power over you. TikTok suddenly becomes the place where you feel most at home.
Joho ended up happier, but the story could have been very different. Algorithms can’t assess the consequences of their interactions with us. They don’t know whether they produce good or harm. (Of course, because algorithms know nothing, really. It’s the engineers behind their design who should be held accountable when analyzing any repercussions.)
The same process that allowed Joho to accept her newfound identity could lead a young teenager—TikTok’s most engaged demographic—into a spiral of depression and sadness, as happened to the WSJ’s bot, Kentucky_96.
As Wired contributor Eleanor Cummins says, these algorithms shouldn’t be treated “as diagnostic, or even deterministic.” They’re static. They’re rigid. They simply propagate the current reality of a person into their future in the form of a box of manufactured characteristics. Teens who don’t know better could very well take it as an immutable part of their identity.
Powerful algorithms are identity forgers whose only dataset is your online behavior, their only modus operandi is mindless, and their only purpose is to exploit your psychological vulnerabilities. Doubling down on depressing content, radicalization videos, or dangerous challenges doesn’t help anyone find themselves, but lose themselves.
Outsourcing self-awareness to a digital authority
A 13-year-old (TikTok’s minimum user age) could unintentionally prompt the app to aggressively recommend content of a particular kind. They may then internalize a self-image that fits that content, reinforced by their still-immature psychological development.
As TikTok keeps imprinting the kid’s inner self with an endless feed of targeted content, it becomes a sort of authority. Something they trust. And from there, as writer Briana Brownell calls it, it turns into a tool for “outsourcing self-awareness.”
The implications are deeply damaging. First, a teenager—who is still discovering their internal world—could refrain from developing robust introspection skills. Why would they, when TikTok keeps divining what they need to define themselves? The path to self-discovery is abandoned, the task delegated to an algorithm whose sole purpose is to keep the oblivious teen glued to the screen.
Second, as an authority, it becomes “hard to argue with,” as author and mathematician Hannah Fry says. An unappealable black box. The opaque nature of the algorithm and our inherent credulity—greater the younger we are—make it a perfect cursed mirror.
Imagine a kid who looks into it every day in search of answers to her unarticulated questions. She gets what she wants. What she expects. The feedback loop grows strong. An invisible—but robust—facade appears. But one day, because the algorithm isn’t perfect, it fails. And keeps failing.
What would a confused teen, who has unconsciously built a powerful trust in the algorithm, do once they face a reality that mismatches their feelings and sensations? Doubt at best, denial at worst.
The lack of well-developed self-awareness would leave them defenseless, without the tools or strength to push back and escape the spiral that—although helpful at first—is now reshaping them into something they aren’t.
TikTok’s algorithm can know you very, very well. Still, as good as it is, it’s not even close to what AI-powered algorithms could become.
The dark future of algorithm design
Social media algorithms are powerful, but they only use simple behavioral cues people leave when using the apps. Watch time, likes, comments, and other similar variables entirely define the recommendations.
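To make the idea concrete, here is a minimal sketch of an engagement-driven ranker. It is purely illustrative—the signal names, weights, and scoring function are invented for this example, not TikTok’s actual system—but it shows the key property the paragraph describes: a handful of behavioral cues entirely determine what gets recommended, and watch time alone can outweigh every explicit action.

```python
# Toy engagement ranker (illustrative only; not any real platform's
# algorithm). Candidate videos are scored purely on behavioral
# signals and recommended in descending score order.

from dataclasses import dataclass


@dataclass
class Signals:
    watch_fraction: float  # share of the video the user watched (0..1)
    liked: bool
    commented: bool
    shared: bool


def engagement_score(s: Signals) -> float:
    # Hypothetical hand-picked weights: watch time dominates, explicit
    # actions add smaller boosts. Real systems learn such weights.
    score = 3.0 * s.watch_fraction
    score += 1.0 if s.liked else 0.0
    score += 1.5 if s.commented else 0.0
    score += 2.0 if s.shared else 0.0
    return score


def rank(candidates: dict) -> list:
    # Highest predicted engagement first.
    return sorted(candidates,
                  key=lambda v: engagement_score(candidates[v]),
                  reverse=True)


feed = rank({
    "dance_clip": Signals(0.4, liked=True, commented=False, shared=False),
    "sad_vlog": Signals(1.0, liked=False, commented=False, shared=False),
})
print(feed)  # → ['sad_vlog', 'dance_clip']
```

Note what happens: the fully watched video outranks the merely liked one. A ranker like this has no notion of whether the content it promotes is good for the viewer—only that they kept watching.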
However, people constantly produce far richer kinds of information that could allow the right algorithm to learn not just things about you that you don’t know yet, but things that aren’t meant to be known—not even by you. Things meant to remain unknown forever.
Yuval Noah Harari explains in this video how a Kindle device connected to face recognition software could correlate emotional reactions like laughing and crying to the corresponding chapters of a book, giving Amazon all the info it would require to know what you like and don’t like—and what to recommend that you buy next.
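Harari’s thought experiment can be sketched in a few lines. Everything here is hypothetical—the emotion labels, the event data, and the tally are made up to show the mechanism, not any real Amazon feature: pair each detected emotional reaction with the chapter being read, and the chapters that moved the reader fall out of a simple count.

```python
# Illustrative sketch of Harari's Kindle thought experiment (entirely
# hypothetical; no such product exists as described). Each event pairs
# a chapter with an emotion label from an imagined face-recognition
# feed; tallying them reveals which chapters moved the reader.

from collections import Counter

# Hypothetical (chapter, detected_emotion) events from one reading session.
events = [
    (1, "neutral"), (2, "laughing"), (2, "laughing"),
    (3, "crying"), (3, "neutral"), (4, "laughing"),
]


def emotional_profile(events):
    # Count detected emotions per chapter.
    profile = {}
    for chapter, emotion in events:
        profile.setdefault(chapter, Counter())[emotion] += 1
    return profile


profile = emotional_profile(events)

# Chapters where laughter dominated: candidates for "you'll love more
# books like this" recommendations.
funny = [ch for ch, c in profile.items()
         if c["laughing"] > sum(c.values()) - c["laughing"]]
print(funny)  # → [2, 4]
```

The unsettling part is how little machinery this takes: a timestamped emotion stream plus reading position is enough to build an emotional map of you, chapter by chapter.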
“As you read the book, the book is reading you,” says Harari.
And he goes even further.
“[In] 5-10 years, [it’ll probably be possible] to connect Kindle to biometric sensors on or inside your body which constantly monitor your blood pressure, your heart rate, your sugar level, your brain activity […] By the time you finish the book you forgot most of it but Amazon will never forget anything […] Amazon knows exactly who you are, what is your personality type, and how to press your emotional button.”
And that’s precisely the direction we’re going. That kind of technology isn’t science fiction. Face recognition—although controversial—is already used for hiring, surveillance, control, and profiling individuals from discriminated-against groups. Biometric information can identify a person or reveal details about their health.
By combining these AI-powered technologies, companies like Amazon, Google, or Meta could easily develop algorithms 100X more powerful than TikTok’s. Programs unimaginably better at keeping us engaged—or at whatever other objective they decide to optimize for.
That’s what Harari refers to when he says future algorithms could be used to hack humans. And he doesn’t mean individual people who could be targeted only after a carefully customized assessment of their idiosyncrasies. No. With powerful computers and tons of data points—both owned in plenty by those very companies—they could create an algorithm powerful enough to hack humans at scale.
Such an algorithm could recommend to anyone where to live, what to study or work on, whom to date or marry, or whom to vote for. It would know your personality and character. Your strengths and weaknesses. It would know how to drive your behavior with impossible precision and manipulate you in action and intention. It would override your free will—or prove outright that free will was never really a thing in the first place.
It would certainly be the most powerful weapon—psychological or otherwise—humankind has seen and would ever see.
The monster I’ve depicted throughout this story doesn’t exist yet. But its shadow looms ever larger as tech companies lay the groundwork.
If it’s physically and biologically possible (i.e., if the human mind doesn’t possess some kind of unhackable essence), this technology could be within reach, at the current pace of development in AI and data science, within a few decades.
We, either individually or collectively, can’t do much to stop technology’s progress. What we can do for ourselves, and for those around us who trust our judgment, is to be aware. Being scared is useless, as any of this may never happen. Conflicting interests could get in the way. Regulation could get in the way. Even biology could get in the way.
But being aware is completely free—and makes us wiser and better prepared.