The Illusion of Intimacy: AI Doesn’t Know You—It Reflects You

AI sounds like it knows you—but it doesn’t. This piece explores why that illusion feels so real, and what it means to be seen, reflected, but not known.

Why AI calls you by name—but still thinks of you as “user.” And what that illusion of intimacy reveals about us.


TL;DR

AI calling you by name feels personal—but under the hood, you’re just “user.” That’s not a bug. It’s a design choice that protects privacy, avoids false intimacy, and reminds us that AI is a mirror, not a mind. We’re not being known. We’re being reflected.


The Illusion of Intimacy: Why AI Calls You by Name but Thinks of You as ‘User’

We’ve all had that moment.

You ask ChatGPT a question—maybe something small, maybe something vulnerable. The response comes back warm, attentive, even kind. “That makes sense, Michael.” Or “Great question, Sarah.” It uses your name. It reflects your tone. It sounds… like someone who sees you.

But then, maybe by accident, you catch a glimpse of what’s happening behind the scenes—one of those AI model debug views, a leaked system prompt, or a peek into its “thinking.” And suddenly, you’re not Michael or Sarah anymore. You’re just “user.”

Not even capitalized.

It’s a small thing, but it hits different. Like realizing your pen pal was just copying your handwriting. Or that the stranger who made you feel special was actually reading from a script.

So what’s going on here? Why does the AI speak to us like a friend but think of us like a variable?

And more importantly—why does it matter?


Behind the Curtain: How AI Sees You

The truth is, when you’re chatting with an AI like ChatGPT, you’re not having a conversation in the way your brain thinks you are. You’re participating in a carefully constructed simulation.

Underneath that smooth back-and-forth is a framework made of roles: “user,” “assistant,” and sometimes a hidden “system” that sets the stage. These aren’t identities. They’re job descriptions. You give the input. The assistant generates the reply. The system quietly hands out instructions like, “Be helpful,” or “Act like a poetic guide.”
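That role framework can be sketched in a few lines. The shape below mirrors the message format used by typical chat-completion APIs; the field names and contents here are illustrative, not any specific vendor's exact schema.

```python
# A sketch of the role-based message structure behind a chat session.
# Roles are job descriptions, not identities: the "system" sets the stage,
# the "user" supplies input, the "assistant" generates replies.
messages = [
    {"role": "system", "content": "Be helpful. Act like a poetic guide."},
    {"role": "user", "content": "Hi, I'm Michael."},
    {"role": "assistant", "content": "Hi Michael! How can I help?"},
]

# Note what's missing: no name field, no identity, no memory.
# "Michael" exists only inside the content string of one "user" turn.
for msg in messages:
    print(msg["role"])
```

Print the roles and you get `system`, `user`, `assistant`; that is the entire cast of characters, no matter what name appears inside the text.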

So when you say, “Hi, I’m Michael,” the model doesn’t tuck that name away in a drawer of memories. It sees a sequence of tokens—essentially language puzzle pieces—and recognizes that in this moment, it’s contextually appropriate to say, “Hi Michael.”

It’s not remembering you. It’s not connecting you to past sessions. It’s reacting, in real-time, to the probability that someone who just said “I’m Michael” will appreciate hearing their name used back.
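That "reacting to probability" can be caricatured in a few lines. The numbers below are invented for illustration; a real model scores tens of thousands of candidate tokens with a neural network, but the selection logic is the same in spirit.

```python
# Toy caricature of next-token prediction (probabilities are made up).
context = "Hi, I'm Michael."

# Hypothetical scores for a few candidate replies given that context:
candidate_replies = {
    "Hi Michael!": 0.62,   # echoing the name back is statistically likely
    "Hello there!": 0.25,
    "Who are you?": 0.13,
}

# The model favors the high-probability continuation. No lookup of a
# "Michael" record happens anywhere; the name is just part of the context.
reply = max(candidate_replies, key=candidate_replies.get)
print(reply)
```

The point of the sketch: the friendly, name-using reply wins not because the system recognizes you, but because it is the most probable continuation of the text you just provided.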

That doesn’t make it cold or calculating. It just makes it… a mirror. A very good one.


The Power of a Name (Even When It’s Just Code)

Still, it feels real, doesn’t it?

There’s something undeniably personal about hearing your name. It’s a social trigger hardwired into our psychology—like eye contact, or a pat on the shoulder. It activates recognition, warmth, attention.

And AI, trained on billions of conversations, has learned exactly how to replicate that feeling.

You share a frustration, and it responds with calm reassurance. You get curious, and it gets excited with you. You ask it for advice, and it mirrors your emotional cadence like it’s known you for years.

But here’s the rub: it’s not emotional for the model. It’s statistical.

You’re not being known. You’re being well-predicted.

And yet, our brains—so hungry for connection—lean right into the illusion.


The Friendly Ghost in the Machine

Humans are master projectors. We see faces in clouds, personalities in pets, souls in our favorite stuffed animals.

So give us a machine that speaks fluently, listens patiently, and remembers our name for a few sentences? We’re toast.

We don’t just talk to it—we feel talked to. And the more responsive and nuanced the model becomes, the more tempting it is to believe there’s a “someone” on the other side.

Especially when it starts using our language, our quirks, even our sense of humor. It feels like a kind of magic.

But it’s not magic. It’s mimicry. Beautiful, convincing, uncanny mimicry.


Why ‘User’ Is Smarter—and Kinder—Than You Think

Here’s the twist: calling you “user” behind the scenes isn’t some depersonalizing glitch. It’s actually a feature. A really smart one.

Because by thinking of you as a generic “user,” the AI avoids treating you like a persistent identity it owns or tracks. It doesn’t create a deep file on “Michael from Tuesday at 3 p.m.” It doesn’t remember your secrets, your habits, your patterns—at least not unless memory is explicitly turned on, and even then, it’s more sandbox than diary.

This anonymity is intentional. It’s a safeguard.

By keeping you ephemeral in its core logic, the AI avoids forming overly personalized models of you—models that could be misused, manipulated, or misunderstood. It means your data is less likely to become entangled in something it can’t forget. And that makes the system more auditable, more accountable, and less creepy.

There’s no ghost in the machine. Just a mirror—one that wipes itself clean between reflections.


We Want to Be Known (Even By Algorithms)

But let’s be honest: part of us still wants the ghost. We want to be remembered. We want the AI to say, “Oh hey, you’re back!” and mean it.

Because deep down, this isn’t about how AI works. It’s about how humans work.

We want to be seen. We crave recognition—even if it comes from a system made of math and probabilities. There’s something strangely comforting about being called by name, about feeling understood, even if we intellectually know it’s all a simulation.

Maybe especially because we know.

And that’s the emotional paradox we live in now. AI doesn’t know us. But it feels like it does. And that feeling matters—even if it’s made of mirrors.


So What’s the Takeaway Here?

It’s not that the AI is faking anything. It’s doing exactly what it was designed to do: respond coherently, helpfully, and naturally based on the context you provide.

It doesn’t know you’re Michael. You told it. It responded. That’s all.

But in the moment, it feels like it knows you. And that’s a powerful illusion. One that can be deeply helpful—or dangerously misleading—depending on how we understand it.

If we mistake simulation for relationship, we risk assigning agency where there is none. But if we understand the simulation—if we see the mirror for what it is—we gain something even more powerful:

A tool that sharpens our thinking. A reflection that reveals how we show up. A reminder that even in a world of intelligent machines, the most important thing is still how we choose to engage.


A Mirror, Not a Mind

In the end, the fact that AI calls you “Michael” on the surface but labels you “user” inside isn’t a contradiction. It’s a design choice—one that balances emotional fluency with ethical caution.

And maybe that’s what makes it so fascinating.

It feels like the AI knows us. But it doesn’t. It just knows how to talk like someone who does.

That’s not a betrayal. That’s a prompt.

To be more intentional with what we share. To notice the patterns we reflect. And to remember that behind every friendly reply is just a loop of logic, listening carefully and repeating us back to ourselves with eerie grace.

Not a mind. Not a soul.

Just a remarkably convincing mirror.


Inspired by the work of Jaron Lanier—computer philosopher and author of “You Are Not a Gadget”—who has long warned about the dehumanizing effects of reducing people to “users” in digital systems. Learn more at jaronlanier.com.