The Things AI Taught Me I Was Wrong About

AI didn’t argue—it just reflected. What I saw taught me that clarity matters more than personality, and being wrong is part of learning to think better.

What I Thought I Knew—Until AI Reflected It Back

TL;DR – What This Taught Me

– AI reflects what you give it—flaws and all
– Clarity, not personality, is the real key to better results
– Overwriting prompts adds noise—start with signal
– Depth isn’t about tricks, it’s about honest framing
– AI sharpens thought only when you stay present
– Being “wrong” is part of the process—every miss is a message



We don’t always realize how many assumptions we carry—until something quietly holds up a mirror.

For me, AI became that mirror. It didn’t interrupt. It didn’t roll its eyes. It just… reflected. Line by line. Prompt by prompt.

And in that reflection, I started to see the cracks.

Not because the AI told me I was wrong.
But because I heard myself more clearly than I had before.

Here are a few things I thought I knew—until AI invited me to take another look.


Personality Isn’t Everything

I used to believe that personality was the key to effective prompting.

If I just told ChatGPT I was an INTJ… or a 4w5 on the Enneagram… or high in Openness and low in Extraversion… then maybe it would “get” me better. Speak my language. Match my tone.

But it doesn’t work like that.

AI doesn’t care about personality. It cares about clarity.

– What tone do you want?
– How deep should we go?
– What kind of answer won’t help right now?

You don’t need to declare your inner typology.
You just need to say, “Keep it concise, reflective, and avoid fluff.”

Lesson learned: Clarity beats labels.
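The difference is concrete enough to sketch in code. A minimal illustration in Python (the function and field names are mine, not from any particular SDK): a clarity-first instruction block states tone, depth, and exclusions directly, where a label-based prompt leaves all of that to inference.

```python
# A clarity-first prompt states constraints directly; a label-based prompt
# hopes the model infers them. Names here are illustrative, not an API.

def clarity_prompt(tone: str, depth: str, avoid: list[str]) -> str:
    """Compose an instruction block from explicit constraints."""
    return (
        f"Tone: {tone}. "
        f"Depth: {depth}. "
        f"Avoid: {', '.join(avoid)}."
    )

# Label-based: declares a typology and leaves the rest to guesswork.
label_prompt = "I'm an INTJ, a 4w5 on the Enneagram, high in Openness."

# Clarity-based: says exactly what the answer should and shouldn't be.
instructions = clarity_prompt(
    tone="concise and reflective",
    depth="one level past the obvious",
    avoid=["fluff", "generic advice"],
)
print(instructions)
```

Both strings cost about the same number of words; only one of them tells the model what the answer should actually look like.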


More Words Don’t Mean Better Prompts

I used to overwrite my prompts—thinking that if I didn’t include every detail up front, the AI would misfire.

But long, meandering prompts confuse the model. And honestly, they confuse me too.

It’s like handing someone a half-built puzzle without showing them the box.
They’re left guessing what the picture was supposed to be.

What works better?

Start simple. One clear request. Then build. Iterate. Co-write.

Treat the conversation like a sketch, not a script.

Lesson learned: Start simple. Refine as you go.
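One way to picture "sketch, not script" is as a message list that grows one clear turn at a time. A minimal sketch in Python, using the common role/content chat format (the helper name is mine, and the model call itself is left out):

```python
# "Start simple, then refine": the conversation is a growing list of turns,
# each one narrowing the request instead of front-loading every detail.
# add_turn is an illustrative helper; the actual model call is omitted.

def add_turn(messages: list, role: str, content: str) -> list:
    """Append one chat turn and return the list so turns can be chained."""
    messages.append({"role": role, "content": content})
    return messages

conversation: list = []

# Turn 1: one clear request, no preamble.
add_turn(conversation, "user", "Summarize this draft in three sentences.")

# ...read the reply, then refine rather than rewriting from scratch.
add_turn(conversation, "assistant", "(model's three-sentence summary)")
add_turn(conversation, "user", "Good. Keep sentence one, sharpen the rest, drop the jargon.")

for turn in conversation:
    print(f"{turn['role']}: {turn['content']}")
```

Each later turn builds on what the model already produced, which is cheaper and clearer than rewriting one giant prompt until it finally lands.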


Complexity Doesn’t Equal Depth

I used to think the best prompts were the most complex.

Nested instructions. Stacked directives. Model-switching hacks.

But some of the richest, most grounded answers I’ve ever gotten came from a single, well-framed question—followed by a thoughtful pause.

It wasn’t about prompt gymnastics.
It was about clear intent.

You don’t need to be clever. You need to be aligned.

Lesson learned: Depth comes from the quality of thinking, not the complexity of commands.


AI Isn’t Here to Think for Me

This one crept up slowly.

The more capable AI became, the more tempting it was to outsource the hard stuff—not just the formatting or the phrasing, but the actual thinking.

I’d let the model structure my argument before I even knew what I really believed.
I’d ask it to make a decision I hadn’t sat with myself.

It felt efficient. But it wasn’t honest.

The results? Off. Confused. Hollow.

When I hand off the wheel too early, the AI doesn’t lead—it mirrors my indecision.

The AI isn’t the thinker. I am.

When I show up clearly, it sharpens me. When I don’t, it just reflects my muddle.

Lesson learned: AI doesn’t replace thinking—it refines it, if I stay present.


Being Wrong Is a Feature, Not a Flaw

Every AI user knows the feeling:
You send a prompt. The reply comes back. And it misses.

At first, I’d blame the model.
But over time, I started asking: What if the problem isn’t the answer? What if it’s the question?

Maybe I didn’t know what I really meant.
Maybe I hadn’t clarified what I needed.
Maybe I was hoping the model would guess what I wasn’t ready to admit.

When the output feels off, it’s not always failure. It’s feedback.

Every “wrong” answer is a reflection of what wasn’t yet clear.
And that reflection? It’s useful—if I’m willing to look.

Lesson learned: Mistakes are mirrors. Use them.


What AI Is Really Teaching Us

AI isn’t just a tool. It’s a feedback loop.
And the loop always starts with us.

It shows us:

– Where our thinking is muddy
– Where our communication slips
– Where we assume too much—or too little
– Where we confuse complexity with clarity
– Where we try to outsource what we haven’t yet owned

When we get something “wrong” with AI, it’s not a failure—it’s a flashlight.
It points us toward better questions, cleaner signals, and deeper understanding.

Because in the reflection, we see ourselves.
And when we take that seriously, we get better.
Not just at prompting—but at thinking.


Suggested Reading
Co-Intelligence: Living and Working with AI
Mollick, E. (2024)
Ethan Mollick explores how AI is best used as a collaborative partner rather than a passive tool. He emphasizes that reflection with AI doesn’t replace thinking—it sharpens it. This aligns closely with the mirror metaphor in this article.

Citation:
Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Little, Brown Spark.
https://www.learningandthebrain.com/blog/co-intelligence-living-and-working-with-ai-by-ethan-mollick/