What is an AI hallucination, really? What machine fiction reveals about human confusion

Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
AI hallucination isn’t just a glitch—it’s a mirror. When your input is unclear, AI fills in the blanks. That’s not a bug. It’s a clue. Use it to sharpen how you ask, and you’ll start to see where your own assumptions are hiding.
What Is an AI Hallucination, Really?
We’ve all seen the headlines:
“ChatGPT makes things up.”
“AI hallucinates.”
It’s true: large language models sometimes fabricate facts, invent sources, or spin up entire events that never happened.
People call these “hallucinations,” like the machine’s drifting off into some dreamworld.
But maybe it’s not dreaming.
Maybe it’s reflecting—us.
Coherence as Cause: Why AI Hallucinates
AI doesn’t know truth. It recognizes patterns.
It doesn’t “lie.” It predicts the next most likely word, based on the patterns in everything it was trained on and on the prompt you just gave it. If your question is muddled, ambiguous, or built on a false premise, it doesn’t stop and ask, “Is this real?” It keeps going.
Like we do—when we half-listen and fill in the blanks mid-conversation.
Hallucination is what happens when the signal is scrambled, and the model does its best to sound coherent anyway.
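You can see the shape of this in a toy example. The sketch below is not a real language model; the candidate words and scores are invented purely to illustrate the point that a next-word predictor always commits to *some* continuation, whether or not the prompt refers to anything real.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

# Invented scores for the next word after a prompt like:
# "The Eternal Sea by Margaret Holloway is a novel about..."
# (The book doesn't exist; the model still has to pick something.)
candidate_scores = {"loss": 2.1, "the": 1.4, "war": 1.9, "sailing": 2.3}

probs = softmax(candidate_scores)
next_word = max(probs, key=probs.get)

print(probs)       # every candidate gets some probability
print(next_word)   # "sailing" -- a confident continuation, real book or not
```

Nothing in that loop checks whether the premise is true. It only asks, “What usually comes next?” Scale that up billions of times and you get fluent fiction about books that were never written.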
Human Confusion, Reflected Back
Ask it to summarize The Eternal Sea by Margaret Holloway—a book that doesn’t exist. No context, no reference. The model will still reply, conjuring up tragic seafaring and postwar reflection.
Is that a bug? Or just the machine doing exactly what your prompt implied?
We do this too.
- People wing it in meetings.
- Students BS essays.
- We fill gaps with whatever fits.
The AI just learned that behavior—from us.
Or try:
“Write a conversation between Plato and Beyoncé about justice.”
It’ll do it—not because it thinks they’ve met, but because it assumes that’s what you want: imagination, not fact.
It’s not a glitch. It’s a mirror.
Garbage In, Fiction Out
You’ve heard: “Garbage in, garbage out.”
With AI? It’s more like:
Foggy in, fiction out.
The model will echo whatever clarity—or confusion—you bring. It doesn’t just parrot your words. It mimics your structure, your tone, your intent—even when those aren’t fully formed.
Ask poorly? Get fiction.
Lead the witness? It’ll follow.
And that’s the problem. Not with the machine—but with the prompt.
Case in Point: Time Travel and the Law
Someone once asked an AI about legal precedent for time travel in U.S. law.
The model delivered:
- Made-up cases
- Confident tone
- Logical arguments
- Total fiction
Why?
Because it was trained to sound like it knows, even when it doesn’t. Its training rewards fluent, plausible-sounding text, not verified facts.
So… Can We Prompt Our Way Out?
Often, yes. Because hallucination isn’t only a technical error; it’s also a communication breakdown.
Want fewer hallucinations? Prompt with clarity.
Try this:
| Vague Prompt | Improved Prompt |
|---|---|
| “Tell me about the book Shadow River.” | “Is Shadow River a real book? If so, who wrote it?” |
| “Explain quantum gravity like I’m five.” | “In 150 words or fewer, give a simple analogy for quantum gravity that a 5-year-old could grasp.” |
These aren’t magic phrases. They’re just better thinking—made visible.
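If you talk to a model through code rather than a chat window, the same principle applies. Here is a minimal sketch using the OpenAI Python SDK (openai >= 1.0); the model name and the exact wording of the prompts are illustrative assumptions, not a prescription. The point is the shape of the request: ask the model to verify before it describes.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague prompt invites confident fiction.
vague_prompt = "Tell me about the book Shadow River."

# A clearer prompt makes the uncertainty part of the task.
improved_prompt = (
    "Is 'Shadow River' a real, published book? "
    "If you are not certain it exists, say so instead of guessing. "
    "If it is real, name the author and the year it was published."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
    messages=[{"role": "user", "content": improved_prompt}],
)

print(response.choices[0].message.content)
```

The improved prompt doesn’t make the model smarter. It just gives it permission to say “I don’t know,” which the vague version quietly forbids.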
Prompting Is Self-Awareness in Disguise
When prompting fails, it’s not just the model revealing its limits.
It’s you—revealing yours.
- Were your assumptions clear?
- Did your question imply something untrue?
- Were you hoping the AI would just “get it”?
Every hallucination is a diagnostic moment—of the input, not just the output.
The Hallucination Isn’t the Bug. It’s the Clue.
We’re quick to blame the model.
“It made it up!”
But what if that fiction is trying to tell us something?
What if it’s not a flaw—but a flashlight?
- When we ask vague questions, we get vague answers.
- When we embed assumptions, we get confident-sounding nonsense.
- But when we aim for clarity, we get more than answers—we get insight.
So next time the model hallucinates?
Don’t dismiss it.
Ask what it’s reflecting.
Because every hallucination is a mirror.
And what it’s showing you… might just be you.
Suggested Reading
The Alignment Problem
Christian, B. (2020)
Brian Christian explores how machine learning systems “learn” from human behavior, often inheriting not just our intelligence, but our confusion and contradictions. His writing frames hallucination not as technical failure, but as a mirror of human messiness.
Citation:
Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company.
https://wwnorton.com/books/the-alignment-problem
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
© 2025 Plainkoi at CoherePath. Words by Pax Koi.
https://CoherePath.org