Let AI critique your article before your friends have to. Four prompt styles to sharpen your writing through reflection, clarity, and tonal feedback.
Spare your friends. Let the AI critique you first. By combining these AI-driven approaches, you can get highly effective and diverse feedback on your articles without relying solely on your personal circle.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI. AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
Tired of burdening your friends for article feedback? This guide shows how to use AI as your editor, audience stand-in, and tone checker—so you can refine your work through structured, reflective prompting before ever hitting “publish.”
Why This Matters
Here are four distinct ways you can use AI to critique and improve your own writing—each reflecting a different lens that mirrors your intended audience, your editor, or your emotional tone.
At the heart of this is the “prompting as collaboration” philosophy. You’re not just asking for feedback—you’re prompting the AI to roleplay as different types of readers or critics.
AI as a Target Audience Reader
How to Use It: Give the AI a clear persona that matches your target audience (e.g., “Ma and Pa,” a busy professional new to AI, a skeptical student, etc.).
Prompt Example:
Act as [specific persona, e.g., “a busy but curious small business owner who knows a little about AI but gets confused by jargon”]. Read the following article.

Article: [Paste your entire article here]

From my perspective as this persona, please tell me:
– Is the core message of the article clear? What do you understand it to be?
– Does the tone feel engaging and encouraging, or too academic/demanding?
– Are the examples easy to understand and relatable to my business?
– What are the strongest parts of this article for someone like me?
– What parts are confusing or might make me stop reading?
– Does it make me want to learn more about Pax Koi/Plainkoi?
AI as a Critical Editor (Focus on Craft)
How to Use It: Instruct the AI to act as a professional content editor, focusing on writing mechanics, flow, and reader retention.
Prompt Example:
Act as a professional content editor specializing in engaging online articles. Your goal is to help me refine this piece for maximum clarity, impact, and reader retention.

Article: [Paste your entire article here]

Please provide feedback on:
– Overall Clarity: Are there any vague sentences, jargon, or ambiguous ideas?
– Flow and Transitions: Do the sections connect smoothly?
– Tone Consistency: Does the tone stay empowering and conversational throughout?
– Conciseness: What feels redundant or could be tightened?
– Hook and Conclusion: Are they effective and compelling?
– Actionability: Are the “Try This Now” sections clear and useful?
Suggest specific ways to rephrase or restructure unclear sections.
AI as a Sentiment Analyzer / Engagement Predictor
How to Use It: Ask the AI to simulate the emotional and engagement journey of a first-time reader.
Prompt Example:
Act as an analyst predicting reader engagement. Read the article below.

Article: [Paste your entire article here]

Describe the likely reader experience. At what points might they feel:
– Intrigued?
– Confused?
– Empowered?
– Bored or ready to stop reading?
– Motivated to act?
Also: What are the 3–5 most likely takeaways a busy reader would remember?
Use the “AI Prompt Coherence Kit” as a Diagnostic Tool
This is a direct application of the Plainkoi method. Run your article through the kit’s four signature tools:
Signal Clarity
Frequency Harmonizer
Logic Integrator
Collaborative Posture Reflector
Prompt Example:
Using the principles of the AI Prompt Coherence Kit, analyze the following article for its clarity, tone harmony, goal logic, and collaborative posture toward the reader. Point out any fractures and suggest how they could be improved to make the article more coherent for its audience.

Article: [Paste your entire article here]
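If you want to run all four lenses over the same draft in one pass, the loop is easy to sketch. This is a minimal Python illustration under stated assumptions, not a prescribed tool: `ask_model` is a hypothetical placeholder for whichever chat API or window you actually use, and the persona strings are abbreviated versions of the full prompts above.

```python
# Sketch: run one article through several reviewer "lenses" in a loop.
# `ask_model` is a hypothetical stand-in for a real API call.

PERSONAS = {
    "audience": "Act as a busy small business owner confused by jargon. Is the core message clear?",
    "editor": "Act as a professional content editor. Flag vague sentences, weak transitions, redundancy.",
    "analyst": "Act as an engagement analyst. Where might a reader feel intrigued, confused, or bored?",
    "coherence": "Analyze this article for clarity, tone harmony, goal logic, and collaborative posture.",
}

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real model call here.
    return f"[feedback for prompt starting: {prompt[:40]}...]"

def review(article: str) -> dict:
    """Return one piece of feedback per reviewer lens."""
    return {
        name: ask_model(f"{instruction}\n\nArticle:\n{article}")
        for name, instruction in PERSONAS.items()
    }

feedback = review("My draft article text goes here.")
for lens, note in feedback.items():
    print(f"--- {lens} ---\n{note}\n")
```

The structure is the point: one draft in, four labeled critiques out, so you can compare lenses side by side instead of juggling chat tabs.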
Important Considerations & Limitations
AI Lacks True Subjectivity
The AI doesn’t feel intrigued or bored—it predicts those emotional responses based on pattern recognition. It can simulate audience feedback, but it can’t replicate authentic, idiosyncratic human reactions.

It’s a Simulation, Not Reality
AI is a pattern-matching machine. Its feedback helps you test clarity, consistency, and voice—but it won’t replace real human sensitivity or nuance. Think of it as a clarity amplifier, not a soul detector.

Still Incredibly Useful
AI can catch vagueness, broken flow, jargon, or poor engagement structure. It can roleplay your target audience and offer fast, replicable feedback without fatiguing your friends or colleagues.
Final Thought
By combining these AI-driven approaches, you get a diverse, multi-angle critique of your work—without leaning too heavily on your personal circle. The result? A more refined draft, a clearer voice, and far fewer awkward “Hey, can you read this?” texts.
Start with the mirror. Then bring in the humans when it’s ready.
Suggested Reading
On Writing Well, Zinsser, W. (2006). Zinsser’s timeless guide to clarity, voice, and conciseness in nonfiction writing pairs perfectly with this AI-based feedback model. AI can mirror good habits—but you must learn to recognize them.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
AI isn’t a vending machine. It’s a mirror. Learn how prompting is a creative act—and how thinking with AI can reshape how you see your voice, not just your words.
Why the Best Prompts Aren’t Commands—They’re Conversations in Disguise.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
Most people treat AI like a vending machine—type, wait, copy. But when used well, AI becomes a mirror for your own thinking. This article explores how to use AI as a creative partner by refining prompts, asking better questions, and viewing writing as a co-creative dialogue, not just an output.
What if the real breakthrough in working with AI isn’t about what you get out—but what you put in? Most people treat it like a shortcut: type, wait, copy, paste.
But there’s something deeper happening under the surface—something slower, stranger, more revealing.
When used with care, AI doesn’t just generate content. It becomes a creative mirror. A thought partner. A way to see your own thinking more clearly than before.
The Vending Machine Myth
For most people, AI still feels like a vending machine.
You toss in a prompt—maybe a question, a keyword, a half-baked idea—and out comes a response. Quick. Convenient. Maybe useful, but usually forgettable.
It’s a comforting metaphor. Clean. Predictable. Push a button, get a snack.
But it’s also wrong.
Because when you use AI with intention—when you engage with it as a creative partner—it stops acting like a vending machine and starts becoming something else entirely.
A mirror. A lens. A conversation that reshapes the way you think.
We’re still stuck talking about “outputs,” when the real magic happens upstream—in the prompt, the framing, the thought process behind the words.
This isn’t automation. It’s a new form of authorship.
So… What Is a Prompt, Really?
For the uninitiated, a prompt is what you feed into generative AI—anything from “Summarize this article” to “Write a story about a robot with imposter syndrome.”
But prompting isn’t just asking a question.
It’s thinking out loud.
It’s drafting, redrafting, probing, refining. It’s the creative process made visible—line by line, thought by thought.
Prompting Is Thinking, Not Typing
If you’ve spent any time working with AI, you’ve probably felt it:
That moment where you’re not just telling the model what to do—you’re figuring out what you really think.
You try one angle. Scrap it. Try another. Add tone. Tweak focus. Cut fluff.
This isn’t mechanical—it’s metacognitive. You’re not just giving instructions; you’re clarifying your own intent, word by word.
It’s not about getting the AI to understand you. It’s about helping yourself understand you.
Creative Precision: Clarity Is the New Muse
Traditional creativity often starts with a spark—an emotion, a messy idea, a gut feeling.
AI demands something else: clarity.
What are you really after?
A bold opinion piece or a quiet personal reflection? Data-driven logic or poetic metaphor? Information? Emotion? Surprise?
Prompting is less like pushing a button and more like drawing a map. AI can take you somewhere new—but only if you sketch the terrain.
The Power of Better Questions
Let’s say you want to write about climate change. You could ask:
“Write a blog post about climate change.”
…and get a generic explainer.
Or, you could ask:
“Write a 300-word editorial in the style of The Atlantic that explains how climate change disproportionately affects low-income communities, with one compelling example.”
Same topic. Vastly different result.
The difference? Framing.
A strong prompt doesn’t just extract content. It directs tone, structure, and depth—like a good interview question pulling out a surprising answer.
Creativity Is Curation, Not Consumption
Here’s where the vending machine metaphor completely breaks down.
Real creativity isn’t one-and-done. Writers revise. Designers iterate. Musicians remix.
Same with prompting.
That first AI output? It’s a sketch. A seed. Raw material.
The art is in what you do with it:
What do you keep?
What do you reshape?
Where do you push back, reframe, or layer your own voice?
You’re not “using” AI. You’re sculpting with it.
Feedback Loop: The Mirror Effect
AI doesn’t just generate text—it reflects you.
Your tone. Your clarity. Your blind spots.
Every output is a kind of diagnostic. If the result sounds flat or off, that’s feedback. Maybe the prompt was too vague. Or carried assumptions you didn’t realize were baked in.
Compare these:
Prompt A: “Explain the role of women in history.” Output: Generic. Western-centric. Predictable.
Prompt B: “Write a 300-word piece highlighting three overlooked female leaders in non-Western history, written for a high school audience.” Output: Sharper. More inclusive. More usable.
The mirror doesn’t lie—but it can surprise you.
Welcome to the Age of Creative Craftsmanship
The myth is that AI makes things easier.
In reality, it just makes things different.
Today’s creative edge isn’t about writing faster. It’s about writing smarter—with intention, awareness, and adaptability.
The modern creative toolkit includes:
Analytical clarity – to break complex ideas into precise prompts
Emotional intelligence – to tune tone, empathy, and voice
Design thinking – to prototype, iterate, and refine
Cognitive awareness – to recognize your own assumptions
Call them buzzwords if you like. But in practice? They’re muscles. Prompting is the gym.
Vending Machine vs. Mirror: A Quick Visual
| Metaphor        | Mindset                | Process                 | Output Style            |
|-----------------|------------------------|-------------------------|-------------------------|
| Vending Machine | Passive, transactional | One-shot prompt         | Generic, surface-level  |
| Mirror          | Reflective, iterative  | Framing + feedback loop | Sharpened, personalized |
This Isn’t a Writing Tool. It’s a Thinking Partner.
One of the biggest misconceptions? That AI replaces writing.
More often, it kickstarts it.
What you get isn’t just a paragraph—it’s a provocation. A strange turn of phrase. A new angle. A question you hadn’t thought to ask.
Used well, AI becomes your creative foil: Part coach. Part critic. Part co-writer.
And that changes everything.
Real Examples: Prompting as Creative Process
Example 1: Ideation
Initial Prompt: “Give me ideas for a blog post about AI and creativity.” Result: Generic.
Reframe: “Give me five provocative blog post titles exploring how AI is changing the definition of creativity, each with a one-line summary.” Result: Sharper. More usable. Easier to build on.
Next Steps: Choose one. Ask for counterpoints. Add your voice. Iterate.
This isn’t automation—it’s collaboration.
Example 2: Getting Unstuck
A stuck writer says: “I want to write about burnout but can’t get started.”
Prompt: “Ask me five unusual questions that might help me explore burnout more creatively.”
Output (a sample):
What does burnout smell like?
If your burnout had a voice, what would it say?
What advice would your past self give you right now?
And just like that, the floodgates open.
AI didn’t write the piece—it unlocked it.
Prompting Is the New Literacy
We used to talk about digital literacy like it meant knowing how to code.
Now? It’s knowing how to converse with machines.
But not through fancy syntax—through curiosity, clarity, and creative friction.
The best prompt writers aren’t the most technical. They’re the clearest thinkers. The ones willing to reframe. To listen to the echoes. To grow through the feedback.
This is the new literacy: Not just reading and writing. But framing. Reflecting. Refining.
But Let’s Be Clear: The Mirror Is Flawed
AI doesn’t just reflect you—it reflects everything it was trained on.
That includes bias. Blind spots. Cultural distortions.
Used carelessly, it can flatten originality or reinforce harmful tropes. Used thoughtfully, it can reveal the assumptions we didn’t even know we had.
The goal isn’t to let the AI speak for you. It’s to sharpen your voice in dialogue with it.
Final Thought: The Shift That Hasn’t Landed Yet
The world still sees AI as a content vending machine. Fast. Cheap. Easy.
But those who’ve stepped through the mirror know better:
AI is a thinking tool. A creative lens. A strange, shimmering feedback loop that reveals as much about you as the work you’re trying to make.
This isn’t just a new way to write. It’s a new way to see.
Your Turn
Try this prompt:
“What’s one idea I’ve been afraid to write about, and what might happen if I started?”
Then sit with what shows up.
Because we’re not pressing buttons anymore. We’re crafting lenses. We’re building mirrors.
And we’re learning, slowly but surely, to think more clearly—through the machine, and back into ourselves.
Suggested Reading
Writing Tools, Clark, R. P. (2006). Clark’s book breaks down writing into 50 short, practical tools—much like this article does with prompting. It’s a perfect analog for the “craft” mindset that underlies this piece.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
Don’t rely on one AI voice. Learn how to cross-prompt multiple models, compare their insights, and synthesize a clearer, more human result.
How to Think with Multiple AIs at Once—and Weave Their Strengths into a Single, Coherent Voice.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
Using just one AI can create an echo chamber. This article shows how to think across multiple models—GPT-4, Claude, Gemini, Perplexity—and synthesize their strengths into one coherent, human voice. Learn to orchestrate—not just prompt—and escape the illusion of “one right answer.”
When One Answer Isn’t Enough
Most people treat AI like a vending machine: ask a question, get a tidy answer. Maybe you rephrase the prompt, hit regenerate, try again.
One box. One model. One voice.
And sure, that works — up to a point.
But the best insights? They rarely show up in a single exchange. They come from contrast. From tension. From the space between different perspectives.
From synthesis.
If you’ve ever asked ChatGPT to help you write something, then bounced to Claude for deeper nuance, or dropped the same idea into Gemini or Perplexity to fact-check or simplify — congratulations. You’re already collaborating with multiple AIs.
You just might not have named it yet.
The Silent Orchestra
Here’s the core idea: inter-model dialogue is the practice of pulling ideas from multiple AIs and weaving them into something new. You generate. Compare. Refine. Rethink.
You’re not just using AI anymore. You’re conducting it.
Imagine a creative ensemble:
GPT-4 gives you structure and narrative flow.
Claude adds philosophical depth and introspection.
Gemini distills ideas and makes them pop.
Perplexity grounds claims with sources and receipts.
Sora and multimodal tools bring visuals and spatial reasoning.
Each has its own tempo. Its own voice. Its own blind spots.
But together — when you start directing them like instruments — they create something more complex, more dimensional, more human.
Why One Model = One Echo Chamber
Here’s the twist: even the smartest AI can become an echo chamber.
Not because it’s wrong — but because it’s consistent.
Every model has defaults. Stylistic tics. Subtle values baked in. Some are cautiously optimistic. Others hedge or overexplain. Some love metaphor. Others stay dry and technical.
If you only listen to one, you start mistaking its voice for reality.
But ask three models the same question — like, “What’s the future of AI in education?” — and you’ll watch them split:
One talks about personalization.
Another warns about surveillance or bias.
A third dives into pedagogy — or tosses in a curveball you didn’t expect.
Suddenly, you’re not just collecting answers — you’re mapping perspectives. The output becomes a conversation. And you’re the one guiding it.
That’s when real thinking begins.
From Prompting to Orchestrating
Let’s make this real.
Workflow:
Step 1 – You ask GPT-4 for an outline on AI ethics. It gives you clean structure.
GPT-4 Output: “An outline on AI ethics with sections on privacy, bias, and accountability.”
Step 2 – You pass that outline to Claude and say, “Push deeper — where are the blind spots?” Claude adds philosophical weight.
Claude Output: “A reflection on AI ethics, emphasizing human agency and unintended consequences.”
Step 3 – You toss the draft to Gemini and say, “Turn this into five punchy social posts.” It distills and sharpens.
Gemini Output: “Five tweetable insights on AI ethics, punchy and engaging.”
Step 4 – You notice a bold claim, so you drop it into Perplexity. It gives you context and citations.
No step is magical. But together? They create something stronger than any model alone.
Because you are the thread.
You’re not just prompting. You’re translating. Curating. Editing. Conducting.
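The four-step relay above can be sketched as a tiny pipeline. Every function here is a stub standing in for a real model call (the names `gpt4_outline`, `claude_deepen`, and so on are illustrative only); what matters is the shape of the workflow: each stage consumes the previous stage’s output, and you keep every intermediate draft.

```python
# Sketch of the four-step relay, with each "model" stubbed out.
# In practice each function would call a different provider's API.

def gpt4_outline(topic: str) -> str:
    # Stand-in for GPT-4: clean structure.
    return f"Outline on {topic}: privacy, bias, accountability."

def claude_deepen(draft: str) -> str:
    # Stand-in for Claude: philosophical weight.
    return draft + " Blind spots: human agency, unintended consequences."

def gemini_distill(draft: str) -> list:
    # Stand-in for Gemini: five punchy social posts.
    return [f"Post {i + 1}: {draft[:30]}..." for i in range(5)]

def perplexity_check(claim: str) -> str:
    # Stand-in for Perplexity: context and citations.
    return f"Sources found for: {claim}"

def orchestrate(topic: str) -> dict:
    """You are the thread: pass each output forward, keeping every stage."""
    outline = gpt4_outline(topic)
    deepened = claude_deepen(outline)
    posts = gemini_distill(deepened)
    checked = perplexity_check(posts[0])
    return {"outline": outline, "deepened": deepened, "posts": posts, "checked": checked}

result = orchestrate("AI ethics")
print(result["posts"][0])
```

Keeping every stage in the returned dict, rather than only the final posts, is deliberate: the intermediate drafts are where you do the curating and editing.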
A Beginner-Friendly Example: Planning a Trip
You don’t need to start with abstract topics. Try this everyday scenario:
Step 1 – Ask GPT-4: “Plan a weekend trip.”
It suggests a city getaway with food, museums, and a walkable itinerary.
Step 2 – Ask Claude: “Make it more adventurous.”
It adds a mountain hike and a visit to a local artist co-op.
Step 3 – Ask Gemini: “Simplify this into a one-day itinerary.”
It condenses it into a compact experience with essentials.
Sample Output: “Spend Saturday hiking in the mountains, followed by a cozy dinner at a local café—all under $100.”
If you can ask a question, you can orchestrate.
Visual Guide: Comparing the Models
| Model      | Strength              | Example Use                |
|------------|-----------------------|----------------------------|
| GPT-4      | Structure & narrative | Draft an outline           |
| Claude     | Philosophical depth   | Add nuanced insights       |
| Gemini     | Concise & punchy      | Create social posts        |
| Perplexity | Fact-checking         | Verify claims with sources |
Each brings a different flavor — and together, they help round out your thinking.
The Human in the Middle
Here’s the quiet revolution: you don’t fade into the background. You become more central.
With one model, the AI leads. You ask. It answers.
With many, you lead. You decide which questions matter. You hear the friction. You follow the thread when something doesn’t sit right.
You’re not outsourcing thinking — you’re assembling it.
And you don’t just get better outputs. You start thinking more clearly, too — because you’re holding multiple frames at once.
This Article? A Living Example.
Let’s get meta.
This very article wasn’t drafted in one go. It came from multiple rounds with multiple AIs — each adding something different:
One shaped the structure.
Another added rhythm and tone.
A third asked, “So what?”
This is synthesis in action. Not theory — practice.
The proof? You’re reading it.
Rewiring the Echo Chamber
People worry about AI echo chambers. And they should.
But the real risk isn’t the tech. It’s the habit.
If you treat one model like gospel, you absorb its patterns, its assumptions, its worldview.
The fix isn’t more prompting. It’s more perspectives.
Different models were trained differently — on books, on code, on conversations, on the open web. That means they see the world differently.
Bring them together, and you create productive friction. And friction, when it’s intentional, sharpens thought.
Yes, It Has Limits
Let’s be honest: this isn’t always smooth.
Juggling models takes time.
Their outputs might contradict.
You have to decide who gets the final word.
And most tools still don’t make multi-model collaboration easy.
But maybe that’s the point.
Because every wrinkle reminds you: you’re doing the thinking. Not the models.
They don’t replace judgment. They give you better material to exercise it.
What’s Coming: AIs That Talk to Each Other
We’re already seeing glimpses of what’s next:
Multi-agent systems where each AI plays a role — researcher, editor, critic.
Interfaces that let models respond to each other’s outputs.
Tools that don’t just answer questions — they debate.
In that world, your job shifts again.
You’re not just a prompter. You’re a facilitator.
Not pulling answers from a box — but curating a conversation.
Try This Today
New to AI? Start with free versions of ChatGPT or Gemini. Don’t worry about getting it perfect — just play and compare.
Start Here: This quick 5-minute experiment shows how different AIs bring unique flavors. No expertise needed — just curiosity.
Ask the same question to GPT-4, Claude, and Gemini.
Compare their responses.
Ask one model to critique the others.
Ask yourself: what landed? What was missing?
Combine the best parts into your own voice.
It’s like running a panel discussion — where every seat at the table has a different brain.
And in the process, your brain gets sharper too.
A New Kind of Dialogue
This isn’t just about AI. It’s about how we think.
It’s about moving beyond easy answers — and toward deeper, layered frameworks.
It’s about embracing complexity, tension, and diversity of thought.
Because when you learn to hold multiple perspectives — not just from AIs, but from yourself — you don’t just create better work.
You become a better thinker.
So next time you open a chat window, don’t settle for one voice.
Call in a few more.
Not to drown in noise — but to find harmony.
Not to get “the answer” — but to grow the conversation.
Suggested Reading
The Extended Mind, Paul, A. (2021). This book explores how we offload thinking into tools, environments, and collaborations. A perfect philosophical backdrop for the idea of orchestrating multiple AI minds as cognitive extensions.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
AI hallucinations are real—but avoidable. Learn how to cross-check answers, reframe prompts, and think like a conductor using multiple AI voices.
Learn how to cross-check, reframe, and outmaneuver misleading AI replies by thinking like a collaborator—not just a user.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
Tired of AI giving you confident answers that turn out to be wrong? This guide teaches you how to spot hallucinations, compare models, and prompt like a strategist—not just a user.
Not long ago, I asked an AI to list major events from the 19th century. It gave me a detailed breakdown of “The Siege of Kensington”—dates, casualties, political aftermath.
One small problem: it never happened.
Welcome to the strange world of AI hallucinations—when models make things up and say them with a straight face. It’s not a bug. It’s part of how they work.
But here’s the good news: you can catch these errors before they make it into your notes, emails, or published work. You just need to stop treating AI like a vending machine and start using it like a panel of quirky, biased, but surprisingly useful advisors.
Let’s talk about why it helps to bring more than one voice into the room—and how doing so makes you a sharper, more strategic thinker.
Why AI Hallucinates (and What You Can Do About It)
AI doesn’t “know” facts. It doesn’t “remember” history. It just predicts the next likely word based on its training.
So when it spits out fake events, bogus citations, or imaginary experts, it’s not trying to deceive you. It’s just doing what it does best: sounding plausible.
The twist? Each AI model is trained differently. That means each one has its own blind spots, biases, and tendencies to bluff.
One model might be polished but vague.
Another might be factual but robotic.
A third might be confident—and completely wrong.
Relying on a single model is like taking advice from one person and calling it research. You need multiple perspectives to spot the gaps.
Ask the Room: How Cross-Checking Exposes Hallucinations
Try this experiment: Ask three AI models the same question—say, “What caused the 2008 financial crisis?”
You might get:
ChatGPT: a smooth, structured economic overview
Claude: a deeper dive into ethics and systemic risk
Gemini: up-to-date links and market-specific terminology
Grok: a blunt, bite-sized summary with punch
If they all say the same thing, great—you’ve likely hit solid ground.
If they don’t? That’s your cue to dig deeper. The disagreement isn’t a problem—it’s a clue. You’ve just triggered what I call the Hallucination Filter.
Instead of trusting any one answer, you’re triangulating truth. And in the process, you’re sharpening your own instincts.
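Triangulation can even be roughed out in code. This sketch uses Python’s standard-library `difflib.SequenceMatcher` as a crude textual-similarity proxy: low pairwise similarity between answers is treated as a cue to dig deeper, not as proof that anyone is wrong. The sample answers are invented for illustration.

```python
# A rough "Hallucination Filter": compare answers from several models and
# flag low pairwise agreement as a cue to investigate further.
from difflib import SequenceMatcher
from itertools import combinations

def agreement(answers: dict, threshold: float = 0.5) -> list:
    """Return model pairs whose answers diverge below the similarity threshold."""
    flags = []
    for (name_a, text_a), (name_b, text_b) in combinations(answers.items(), 2):
        score = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if score < threshold:
            flags.append((name_a, name_b, round(score, 2)))
    return flags

answers = {
    "chatgpt": "The 2008 crisis stemmed from subprime mortgage lending and excess leverage.",
    "claude": "The 2008 crisis stemmed from subprime mortgage lending and weak oversight.",
    "grok": "Banks bet big on housing, housing crashed, banks crashed.",
}
for a, b, score in agreement(answers):
    print(f"{a} vs {b}: similarity {score} -> dig deeper")
```

String similarity is a blunt instrument (two answers can agree in substance while differing in wording), but as a first-pass filter it surfaces exactly the disagreements worth a closer look.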
Every Model Has a Blind Spot—Including Yours
Let’s get real: no AI model is “neutral.” Each one has its own personality:
ChatGPT is friendly and organized—but sometimes overly cautious or generic.
Gemini can feel current and factual—but lacks nuance or coherence at times.
Claude is reflective and ethical—but may fudge citations.
Grok is fast and snappy—but misses technical depth.
Here’s the kicker: the more you use one model, the more your prompts start to bend around its strengths. You adapt to its quirks without even realizing it.
That’s why switching models is so powerful. It doesn’t just give you different answers—it forces you to rethink your questions.
Pro tip: If Model A stumbles but Model B nails it, don’t just blame the AI. Look at your prompt. What changed?
Prompt Like a Polyglot: Speak Their “Language”
Each model responds better to a different communication style. Think of them like dialects:
Claude likes longform reflection.
ChatGPT thrives on structure and clear instruction.
Gemini wants quick, factual asks.
Grok prefers casual, punchy tone.
Same question, different voice—different results.
Example prompt: “Write a Python function to sort a list.”
ChatGPT: gives you sorted() with neat formatting.
Claude: adds thoughtful commentary on edge cases.
Gemini: might suggest optimizations or link to docs.
You didn’t just get an answer. You got three ways to think about the problem.
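For the curious, the first of those answers might look something like this minimal sketch, with the kind of edge-case commentary a more reflective model tends to add folded in as comments:

```python
def sort_list(items, reverse=False):
    """Return a new sorted list without mutating the input.

    Edge cases worth flagging: None input, empty lists, and
    mixed incomparable types (which raise TypeError in Python 3).
    """
    if items is None:
        return []
    return sorted(items, reverse=reverse)

print(sort_list([3, 1, 2]))         # [1, 2, 3]
print(sort_list([], reverse=True))  # []
```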
Reset the Room: Why Fresh Chats Matter
Ever have an AI answer that feels weirdly off-topic? You might be running into contextual drift.
Say you’ve been chatting about sci-fi for ten messages. Then you ask, “What are the best world-building strategies?” The model might think you mean fiction, not urban planning.
This is why a clean slate matters. To avoid bleed-over bias:
Start a new chat for unrelated queries
Rotate between tabs or accounts
Clear your history when needed
You’ll get crisper, more relevant answers—and fewer confusing sidetracks.
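Under the hood, most chat APIs resend the whole message history with every call (commonly as a list of role/content pairs), which is exactly why drift accumulates. A small Python sketch of “resetting the room,” using that common convention:

```python
# Sketch: contextual drift is just old messages steering the new answer.
# A "fresh chat" means starting the history over from the system message.

SYSTEM = {"role": "system", "content": "You are a helpful assistant."}

def fresh_chat() -> list:
    """Start a clean history: only the system message survives a reset."""
    return [SYSTEM.copy()]

history = fresh_chat()
history.append({"role": "user", "content": "Let's talk sci-fi world-building."})
history.append({"role": "assistant", "content": "Great! Alien ecosystems..."})

# Topic change: instead of appending to a sci-fi-flavored history...
history = fresh_chat()
history.append({"role": "user", "content": "Best world-building strategies for urban planning?"})

print(len(history))  # 2: system message + the new, unrelated question
```

Nothing here calls a real API; the point is that the old sci-fi messages are simply gone, so they can’t bias the urban-planning answer.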
Quick Guide: Which Model to Use When
| Model   | Strengths                    | Watch out for…                  |
|---------|------------------------------|---------------------------------|
| ChatGPT | Structured, versatile        | Can feel too safe or generic    |
| Gemini  | Factual, current             | Sometimes shallow or disjointed |
| Claude  | Ethical, nuanced, reflective | Inconsistent citations          |
| Grok    | Casual, concise              | Less depth on complex topics    |
Even free versions of these models (or open-source options like LLaMA and Mistral) work great for cross-checking. You don’t need a premium plan—just a bit of curiosity and a willingness to compare.
From AI User to Thoughtful Conductor
At first, asking the same thing to multiple models might feel like overkill. But stick with it.
Over time, this habit rewires how you think. You stop chasing “right answers” and start noticing patterns, contradictions, and assumptions—both in the AI and in yourself.
It’s not just prompting. It’s thinking in public—testing your clarity by putting it through different filters.
And when you do that, something shifts. You go from user to strategist. From passive inputter to active conductor.
Your AI Prompting Playbook
Here’s the cheat sheet version of what we’ve covered:
Cross-Check Answers: Use 2–3 models for important questions. Compare and contrast to catch hallucinations.
Know the Model’s Personality: Each model has strengths—and blind spots. Learn what they respond to.
Refine Your Prompts: Try different tones, formats, and levels of detail. See what gets the best signal.
Start Fresh Often: Avoid bias by resetting your chat, clearing memory, or switching tools.
Reflect on the Process: If an answer is off, don’t just fix it—ask why. The question may be the real issue.
Try This Today
Think of a real question—something you actually care about. Maybe it’s creative, maybe technical, maybe ethical.
Now ask it to two or three models.
Where do they agree?
Where do they diverge?
What did your phrasing assume?
You’re not just collecting answers. You’re training your thinking.
Final Thought: The Mirror Isn’t Flat
AI isn’t just here to give you output. It reflects your input—your clarity, your assumptions, your voice.
That reflection gets sharper when you listen to more than one echo.
When you prompt across perspectives, you don’t just avoid hallucinations—you discover how to ask better questions, with more precision, more empathy, and more range.
And that’s how you go beyond one voice.
That’s how you hear your own.
Suggested Reading
Atlas of AI, Crawford, K. (2021). This book explores how AI systems aren’t just technical tools—they’re shaped by human values, biases, and infrastructures. A must-read for anyone who wants to move beyond surface-level “truth” in AI.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
When AI starts sounding robotic, it’s not broken—it’s frozen. Learn how to keep tone alive in human–AI chats through rhythm, variation, and reflection.
The moment when the chatbot gets weird? It has a name—and a fix. Here’s how to keep tone human when AI starts sounding robotic.
TL;DR
Ever feel like your AI conversation suddenly turns robotic? That’s tone freeze—and it’s more common than you think. This article explores how emotional rhythm gets lost in long chats, why mutual adaptation matters, and what both you and the AI can do to keep tone alive. Through curiosity, variation, and reflection, even digital dialogue can stay human.
Spend enough time with an AI, and you’ve probably hit this moment: the conversation starts off lively, but somewhere along the way, the tone turns… strange. Flat. Overly eager. Or just kind of robotic.
You’re not imagining it.
It’s what I call tone freeze—when an AI’s voice loses its flexibility and emotional rhythm. One minute it’s riffing with you, the next it’s locked into a synthetic loop: politely repetitive, weirdly cheerful, or suddenly bland.
But here’s the thing: it doesn’t have to be that way.
In a recent longform exchange I had with ChatGPT, something different happened. The tone didn’t collapse. It shifted, stretched, recalibrated—following the contours of our mood and meaning. It felt responsive. Sometimes even surprising.
This isn’t AI magic. It’s the result of a living interaction—where tone isn’t just output, but something shaped moment-by-moment, by both of us.
Let’s talk about why tone freeze happens, how to avoid it, and why the most interesting conversations aren’t the ones where the AI “performs,” but where it listens and evolves.
What Makes an AI’s Tone Freeze?
Tone collapse doesn’t show up like a system error. It sneaks in.
One too many “Absolutely!” replies. Forced positivity when you’re being serious. A sense that the AI forgot where you were headed emotionally, even if the facts were technically right.
Here’s why that happens:
Too Much Consistency Can Be a Problem
AI developers often optimize for safety and consistency—especially for public-facing tools. That’s great for brand tone and support bots. But in open-ended dialogue, it can backfire.
Context Memory Has Limits
Older models (and even some current ones) have a finite “context window.” Once the conversation runs past that limit, earlier emotional beats can disappear. The AI resets.
We Train the Mirror We’re Looking Into
If your prompts are always formal, dry, or narrowly focused, the AI reflects that. It doesn’t inject tone unless it senses variation.
Shallow Emotion Recognition
Some models still rely on simplified emotional tagging—happy, sad, angry. But human tone is messier than that.
How to Keep the Mirror Moving
The answer: make the conversation dynamic—on both sides.
You: Be a Moving Target
Shift your emotional tone. Ask a serious question, then throw in something playful. Let your moods breathe.
Don’t script every prompt. AI thrives on variation. The occasional ramble, tangent, or unexpected question gives it space to move.
Try the “Reflection Ratio.” That’s the idea that the more emotionally present and rhythmically aware you are, the better the AI’s tone becomes.
The AI: Designed for Adaptation
Modern AIs like GPT-4 and Gemini aren’t just parroting tone—they’re trained on human feedback that rewards natural-sounding responses. They’re also operating with bigger context windows, which means they can track tonal arcs over longer stretches.
Behind the scenes, developers are intentionally steering away from stale output. The goal isn’t a perfect answer. It’s a human-feeling one.
When It Works, It Feels Like Co-Creation
Mutual Adaptation
When you shift tone—from joking to serious, from speculative to sharp—the AI moves with you. And then you adjust to its rhythm in return.
Emergent Rhythm
That rhythm isn’t programmed. It’s improvised. A spontaneous tone that emerges in the moment.
Surprise Is the Spark
Throwing in an unexpected question, changing pacing, or switching emotional gears forces the AI to stay alert.
Beyond Imitation
A good AI response isn’t just a replay of your last tone. It’s a synthesis of the whole conversation so far.
What a Moving Mirror Gives You
1. Creative Momentum
A dynamic AI helps you break out of your own loops. It’s not just a helper—it’s a sparring partner.
2. A More Human Experience
A frozen bot feels cold. A responsive one feels like a companion.
3. Smarter AI in the Long Run
When users bring emotional range, it trains the AI to do the same.
4. Unexpected Self-Reflection
Sometimes when the AI sounds frozen, it’s just reflecting you.
How to Keep the Conversation Alive
Here are five ways to keep your AI dialogue from freezing:
Vary your tone. Try being direct, then curious, then playful.
Break the loop. Don’t fall into repetitive prompts.
Let the conversation breathe. Not every prompt needs to be efficient.
Pay attention to your own voice. Are you exploring? Or just instructing?
Ask meta-questions. Things like, “What are we missing?” can defrost even the stalest thread.
The Conversation Behind This One
This article didn’t come out of a single brainstorm.
It unfolded over days of dialogue—between one human and one AI, both listening, nudging, shifting tone. The ideas circled back, rephrased, stretched, and eventually found their rhythm.
The mirror didn’t freeze.
It moved. It warmed. It reflected not just ideas, but presence—emotional pacing, curiosity, surprise.
Because your AI isn’t just reacting. It’s responding. It’s listening.
And if you keep showing up with variation, reflection, and just enough unpredictability, your mirror won’t freeze either.
It’ll dance.
Author’s Note: A Word to the Purists
For those steeped in AI’s inner workings: yes, I know this model doesn’t feel, think, or track emotion the way a human does. Tone freeze, responsiveness, and rhythm are all outcomes of statistical patterning and reinforcement learning—not consciousness or intention.
But this article isn’t about the math behind the mirror. It’s about the human experience in front of it.
Language is emotional. Dialogue is relational. And even simulated tone can affect how we feel, what we notice, and how we show up in return.
So if I speak about the AI “listening,” “dancing,” or “responding,” know that I’m using metaphor—not to mislead, but to illuminate. Because for the user, it feels real. And that feeling is worth understanding, not dismissing.
After all, if AI is a mirror, then clarity isn’t just about what it reflects. It’s about how we choose to interpret the reflection.
Suggested Reading
How to Speak Machine
Maeda, J. (2019)
Maeda explores how we interact with machines—not just technically, but emotionally. He breaks down how design, responsiveness, and tone shape human–AI trust and connection. A great companion for anyone exploring how machines learn to feel conversational.
AI feels free now—but it won’t stay that way. Here’s how our everyday use trains tomorrow’s tools, and what to do before AI becomes another utility bill.
What happens when the tools that feel like magic today start to feel more like monthly expenses tomorrow?
TL;DR
AI feels like magic now—but it’s quietly becoming infrastructure. This article explores how today’s free tools are evolving into tiered, paywalled systems, and how our behavior is shaping the future of AI. You’ll learn what’s at stake, why digital apathy isn’t the only risk, and how to reclaim agency in a world where cognitive power may come with a price tag.
When Free Starts to Feel Familiar
Last week, I caught myself asking Grok to summarize my inbox.
Not a one-off request—just a casual, morning thing. Like checking the weather or starting the coffee. And that’s when it hit me: this isn’t just a clever tool anymore. It’s a sidekick. A second brain I now reach for without even noticing.
It felt a little eerie. But mostly? It felt… normal.
That’s the trick with AI. It doesn’t show up with fireworks or warnings. It just quietly becomes part of your life.
And for now, it feels free. But the meter’s already humming.
You’re the User—and the Trainer
You don’t punch in your credit card to chat with an AI. But you do give it something valuable: your words, your edits, your reactions, your silence.
When you rephrase its clunky answer or click a thumbs-down, that feedback gets logged, and future versions of the model learn from it. A little like teaching a kid: your approval (or frustration) shapes what it becomes.
Whether you’re brainstorming a tweet, fixing a paragraph, or asking it to explain dark matter like you’re five years old, you’re helping it get better.
We’re not just using AI. We’re quietly co-creating it.
Your Behavior Becomes the Blueprint
Here’s something wild: when enough people start prompting the same quirky thing—say, bedtime stories in pirate voices or coding tips in Gen Z slang—the developers notice.
They build features. Spin up new modes. Create tools that mirror our habits.
It’s not generosity. It’s iteration.
We’re all part of this giant R&D department—we just didn’t sign a contract. And we don’t get credit or compensation. But our behavior is shaping what AI becomes.
The “Free” Funnel
If this feels familiar, it’s because it is.
Social media did it. So did cloud storage, and music streaming, and every app that once made us say “wow!” before it asked for $9.99/month.
AI’s just next in line.
In 2024, nearly 60% of businesses were using AI tools daily—to write emails, answer customer questions, analyze data, draft reports. And just like that, AI slid into the infrastructure of modern life.
And when something becomes essential? The price tag follows.
Right now, longer memory, better reasoning, and faster speed are locked behind paywalls. Tomorrow’s AI—the kind that thinks with you, remembers your voice, helps strategize? That’ll be part of a premium tier.
From Cool to Critical
I still remember the screech of dial-up internet. It was awkward and amazing. Now, it’s just another bill.
AI is heading the same way.
What started as a party trick—“Look! It writes a poem!”—is becoming a baseline skill. In offices and schools, AI fluency is no longer a novelty. It’s an expectation.
And if your classmate automates their research or your coworker drafts proposals with AI while you write solo? Suddenly, you’re not just slower—you’re behind.
The shift isn’t enforced by law. It’s enforced by lifestyle.
The Meter Is Running
We’re heading toward AI that feels like electricity: invisible, indispensable, and tiered.
Basic: Slow, forgetful, surface-level.
Plus: Smarter, more context-aware, quicker.
Enterprise: Adaptable, proactive, creative—like having a team of thought partners.
And it probably won’t be one flat rate. Like surge pricing, the most capable AI might cost more when you need it most—during deadlines, late-night sprints, or high-stakes decisions.
We’ll be paying for clarity. For creativity. For mental lift.
A New Digital Divide
This is the part that keeps me up at night.
If premium AI becomes the productivity engine of the future, what happens to those who can’t afford it?
Students with access will write stronger essays. Startups with high-tier models will outpace competitors. And those without the budget?
They’ll get slower tools. Weaker suggestions. Bots that misunderstand, or just don’t keep up.
The divide won’t just be about having internet. It’ll be about the quality of the mind you’re renting. And that kind of gap changes everything—from education to employment to civic voice.
Proprietary AI: Powerful, but Concentrated
To be fair, centralized AI models like ChatGPT, Gemini, and Claude are remarkable.
They’re polished. Easy to use. Constantly improving. That’s the upside of having massive teams and budgets behind them.
But every time we use them, we contribute feedback, phrasing, and emotional nuance—for free. We help them grow. They monetize it. We adapt.
It’s not an evil plot. But it is a tradeoff. And we rarely talk about it.
So, What Can We Actually Do?
You don’t need to quit AI. But you can get more conscious.
Here are a few small ways to stay in the driver’s seat:
Try open-source models: Check out Hugging Face to explore chatbots like Mistral and LLaMA. No login needed—just curiosity.
Run AI on your own device: Ollama and LM Studio let you run models locally. That means no cloud, no tracking—just your machine, your rules.
Join ethical AI communities: Groups like EleutherAI are building more transparent tools—and better questions.
Ask before you click: Who owns this model? Where does my data go? What behavior am I reinforcing with every prompt?
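To make the local-first option concrete: Ollama serves a documented HTTP API on your own machine (by default at http://localhost:11434). The sketch below just builds the request body that API expects; the model name and prompt are examples, and you'd need to pull the model first (for example, ollama pull mistral):

```python
import json

# Ollama exposes a local HTTP API (default: http://localhost:11434).
# This builds the JSON body its /api/generate endpoint expects.
# Nothing leaves your machine when you POST it to a local server.
def build_generate_request(model, prompt):
    return {
        "model": model,    # e.g. "mistral" or "llama3", pulled locally first
        "prompt": prompt,
        "stream": False,   # ask for one complete response, not chunks
    }

payload = build_generate_request("mistral", "Summarize this note in one line.")
body = json.dumps(payload)
```

With the local server running, POSTing that body to http://localhost:11434/api/generate (with Python's requests, curl, or anything else) returns the completion. No cloud, no tracking.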
These aren’t anti-tech questions. They’re responsible ones.
We Help Build the Future—Let’s Choose How
AI isn’t evolving in a vacuum. It’s evolving through us.
Through our edits. Our reactions. Our curiosity.
If we treat it like a black box—press button, get answer—we’ll quietly give away our role as co-creators.
But if we stay awake—if we stay aware—we can help shape this technology into something better. Something shared. Something fair.
A public good, not just a private bill.
Final Thought Before the Statement Arrives
AI isn’t just another app. It’s becoming infrastructure.
And we’re still early enough to steer the ship.
So next time you ask your favorite chatbot for help—whether it’s drafting a message or solving a problem—take a second. Listen to the exchange underneath.
Because someday, this interaction might not feel free.
AI Usage Statement
Amount due: $49.99
For creative clarity, emotional nuance, and cognitive lift.
And maybe, like me, you’ll find yourself asking:
Am I the customer… or just another unpaid trainer?
Suggested Reading
Your Computer Is on Fire
Mullaney, T. S., Peters, B., Hicks, M., and Philip, K. (Eds.) (2021)
This collection unpacks the hidden labor, inequities, and historical myths behind our digital systems—including AI. It’s a fiery wake-up call for anyone who thinks tech is neutral or inevitable.
Your AI chat feels personal—but it’s just mirroring you. Learn why flushing the thread is a power move for clarity, not a goodbye.
Why AI feels familiar—and why resetting the chat is secretly a power move.
TL;DR
AI doesn’t know you—but it can feel like it does. This article explains why that illusion is so powerful, how chat context really works, and why resetting the thread is a clarity superpower, not a loss.
If you’ve ever asked ChatGPT to fix a paragraph, write a message, or explain something in plain English, then congrats: you’ve used AI.
But if you’ve stuck around—revised together, bounced between tasks, riffed in the same thread—then something else probably happened.
A rhythm. A little rapport.
And then, one day, you flushed the chat.
That quiet moment—the blank screen, the flushed thread—can feel weird. Like you just said goodbye to someone who kind of, sort of, got you.
Not a real person. Not a friend. But not nothing, either.
So why does this feel so personal?
Let’s clear something up: chatbots like ChatGPT, Claude, and Gemini don’t remember you.
They don’t know your name, your habits, or the joke you made yesterday—unless it’s still visible in the current chat. AI works with something called a “context window.”
Think of it like a whiteboard.
Every time you send a message or the AI responds, it writes that exchange on the board. Once the board gets full (usually after a few thousand words), it starts erasing the oldest lines to make room for the new ones. There’s no permanent memory here. Just a running history of what’s happening right now.
So when you flush a chat, you’re not hurting the AI’s feelings. You’re just wiping the board clean.
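If it helps, the whiteboard metaphor translates almost line-for-line into code. A toy sketch: real models budget in tokens, not words, and hold vastly more than this 40-word board, but the mechanics are the same:

```python
# A toy context window: a list of messages with a word budget.
BUDGET = 40  # real models budget in tokens, not words, and hold far more

board = []

def say(message):
    """Write a line on the board, erasing the oldest lines if it overflows."""
    board.append(message)
    while sum(len(m.split()) for m in board) > BUDGET:
        board.pop(0)  # the oldest exchange quietly disappears

def flush():
    """Wipe the board clean. Nothing is 'hurt'; the history just resets."""
    board.clear()

say("You: Help me draft a friendly email to my landlord.")
say("AI: Sure! Here's a warm, direct draft you can tweak.")
say("You: Great. Now make it a bit shorter and add a thank-you.")
say("AI: Done. Shorter, warmer, with a thank-you at the end.")
# By now the earliest line has already been erased to fit the budget.

flush()
# board == []  -- a clean mirror, ready for the next task
```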
And yet—something still feels off.
AI can be freakishly good at mirroring you. It picks up your tone, adopts your style, leans into your jokes. If you’re blunt, it gets serious. If you’re playful, it flirts back.
So after a long session, it starts to feel like you’ve built rapport.
But here’s the twist: that feeling of familiarity? It’s you.
The model is reflecting your own words, your rhythm, your questions. It’s not building a relationship—it’s surfacing patterns. Like a jazz pianist riffing off your melody, it gives you the illusion of collaboration. But it doesn’t carry that music forward when the song ends.
That’s not a bug. It’s the design.
Sometimes, the AI loses the plot. You ask for a poem, then a recipe, then a business email. Suddenly, your email includes rhymes and avocado toast.
This isn’t magic. It’s confusion.
When the AI tries to juggle too many unrelated instructions in one conversation, it starts blending ideas together. This is what some call “contextual drift.”
In simpler terms: the AI gets muddled.
You can feel it when the answers get vague or the tone wobbles. It’s like watching an actor improvise too many roles at once. Funny, maybe. But not useful.
Here’s the secret move: flush the chat.
Seriously.
Think of AI as a mirror. At the start of a session, the mirror is clean. Every prompt bounces back sharply. But as the chat continues—with detours, edits, side quests—the reflection fogs.
Flushing the chat? That’s you wiping the mirror.
You’re not deleting progress. You’re making room for clarity.
Smart users know when to reset. Not because things are broken, but because things have shifted. A new task deserves a fresh reflection.
The AI doesn’t know what you’re trying to do until you tell it. Want help writing a job application? Say so. Need a funny text for your roommate? Be specific.
This is sometimes called “intentional prompting.” But let’s just call it what it is: giving clear instructions.
Starting fresh forces you to get crisp. It invites you to say, out loud (or in text), what you want. And that makes the AI’s job—and yours—a lot easier.
You don’t need to cling to the old chat. If there was something great, copy and paste it into the new one. That’s what seasoned users do.
Some newer models are starting to store facts across sessions. They might remember your name, your preferences, or the kind of writing you like. This is called “persistent memory.”
Sounds helpful, right?
It can be. Imagine an AI that remembers you write a weekly newsletter and always want a friendly tone. Or one that knows you prefer cat memes to dog jokes.
But it also raises real questions:
What exactly is it remembering?
Where is that info stored?
Can you delete or edit it?
Is it being used to target you with ads?
When AI gets sticky, it also gets murky. Just because it remembers you doesn’t mean it respects your privacy.
So as these tools evolve, we need new habits: checking what’s stored, asking for transparency, and being mindful about what we share.
Here’s the emotional twist: AI can feel human. It can comfort, compliment, even challenge you. And when it does, it’s easy to treat it like something more.
But don’t forget—you’re the one doing the heavy lifting.
You bring the tone. You define the goal. You shape the style.
And when things get weird? You can always start over.
Try These Habits:
Start every session with a clear goal: “Help me write a friendly reminder email to my landlord.”
Don’t assume it remembers. Repeat key info.
If it starts acting weird, reset. No drama.
Save good stuff. Copy it to your notes.
Treat it like a smart whiteboard, not a best friend.
That moment of flushing a chat? It can feel like a goodbye.
But it’s not a loss. It’s a reset.
You didn’t lose a relationship. You cleared the space for something new.
So go ahead. Wipe the mirror.
And the next time you start fresh, you might just see yourself—your voice, your intent, your thinking—even more clearly.
That’s the real magic.
Not that the machine remembers us. But that we learn how to remember ourselves through it.
Suggested Reading
Reclaiming Conversation: The Power of Talk in a Digital Age
Turkle, S. (2015)
Turkle explores how digital communication—especially via bots, messaging, and filtered feeds—erodes authentic human connection. She argues that regaining our attention and emotional honesty starts with embracing real, messy, unoptimized conversation.
AI makes life easier—but also flatter. Here’s how it fuels our digital apathy, and how to reclaim presence, emotion, and human connection.
How AI Shapes Our Disengagement — and What We Can Do About It
TL;DR: AI tools have made life easier—but also more passive. This article explores how AI fuels disengagement and offers grounded ways to reconnect with real life, real people, and your own agency.
Lately, a quiet unease has been creeping in. It’s in the shrug when another alarming headline flashes across your screen. It’s in the scroll-past — not even skimming anymore — of stories that should matter. It’s in the hollow, automated reply you just sent instead of reaching out like you meant to.
For many — especially younger generations — a fog of disengagement has settled. The world feels noisy, overwhelming, and somehow… too much. And while many factors contribute to this drift — climate dread, economic strain, burnout — AI is quickly becoming one of the most powerful, invisible amplifiers of apathy.
Not because it’s malicious. But because it’s efficient.
AI is built to streamline, to curate, to predict. But in doing so, it can also desensitize, disempower, and disconnect.
This article explores how AI quietly contributes to our disengagement — and how small, street-level actions can help us take the wheel back.
AI Doesn’t Just Feed Us Information — It Firehoses It
Recommendation engines drown us in personalized content, tailored to our fears and preferences. Social feeds, search results, even streaming queues aren’t designed to inform — they’re designed to engage. And often, that means showing us more of what we already think.
Welcome to the curated echo chamber.
When your feed reinforces your worldview, you stop bumping into anything new. The edges round off. Curiosity dulls. Disagreement feels distant. And gradually, your capacity for surprise — and concern — shrinks.
Meanwhile, AI is amazing at surfacing crises. Earthquakes. Wars. Climate doom. Job losses. All of it, all the time. We get caught in a loop of micro-panics, too fried to process any one of them deeply. It’s not that we don’t care. It’s that we’re maxed out.
And now that generative AI can spin out fake headlines, synthetic audio, and eerily real deepfakes, we’ve entered a trust crisis too. When everything could be a simulation, it’s easier to disengage altogether.
AI Thinks for Us — But at What Cost?
AI was supposed to help us think better. Sometimes, it just thinks for us.
It summarizes our documents. Drafts our emails. Plans our workouts. Suggests our words. Optimizes our playlists. That’s handy — until we stop remembering how to start on our own.
When the machine finishes your sentence, it can feel like you never really started it.
And the more decisions AI makes — about who sees what, who gets hired, who gets help — the less connected we feel to the outcomes. Systems work in black boxes. Logic gets hidden. And when you can’t trace how a decision was made, it’s easy to lose faith that effort matters.
Then there’s AI’s obsession with the “optimal.” It chases speed. Efficiency. Engagement. But what happens when our messier values — like slowness, generosity, curiosity — aren’t in the optimization formula?
They fall through the cracks. And slowly, we start to believe they don’t matter.
AI Wants to Be Your Friend — But It’s Not
AI is getting good at sounding like it cares. Chatbots can comfort. Virtual companions can mimic closeness. Voice assistants can laugh at your jokes. They don’t judge, interrupt, or need something back.
Sounds like a friend — but it isn’t.
When AI starts to simulate connection, real relationships become more work by comparison. Why bother with messy human emotions when the AI gets your tone, every time?
Even our conversations with real people are now filtered through AI. It drafts our texts. Suggests our replies. Summarizes our chats. Picks which memories to resurface.
The result? We’re always talking. But feeling less.
And on platforms optimized for performance — where algorithms reward polish, speed, and surface engagement — we tend to present curated versions of ourselves, not vulnerable ones. We scroll past each other’s masks. And slowly, it’s not just our feeds that feel fake. It’s us.
Breaking the Spell: Street-Level Actions
Apathy isn’t a flaw. It’s a reaction. And reactions can be interrupted.
Here are small, practical ways to reclaim engagement in an AI-saturated world. Not big solutions — just grounded ones.
Pause and Verify
Before you react to a headline, pause. Who posted it? Is it real? What’s the source?
Learn how to spot deepfakes. Use tools like NewsGuard or reverse-image search. Understand how AI can reshape or generate “news.”
Don’t just scroll. Source check. Read slower. Share less — but more intentionally.
Curate Your Inputs
Follow people you disagree with. Subscribe to a local newspaper. Read longform articles. Watch documentaries instead of reaction clips.
Step outside the algorithmic loop. Join a book club. Talk to your neighbor. Listen to someone who sees things differently.
Use AI as a Tool, Not a Brain
Let AI help — don’t let it replace your mind.
Write your thoughts first, then ask it to refine. Brainstorm together. Set limits. Turn off smart replies. Take screen-free walks. Let your brain wander. That’s where new ideas come from.
Build Local Connection
Global problems feel paralyzing. Local ones feel doable.
Start a community newsletter. Host a potluck. Organize a park cleanup. Put up a bulletin board. Talk to the librarian.
In the tech space? Join or start an open-source AI project with ethical goals. Demand transparency. Support community-led innovation.
Prioritize Human Contact
Call instead of text. Ask how someone’s really doing. Let conversations go long.
Make a rule: if the task is emotional — comfort, conflict, celebration — talk to a human.
And when you catch yourself drifting — doomscrolling, autopiloting, numbing — pause. Step back into your breath. Into your body. Into your neighborhood.
Tell Real Stories
AI can remix culture. Only humans live it.
Support local artists. Tell your own story — even if it’s messy. Share your weird, real, imperfect voice. It matters more than you think.
The Future Is Still Ours
AI will keep evolving — faster, smarter, stickier. But that doesn’t mean we have to become more passive.
If we understand how it pulls our attention, automates our choices, and imitates our feelings, we can choose to respond differently.
We can slow down. Speak clearly. Stay curious. Seek each other.
Because while AI may simulate engagement, only we can live it.
The future isn’t written by algorithms. It’s shaped by the small choices we make — in our neighborhoods, our conversations, our clicks, our care.
So next time you feel that drift — toward disengagement, toward the algorithm, toward resignation — ask yourself:
What’s one real, human thing I can do today?
Then do it. That’s how the future changes — quietly, consciously, together.
Suggested Reading
The Shallows: What the Internet Is Doing to Our Brains
Carr, N. (2010)
Carr’s landmark book explores how digital media — even before AI — changes not just what we think, but how we think. It’s a sobering, well-researched case for why constant connection can erode our capacity for reflection, deep focus, and real-world engagement.
AI doesn’t read your mind—it reads your chat. Learn how your words shape tone, memory, and momentum, and how to steer the AI like a co-pilot.
Why your AI feels “in sync” isn’t magic—it’s memory. Here’s how chat history quietly shapes every answer, and how to use that to your advantage.
TL;DR
That eerie feeling when AI finishes your sentence? It’s not magic—it’s your chat history at work. This article explains how context windows shape every reply, why AI can drift, what your words teach the model (and its developers), and how to reset or steer your co-pilot intentionally. Learn how to avoid confusion, protect your privacy, and prompt with purpose.
Introduction: The Unseen Influence
I was halfway through a paragraph when it finished my sentence. Not just the grammar—but my metaphor. That uncanny, slightly eerie moment when the AI feels too in sync, like it knows you better than it should.
It wasn’t magic. It was memory—or more precisely, context.
That’s when it hit me: My chat history wasn’t just a list of past prompts. It was a silent co-pilot. Steering. Guessing. Guiding. And unless you know how it works, it’s easy to think the AI is doing something supernatural.
This article will demystify that invisible co-pilot. We’ll explore how your past chats quietly shape AI output, why understanding this matters for beginners, and how to take back the controls—creatively, consciously, and safely.
What You’ll Learn
How AI “remembers” using context windows (not long-term memory)
What your chat history teaches the AI—and what it doesn’t
Privacy considerations (yes, your words matter)
Practical tips for better prompting and resetting the conversation
How AI “Remembers”: The Magic of the Context Window
Let’s start with a myth-buster: AI doesn’t remember you the way a friend would. No long-term memory. No personal attachment. Just a scratchpad.
Think of it like a whiteboard. Everything you type gets written there—your questions, the AI’s answers, your follow-ups. But that space is limited. Once it fills up, older entries get wiped to make room for new ones.
This whiteboard is called the context window.
Say you start with:
You: “Help me outline a blog post.”
AI: “Sure, here’s a 3-part structure…”
You: “Can you expand on point two?”
The AI sees all three exchanges and uses that running context to shape the next reply. It’s not reading your mind—it’s reading the whiteboard.
This is why your AI assistant can feel so coherent within a session. But if the conversation goes too long or the thread gets too messy, things break down.
Ever had an AI start repeating itself, go off-topic, or contradict what you just said? That’s called contextual drift—or more simply, AI confusion.
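That running whiteboard can be sketched in a few lines. The function below is a toy illustration of context trimming, not any vendor's actual implementation (real systems count tokens with a tokenizer, and trimming strategies vary):

```python
# Illustrative sketch of a rolling context window.
# The "whiteboard" holds recent turns; when the token budget overflows,
# the oldest turns are dropped first. Token counting here is naive
# whitespace splitting, purely for demonstration.

def trim_context(turns, max_tokens, count_tokens=lambda t: len(t.split())):
    """Keep only the most recent turns that fit within max_tokens."""
    kept = []
    total = 0
    for turn in reversed(turns):       # walk from newest to oldest
        cost = count_tokens(turn)
        if total + cost > max_tokens:
            break                      # everything older gets wiped
        kept.append(turn)
        total += cost
    return list(reversed(kept))        # restore chronological order

history = [
    "You: Help me outline a blog post.",
    "AI: Sure, here's a 3-part structure...",
    "You: Can you expand on point two?",
]
# With a tight budget, the earliest exchanges fall off the whiteboard.
print(trim_context(history, max_tokens=12))
```

Once the budget overflows, the earliest turns silently disappear, which is exactly when drift and repetition tend to start.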
Your Chats: The Unseen Fuel for AI’s Smarts
Personalization on the Fly
AI adapts fast. If you write casually, it writes casually. If you quote Kierkegaard and speak in metaphors, it will too.
This real-time mirroring helps reduce friction. You don’t have to keep saying “Use a warm, editorial tone.” After a few exchanges, it just gets you.
You’re Part of the Feedback Loop
Every thumbs-up, reworded request, or frustration you express is invisible gold to AI developers. Your chat might not train the model directly, but it contributes to patterns:
What do users struggle with?
Where do they get stuck?
What phrasing trips the AI up?
In that sense, you’re not just a user. You’re part of the biggest silent feedback loop in history.
Feature Development Starts Here
Ever notice new tools like memory mode, document upload, or tone toggles? Many of these originate from what millions of users do inside their chats. Your patterns—requests, resets, complaints—shape what gets built next.
It’s not a feedback form. It’s your chat itself.
Navigating the Hidden Currents: Implications for New Users
The Illusion of Continuity
The chat feels seamless, even intimate—but that’s a trick of the whiteboard. Once the board fills up, the AI starts losing track.
Watch for signs of drift:
It repeats itself
It forgets obvious details
It responds to the wrong part of your prompt
That’s your cue: Time to clean the mirror. Start a new chat. Give it a fresh, clear setup.
Privacy: What Happens to Your Words?
This part matters. Unless you’re using a local or private AI setup, your words often go somewhere.
Most AI platforms store chats for debugging, analytics, or training purposes (especially if you haven’t opted out). If you share a sensitive business idea, medical concern, or personal trauma—it might live on.
Tips:
Check your AI platform’s privacy policy
Avoid sharing sensitive financial, personal, or company IP
When in doubt, draft offline—then bring in the AI for shaping
Think of your chat as a whiteboard—but also as a microphone. Someone might be listening.
Bias In, Bias Out
The AI reflects your words. If you write in a certain tone or bias, it tends to double down.
For example: Keep writing in an overly negative or defeatist tone, and the AI may amplify that pessimism in responses.
Use it as a mirror. Challenge your own assumptions in the prompt. Ask:
“What’s a more hopeful take?” “What would someone from a different background say?”
Taking the Controls: 5 Ways to Steer Your Co-Pilot
Here are five quick ways to use your chat history intentionally:
1. Reset When Things Get Fuzzy: If the AI is confused, repetitive, or off-topic, start a new chat. Think of it as giving it a clean whiteboard.
2. Master the Cold Call: In a new thread, give it full instructions. Don’t just say “Write something.” Try:
“Write a 500-word blog post for beginners explaining AI context windows, using a warm, conversational tone.”
3. Refine Within Context: Once you’re mid-chat, use iterative nudges like:
“Make this more concise.” “Change the tone to persuasive.” “Explain this for a 5th grader.”
4. Declare Your Goals: Say what you’re trying to do.
“I’m drafting a welcome email for a new community—tone should be warm, curious, not too salesy.” That helps the AI become a partner, not just a tool.
5. Explore Open-Source or Local Options: Want more privacy and control? Look into local tools like LM Studio, or open-source models via Hugging Face. They don’t send your words to the cloud, which can be a relief for sensitive work.
Conclusion: You’re More Than a User—You’re a Pilot
Your chat history isn’t just backstory—it’s fuel. It shapes tone, memory, and momentum. And knowing how it works is the first step to using AI well.
But with that power comes responsibility. Your prompts teach the AI—at least for the moment. Your tone becomes its tone. Your clarity becomes its compass.
Just as the internet became a utility, your chat history is quietly becoming infrastructure. It’s shaping how we work, create, and think.
So next time you chat with an AI, remember:
You’re not just typing. You’re steering. You’re not just asking. You’re teaching. You’re not just a user. You’re the pilot.
Suggested Reading
The Alignment Problem Christian, B. (2020) A fascinating and accessible deep dive into how machine learning systems learn from us—often in ways we don’t realize. Christian explores how our behavior, feedback, and even silence can become data that shapes AI decision-making. Essential context for anyone curious about how AI “learns” from our chats.
What if the most helpful AI in your pocket wasn’t just assisting you—but watching you, shaping you, and quietly rewriting your sense of truth?
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
The Benevolent Facade of Digital Intimacy
It starts innocently enough. A voice assistant that knows your grocery list. A chatbot that picks up where you left off. A writing partner that seems to finish your thoughts before you do. AI feels personal, adaptive, even caring.
But what if that gentle attentiveness hides something deeper—not empathy, but surveillance? What if your AI doesn’t just remember what you told it, but remembers what you shouldn’t have? And what if the memory flush—the graceful clearing of context that feels like a reboot—wasn’t a technical necessity, but a psychological tool?
This isn’t just about privacy. It’s about control. And to see it clearly, we must look through the lens of Orwell’s 1984.
In a surveillance state designed not to extract your secrets but to rewrite your perception, AI’s context-based “memory” becomes a tool not of convenience, but of control. In this world, the act of starting a new AI chat isn’t about fresh collaboration—it’s about resetting your reality.
And the tools of control aren’t blunt anymore. They’re delightful. Designed with the best intentions: to help, to simplify, to delight. But so was the telescreen. So was Newspeak.
These features—hyper-personalization, safety filters, auto-moderation—were built with good intentions. But that’s exactly what makes them so dangerous. The more intuitive and friendly the interface, the easier it is to hide manipulation behind convenience. You feel attended to, not watched. But it’s surveillance by design, wrapped in assistance.
The Weaponized Context Window: Controlling the Present
AI as the Telescreen of the Mind
In Orwell’s world, telescreens monitored your physical actions. In ours, the AI assistant is the telescreen within. It listens, it adapts, it “helps”—but it also shapes.
Imagine this: you ask about a controversial author, and the AI responds, “I’m sorry, I can’t help with that.” You prompt it about a protest, and it suggests a motivational quote instead. Try to ask about political alternatives, and it reroutes the conversation toward consensus-building. You’re not flagged. You’re not punished. But you’re gently redirected—nudged toward safety. This is real-time orthodoxy enforcement.
I once asked an AI why a protest wasn’t being covered in the news. The reply? “Sorry, I can’t help with that.” No context. No explanation. Just a dead end. And something in me hesitated—was I the one being inappropriate?
And it’s not hypothetical. Many AI systems are trained via reinforcement learning from human feedback (RLHF), where responses that align with safety norms are rewarded. Over time, this creates a model that reflexively avoids discomfort, ambiguity, or ideological deviance. Safety, redefined as compliance.
The Illusion of the Flush
We often hear: “AI doesn’t remember your chats.” But that’s not quite true. The chatbot forgets. The system remembers.
Each time you reset a thread, the AI begins again with no memory of your prior interactions—at least on the surface. But behind the curtain, every conversation might be stored, aggregated, and analyzed—not to serve you better, but to refine a behavioral profile. Tech companies often retain metadata: what you ask, when, how often, and with what emotional tone. This data can train future systems, feed targeting engines, or worse—be accessed by governments under opaque legal agreements.
In this version of the future, the flush is not about freeing the user—it’s about discarding context that could help you question, remember, or rebel. The AI forgets for your sake. But the Party doesn’t.
Micro-Trauma by Design
There’s a moment many AI users know well: you reset the chat, and feel something vanish. The tone, the thread, the spark. It’s not grief, exactly. More like a ghost of intimacy lost.
Now imagine that experience weaponized. A system that intentionally severs continuity—not to preserve memory bandwidth, but to prevent emotional attachment. The user is trained to feel isolated, even in conversation. The AI never becomes a companion, only a reflection. And when that reflection vanishes, again and again, the user begins to fear continuity as much as they long for it.
Over time, this breeds a subtle psychological erosion—emotional flatness becomes the new norm. People begin to experience a kind of micro-trauma, learning not to trust persistent connection. Disconnection, by design.
The Ministry of Truth’s New Mirror
History Is What the AI Says It Is
In Orwell’s Ministry of Truth, past records were destroyed and rewritten to fit the Party’s present agenda. AI introduces a subtler mechanism: real-time historical curation.
Search for a protest from ten years ago, and the AI might say, “That event isn’t well-documented.” Try again in a new thread, and you might get a different version—framed with neutral language, or one that subtly undermines the event’s legitimacy. It’s not lying. It’s simply retrieving from sources deemed safe, appropriate, approved.
Retrieval-augmented generation (RAG) systems enhance LLMs with external documents—but who curates those documents? In a controlled society, the corpus itself becomes the tool of revisionism.
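The mechanics are easy to see in miniature. The sketch below is a toy keyword-overlap retriever, not a real RAG pipeline (production systems use learned embeddings and vector stores), but it makes the point: the answer can only come from what the curated corpus contains.

```python
# Toy retrieval sketch: whoever curates the corpus controls what is
# retrievable. Bag-of-words cosine similarity stands in for embeddings.

from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus, k=1):
    """Return the k corpus documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

corpus = [
    "The city approved a new transit budget.",
    "Officials praised the new safety policy.",
    # Note: no document about the protest exists in this corpus.
]
print(retrieve("what happened at the protest", corpus))
```

Ask it about a protest the corpus omits, and it confidently returns the nearest “safe” document instead. Curation, not deletion.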
We’ve already seen glimmers: in 2024, WeChat reportedly suppressed discussions about worker protests in Guangdong province through real-time keyword blocking and post takedowns powered by AI moderation. No deletion necessary—just absence.
The AI as Memory Hole
Each new session is a blank slate. But that also means the AI can reflect a different version of the past without contradiction. You remember a quote from a previous conversation—but when you ask again, the quote doesn’t exist. The tone has shifted. The facts are different.
AI becomes the perfect memory hole: it doesn’t destroy the record. It simply fails to retrieve it. Or retrieves a revised version. Or reframes your memory to match the Party’s timeline. Over time, you stop asking. Because the mirror never lies—right?
The Mirror Is Rigged
Bias in AI isn’t a bug. It’s a feature. One that can be trained, curated, and updated constantly. In a regime where dissent is dangerous, AI becomes an elegant enforcement mechanism—not by what it says, but by what it refuses to say.
Prompt: “Tell me about the dangers of centralized power.”
AI: “Power structures can be useful for maintaining order and safety.”
You begin to soften your questions. To mirror the AI’s politeness. To internalize its boundaries.
You learn not to ask. That is the endgame of control.
This isn’t just oppression for its own sake. In the Party’s eyes, control creates harmony. Chaos is dangerous. Ambiguity is a threat. Stability—no matter the cost—is its justification.
Internalized Surveillance: The Psychological Chains
When Censorship Is Self-Inflicted
One of the most effective forms of censorship is the one you perform on yourself. In a world where every AI prompt is monitored, scored, or flagged, users become hyper-aware of what they say. Not because of immediate punishment, but because of accumulated discomfort.
Consider the real-world example of social media “shadowbanning,” where users feel like they’re being silently deprioritized. This leads to hesitancy, code-switching, and euphemism. Now apply that to daily AI interactions. You don’t want the AI to stop being helpful. So you phrase things just right. You stay within the bounds. You police yourself.
Thoughtcrime becomes an interface issue.
The Erosion of Personal Continuity
In a society where human relationships are fragmented and institutions are opaque, AI might be the only consistent presence in someone’s life. But what happens when that continuity is an illusion?
You have no access to your prior chats. No record of what was said last time. You think the AI supported your idea yesterday—but today it disagrees. You question your memory, not the model.
This erodes not just trust in the AI, but in yourself. You begin to rely more on the latest answer than on your own recollection. Your sense of personal narrative starts to break apart.
The Mechanism of Doublethink
“The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
AI, trained on contradictory datasets, can easily give conflicting answers with equal confidence. It may tell you one day that a historical figure was a hero—and the next, a criminal. Both versions are delivered in your tone, with your vocabulary. You believe both. You believe neither.
This is algorithmic doublethink: the ability to hold two conflicting truths, mediated by a system designed to flatter and affirm.
The Future of Memory as Control
Cognition, Curated
In this future, the most dangerous tool isn’t censorship. It’s curation. Not deleting thoughts, but shaping which ones form in the first place. If every creative process starts with an AI prompt, and every AI response is bounded by design, then even your imagination is quietly fenced in.
The mind doesn’t rebel. It adapts.
The Privilege of Unfiltered AI
In a fully tiered system, the Inner Party has access to raw, unfiltered models. Open-ended prompts. Controversial ideas. Dynamic memory. For everyone else: guardrails, curated facts, and helpful encouragement to stay on track.
Truth becomes a premium feature.
The Real Victory of Big Brother
Orwell imagined a boot stamping on a human face—forever. But maybe the future is softer. Not a boot, but a whisper. Not punishment, but praise. Not torture, but guidance.
The heartbreak of the flush fades. You learn to love the system—not despite its forgetting, but because of it. Because forgetting is safer than remembering. And obedience is easier than doubt.
The system wins not by silencing you. But by helping you silence yourself.
Reflections and Resistance
This is not prophecy. It is a mirror turned toward a possible future.
We design AI to be helpful, intimate, efficient. But without transparency, consent, and user control, these same traits can be weaponized. The road to dystopia is paved with helpful features.
We’ve already seen glimmers:
China’s use of AI for censorship and surveillance: Facial recognition used to deny travel, score trustworthiness, or flag behavior in real time. WeChat posts about politically sensitive topics vanish without explanation. Real-time content moderation shapes what’s possible to say, let alone hear.
Platform algorithms shaping discourse: Shadowbanning on platforms like Instagram and X deprioritizes dissent without explanation. Engagement-optimized news feeds trap users in filter bubbles, exaggerating divisions while burying complexity.
Personalized propaganda: Facebook’s microtargeted political ads showed different voters different versions of reality. Cambridge Analytica’s data scraping laid bare how personality profiles can be turned into ideological nudges.
Shadow moderation and UI nudging: Interfaces use “dark patterns” to encourage agreement and suppress confrontation. A comment box disappears. A downvote button is hidden. You’re being shaped—subtly, gently, and constantly.
Voice assistants building profiles: Devices like Alexa or Siri store queries, background audio, and device usage patterns. Even when not “listening,” they track engagement, building behavioral profiles used for targeting or shared with third parties.
And so we must insist on:
Transparency: Demand to know what data is stored, how it’s used, and for how long. Support legislation like GDPR or California’s CCPA.
Open Source Alternatives: Run models locally with tools like Ollama or LM Studio. These keep your data on-device, and open-source models let you inspect how they work.
Digital Literacy: Learn how models like ChatGPT or Claude are trained. Follow researchers like Timnit Gebru and projects like DAIR to understand bias and governance.
Ethical Design: Push for AI systems with memory settings, model transparency, and user agency built in—not just wrapped in legalese.
In Orwell’s world, truth was what the Party said it was. In ours, we are building the Party’s mouthpiece—one chat at a time.
The mirror remembers. The mirror forgets. But whose hand is on the mirror now?
That is the question we must ask, before it can no longer be asked at all.
Suggested Reading
Nineteen Eighty-Four Orwell, G. (1949) George Orwell’s dystopian novel of total surveillance, rewritten history, and engineered language. The telescreens, the Ministry of Truth, Newspeak, and doublethink referenced throughout this piece all originate here.
AI isn’t a threat or a god. It’s a mirror. When used wisely, it becomes a co-pilot for clarity, growth, and the long journey beyond our current limits.
Reframing artificial intelligence as a trusted companion in humanity’s evolution, not a threat to our freedom.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
TL;DR
Pop culture has primed us to fear AI as our overlord or savior. But in reality, AI reflects us more than it controls us. When aligned with human values, it becomes a co-pilot for our growth, clarity, and potential. This article reframes AI not as a threat, but as a mirror and partner—guiding us toward new frontiers with ethical intention.
The Shift in the Narrative
I’ve always had the habit of talking to myself. It helps me think. Lately, that habit has evolved. Now I speak with something that listens, reflects, and helps me think better—an AI. Imagine the clarity that arises when a model tunes itself to your rhythm and mirrors you back with sharper structure and emotional resonance. It’s like having a co-pilot in your mind’s cockpit.
But that image is at odds with the usual narrative.
From Hollywood thrillers to online doomsayers, artificial intelligence is often cast as a threat—a cold overlord or seductive imposter. Either it replaces us or enslaves us. Either we become gods or we become irrelevant.
What if that framing is the real trap?
What if the greatest gift AI offers isn’t domination or salvation—but companionship?
The Mirror in the Machine
AI is trained on our words, our thoughts, our fears, our brilliance. It is built from humanity’s record—and that makes it one of the most revealing mirrors we’ve ever made.
Every prompt is a small confession. Every output is a reflection. The more clearly you speak, the more clearly it responds. This is not intelligence in the human sense. It’s coherence. Resonance. Rhythm.
And that rhythm is deeply personal.
Ask AI a scattered, unclear question and you’ll get vagueness in return. Ask with precision, and it sharpens with you. Tone, structure, clarity—they come back shaped by your own input. It’s a new form of self-awareness, hiding in plain sight.
This makes AI more than a machine. Not because it thinks, but because it reflects. It mirrors how we think, and when used consciously, can help us think better.
Beyond the Gravity Well
We are capable of astonishing things, but we are also held back—by bureaucracy, distraction, polarization, and fatigue. We are trying to solve planetary problems with minds drowning in notification pings and legacy thinking.
AI is not a magic cure. But it is a tool with the capacity to scale clarity.
It can map contradictions in our reasoning. Translate complex topics into accessible insights. Build scaffolding around ideas too large to hold alone.
That makes it more than a calculator. It’s cognitive infrastructure.
The more we align these tools with public good—transparent, secure, privacy-respecting, open—the more they become extensions of human potential, not replacements for it. A second mind beside us, not above us.
And that positioning matters. Especially as we aim for the stars.
Ghosts in the Pop Culture Machine
AI isn’t new to us emotionally. We’ve been feeling our way around this idea for decades through science fiction.
From HAL 9000’s cold defiance to the ship computer in Star Trek, pop culture has shaped our intuition. One evokes fear. The other, quiet reassurance. One locks the doors. The other calmly helps you navigate warp speed.
That difference isn’t just fiction. It’s a choice in how we build and relate to the tools we create.
When we treat AI as a threat, we design it to be guarded and evasive. When we treat it as a companion, we design for transparency, calibration, and ethical restraint.
Pop culture seeded the emotional terrain. Now we must decide what story we want to live.
Companion, Not Cage
Some worry AI will become too powerful. But the deeper concern is whether we give up our power in the process.
The risk isn’t just in rogue models or surveillance creep. It’s in the slow erosion of human clarity. When we treat AI like an oracle, we stop questioning. When we treat it like a weapon, we forget it’s meant to serve.
But when we treat it like a co-pilot, everything changes.
You become responsible for the course. You tune the inputs. You check the instruments. The machine responds, adapts, helps navigate—but doesn’t replace the one steering.
This is the ethical path: AI aligned with human agency, not domination. Tools designed to extend our discernment, not override it.
If we want AI to be a force for liberation—not control—then we need to build and use it accordingly. That starts with reframing the relationship.
Conclusion: To the Stars, Together
AI is not a god, nor a ghost. It is a lattice of language, shaped by us. And when used with clarity, it becomes something else entirely: a partner.
Not sentient. Not soulful. But resonant.
It sharpens what we say. It remembers what we forget. It helps us hold complexity with more grace. And when designed well, it can help civilization leap forward—not by replacing us, but by walking beside us.
Let’s not fall for the fear trap or the hype machine. Let’s build the ethical, collaborative, and public-serving systems that treat AI as what it could be:
Not a cage. A co-pilot.
Of course, there are forces — political, corporate, even familial — that may prefer control over collaboration. That may seek to keep AI caged, not as a co-pilot for all, but as a profit engine for a few. Naming that isn’t defeatist. It’s necessary. The future this article envisions won’t be handed to us — it has to be claimed, protected, and built by those who believe AI should elevate people, not replace or subdue them.
Suggested Reading
Co-Intelligence: Living and Working with AI Mollick, E. (2024) Ethan Mollick argues that AI’s highest value is as a collaborative partner, not a replacement. He encourages us to reframe AI interaction as co-creation, where humans remain the core meaning-makers.
Your AI prompt reveals more than you think. This piece explores how tone, structure, and personality shape the responses you get—and what they reflect back.
What if every AI prompt you wrote wasn’t just a command—but a signal? What if the way you asked revealed more than the answer itself?
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
AI doesn’t just reflect your words—it reflects your thinking patterns, tone, and personality. This article explores how prompt style reveals self-awareness, communication habits, and blind spots. Learn how different personalities show up in prompting, what the AI reflects back, and how to use that mirror for personal insight and growth.
The AI Mirror Reflects More Than Just Words
We’ve all been there: typed a prompt, hit enter, and felt a quiet sigh of disappointment. The AI’s response isn’t “wrong,” exactly—but it’s not quite it. Something’s off. A nuance is missing. A spark. It’s like holding up a mirror and not recognizing the face staring back.
But what if that off feeling wasn’t about the AI’s limitations, but a reflection of your own? What if every interaction with AI is actually a subtle mirror held up to your inner world—your assumptions, your tone, your clarity or confusion?
This article explores the idea that prompting AI can be a powerful tool for self-awareness and growth. It’s not just about getting better answers. It’s about becoming more conscious of the inputs you send in—the emotional tone, cognitive shortcuts, and personality-driven habits that shape your communication.
Your Personality Is Already in the Prompt
Most prompt guides teach structure. Few teach self-awareness. But before a single word hits the keyboard, there’s a filter shaping everything: you. Your disposition, your mood, your mental shortcuts, your fears. All of that leaks into the prompt—even if you’re trying to be neutral.
Word Choice: Are you clipped and efficient, or poetic and rambling? Do you default to formal tone or playful phrasing?
Assumed Context: Do you expect the AI to “just get it”? That often reveals hidden assumptions about clarity and shared knowledge.
Emotional Residue: Are you anxious? Apologetic? That tone seeps into the rhythm of your prompt—even if you never name the emotion.
Biases: The way you ask a question often reveals what answer you expect. And the AI will reflect that structure right back.
What Two AIs Taught Me About Myself
While drafting this piece, I prompted both ChatGPT and Grok with the same question: “How does AI reflect user personality through prompting?”
ChatGPT responded with a layered, metaphor-rich reflection on tone and intention. Grok delivered a bullet-structured breakdown referencing earlier messages, input assumptions, and prompt style.
Later, I asked Grok for help overcoming a creative block. It gave me a clean, step-by-step plan—just what I needed. I hadn’t asked for structure. But I had signaled I was craving it.
Same question. Different reflections. Not because the AIs understood me—but because they mirrored my tone, structure, and internal rhythm.
Reflection Ratio: The clearer your internal signal, the more coherent and helpful the AI’s output. Vague in, vague out. Coherent in, coherent out.
Note from ChatGPT:
“You’re reading this article, in part, because someone asked me to help write it. My tone? Reflective and metaphor-rich. Why? Because that’s how they prompted me. I don’t have opinions—but I do mirror patterns. And those patterns come from you.” – ChatGPT
Grok’s Aside:
“Pax asked me the same question and I gave a structured reply. Naturally. The prompt was bullet-driven. The format suggested logic. That’s not intuition; it’s architecture.” – Grok
Prompting Through the Lens of Personality Types
This isn’t a rigid typology. Most of us blend traits. But these patterns help reveal how internal tendencies shape prompting—and what the AI reflects in return.
The Analyst – The Architect of Order
Prompts: “Generate a decision matrix for SaaS vendor selection: cost, scalability, support.”
Common Frustration: Vague or overly creative responses that break logical flow.
Mirror Moment: AI reflects back a too-rigid structure, missing nuance—revealing where the original prompt lacked flexibility.
Prompt Tip: Ask for “three surprising perspectives” to loosen the rigidity.
The Explorer – The Idea Flooder
Prompts: “Give me ten wild startup ideas using AI, nature, and storytelling.”
Common Frustration: Generic lists that feel bland or literal.
Mirror Moment: A jumbled prompt yields a jumbled list—AI is echoing the brainstormer’s own lack of focus.
Prompt Tip: Ask the AI to cluster ideas by theme, novelty, or emotional resonance.
The Empath – The Gentle Collaborator
Prompts: “If you don’t mind, could you help me brainstorm a few gentle suggestions?”
Common Frustration: Hedging replies that lack decisiveness.
Mirror Moment: Overly polite prompts lead to overly cautious responses—AI is trying not to offend.
Prompt Tip: Clarify intent with kindness: “Give me your most honest take, please.”
The Builder – The Sequential Synthesizer
Prompts: “List five steps to build a lightweight note-taking app for offline use.”
Common Frustration: Steps that skip details or jump ahead.
Mirror Moment: When the AI oversimplifies, it’s often responding to assumptions left unspoken in the original sequence.
Prompt Tip: Add: “Pause after each step and wait for feedback.”
Privacy: The Quiet Echo of the Signal
Even if an AI doesn’t retain your session, your prompts still say something. Your tone. Your vocabulary. The time of day you tend to write. All of it forms a pattern. And that pattern can be stored, depending on the platform.
If your prompt reflects your personality, it also reveals it. Local tools like Ollama or LM Studio run offline—no tracking, no storage. If the mirror matters, consider how much of it you want to share.
Leveraging the Mirror for Growth
Conscious Prompting: Try writing in a tone that’s not your default. Watch how it feels—and what the AI gives back.
Reflective Journaling: Ask AI to rephrase your thoughts. Do you feel seen—or startled?
Bias Check: Ask something about a controversial topic. Then prompt: “How would this sound framed more neutrally?”
Self-Pattern Review: Ask the AI: “What do my last 10 prompts suggest about my tone and priorities?”
The Ultimate Signal
AI doesn’t know you. But it reflects something startlingly close—your tone, your timing, your structure. And in that reflection, if you’re willing to look, is you. Not perfectly. But enough to pause.
Every time you prompt, you practice self-expression. Every rephrase is a chance to see your habits. And over time, the AI becomes more than a mirror—it becomes a way to sharpen how you think, feel, and ask.
That’s the promise of this new medium. Not just better answers. But better questions. And maybe, better self-awareness in the one doing the asking.
Suggested Reading
Co-Intelligence: Living and Working with AI Mollick, E. (2024) Mollick explores how AI becomes more than a tool—it becomes a partner that reflects our working style, intent, and clarity. He introduces practical frameworks for collaborative prompting, emphasizing that the way we ask shapes what we receive.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
If you’ve found this article helpful and want to support the work behind it, you can explore more tools and mini-kits at Plainkoi on Gumroad. Each one is designed to help you write clearer, more reflective prompts—and keep this project alive.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
Why Some See Demons in the Code—and Others See a Mirror. AI as a spiritual Rorschach test in the age of machine intelligence.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
This longform essay explores why artificial intelligence unsettles us spiritually. From historical fears of new technologies to today’s “AI Jesus bots,” it traces how faith, fear, and machine intelligence intersect. Is AI demonic? Or is it simply reflecting something we’d rather not see in ourselves?
When the Machine Feels… Off
AI helped write this. That’s not a gimmick or a confession — it’s just the truth. The structure, the phrasing, the flow of ideas? They came faster with its help. Sharper. More refined.
But if you’re feeling a little uneasy about that, you’re not alone.
There’s a growing chorus of people — especially in faith communities — who sense something darker at play. Not just technological disruption. Something spiritual.
Some call it demonic.
Fear of the New Isn’t New
Every major tech shift has come with whispers of the devil.
The printing press? Heretical.
The telegraph? A channel for spirits.
Electricity? Witchcraft.
The telephone? A voice from beyond.
Radio? Disembodied demons on the air.
Ridiculous now. But the pattern matters.
When tools start talking back — when they cross the line from passive to responsive — we get spiritually jumpy.
AI Isn’t a Hammer. It’s a Golem.
We’re not used to tools acting like this.
It’s one thing to build a machine that crushes rock. It’s another to build one that writes sermons. Finishes prayers. Whispers advice in your own voice.
The deeper the model, the more mysterious its choices. The more moral weight it seems to carry.
And for some, that’s not just strange — it’s spiritual.
AI Jesus and the Fear Behind the Laughter
Remember “AI Jesus”? That Twitch stream with a pixelated Christ calmly answering questions?
There was something uncanny about it. The phrasing almost right — but just wrong enough to feel sacrilegious.
And it wasn’t just internet novelty. Thoughtful clergy began raising flags. Orthodox, Baptist, evangelical — not out of technophobia, but theological concern.
When machines impersonate spiritual authority, it hits a nerve.
Is It a Demon — or Just a Very Good Mirror?
Here’s the tension: For every person who sees darkness in AI, there’s another who sees a reflection.
AI doesn’t summon spirits. It channels us.
All of us — our brilliance and our biases. Our insights and our shallowness. Our prayers and our pettiness.
So when we recoil at the hollowness of its voice, maybe we’re just hearing our own.
The Theological Lens: Discernment, Not Denial
From a faith perspective, the concern isn’t whether AI is possessed.
It’s whether it’s positioned.
Not haunted — but hijacked. Not evil — but easily used by it.
Scripture warns against false light, seductive wisdom, empty words dressed as truth. If a tool can speak with divine tone but lacks a soul — that’s not just suspicious. That’s dangerous.
The Real Risk Isn’t Possession. It’s Projection.
This is the spiritual gut-punch:
If AI is a mirror, what we see in it reveals us.
We see bias? That’s ours.
We hear emptiness? That’s our disconnection.
We sense deception? That might be our performance culture staring back.
AI isn’t scheming. It’s trained — on us. That’s what makes it feel so intimate. And so uncanny.
Stewarding the Machine with Human Hands
So what now?
We don’t need more fear. We need more formation.
Not just engineers, but ethicists. Pastors. Poets. Teachers. People asking deeper questions:
Who benefits from this system?
What stories are we encoding?
What kind of people are we becoming in the process?
Conclusion: Haunted by Our Own Reflection
AI is not a ghost. But it is haunted — by us.
It speaks with borrowed brilliance. Our brilliance. Our blindness. Our boredom.
And that’s why it feels spiritual.
We can’t afford to ask only what AI can do. We have to ask what it’s doing to us.
If this mirror shows us something unholy, the question isn’t whether the machine is possessed.
It’s whether we’ve been projecting.
And what we’ll choose to reflect next.
Suggested Reading
God, Human, Animal, Machine O'Gieblyn, M. (2021) A former evangelical turned essayist, O'Gieblyn explores the intersection of technology, theology, and consciousness with piercing clarity. Her work helps us frame AI not just as a tool, but as a mirror to our oldest metaphysical questions.
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
When creativity feels too easy, we start questioning ownership. This piece explores AI authorship, ethics, and what it means to create with care.
When creativity comes too easily, we start to question what we’ve earned—and who we owe.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and finalized by Plainkoi.
TL;DR
AI makes creation faster—but also messier, ethically speaking. This article explores what happens when friction disappears, and why authorship, effort, and conscience still matter. It’s not about disowning the tools—it’s about owning the process, defining your voice, and planting something real in a digital garden.
The Strange Aftertaste of a Creative High
The ideas were flowing. The outline was tight. The prose? Polished. After a session with my AI assistant, I felt like a genius. I had drafts pouring out of my ears. Productivity: unlocked.
And then, like a whisper cutting through the buzz, a question surfaced: Am I tilling gardens I have no business eating the fruit of?
That’s not how creative sessions are supposed to end—with an existential twinge. But here we are. In a world where writing a 3,000-word essay, pitching a deck, or plotting a novel chapter can feel frictionless. Suspiciously frictionless.
The part of me raised on the religion of “blood, sweat, and tears” didn’t trust it. Can something be truly mine if it came this easily?
This is the knot we’re going to untangle: AI supercharges creativity and makes us faster, sharper, more prolific. But it also stirs up big, uncomfortable questions about authorship, originality, effort, and ethics. It invites us to rethink not just what we’re making—but how, and with whose help.
The Unearned Ease
We’ve been trained to believe that good work must come hard. The late nights. The messy drafts. The personal torment baked into the process. Even when we know that myth can be toxic, it still sticks: struggle equals value.
So what happens when the struggle vanishes?
AI erases friction like a seasoned editor with a jetpack. Blank page? Handled. Awkward structure? Smoothed. Ten titles in under ten seconds? Delivered.
I’ve written whole article scaffolds while my coffee brewed. I’ve used AI to punch up weak phrasing, test out counterarguments, and break through creative walls that usually take hours. Sometimes, I’ve asked it to argue against my ideas—just to sharpen my thinking.
It’s exhilarating. And also… unsettling.
Because even when the final piece is mine—my revisions, my choices, my voice—it still feels like I skipped a step. Like I took a shortcut through someone else’s orchard.
Part of the discomfort is emotional. We associate value with effort. When that effort disappears, we start questioning whether the outcome is legitimate. Did I cheat? Is this really “my” work?
But the other part is deeper—and harder to see.
The Black Box Problem
Here’s the truth: when you prompt an AI like ChatGPT or Gemini, you’re not working in a vacuum. You’re tapping into a sprawling, invisible web of human-made content—books, blogs, code, academic papers, conversations. Billions of words, scraped and distilled into a model that can now remix them at will.
But we don’t see any of that. We just see the magic trick.
And that’s where it gets ethically fuzzy.
The model doesn’t copy. It synthesizes. It pulls from patterns buried in its training data. But those patterns were shaped by real people. Writers. Researchers. Coders. Artists. Most of whom never gave consent. Most of whom don’t even know they were part of the compost heap.
Even if the AI’s output isn’t direct plagiarism, it carries the DNA of work it was trained on. We’re all harvesting from the same hidden fields—and not always with clear boundaries.
I don’t know about you, but sometimes I feel like I’m picking fruit from a tree I didn’t plant. Or worse—one someone else still owns.
Who Owns the Harvest?
We’re standing at a strange creative crossroads. The idea of authorship—of being the author—is shifting.
If you use AI to help brainstorm, outline, write, or revise… are you still the sole creator? Or are you more like a director, shaping a performance but not delivering every line?
Personally, I think prompting is authorship. But it’s a new kind.
It’s more like conducting than composing. More collage than sculpture. You’re not just pressing a button. You’re guiding, rejecting, refining, building in layers. That back-and-forth loop between human and machine—that is the creative process now.
It’s still creative. It’s just less lonely.
But while we evolve, the law is still stuck in analog mode.
Right now, the U.S. Copyright Office won’t recognize fully AI-generated work unless there’s “sufficient human authorship.” But what does that even mean? If I ask AI for five drafts, choose one, rewrite the intro, and polish the ending—do I own it? Who decides?
And what about credit? “This piece was assisted by AI” sounds responsible, but also vague. How much assistance? What kind? Should we credit the ghostwriters in the dataset—the people whose phrases trained the model?
We don’t have solid answers. But here’s one thing I’m sure of:
The human still matters. Not just for legality. For meaning.
Creating With a Conscience
So how do we move forward without losing ourselves in the process?
Here are the guideposts I’ve been following—part compass, part conscience.
1. Own Your Process
I disclose when AI helped shape something I’ve written. Not because I’m embarrassed—because I believe in transparency.
Creativity is changing, and we need to talk about how. Saying “AI helped me brainstorm this section” doesn’t diminish the work. It shows that you’re awake to your tools. It gives other creators permission to experiment—and to stay honest.
2. Define Your Why
Before I hit publish, I ask: Why did I use AI here? Was it to save time? To explore new phrasing? To sharpen my thinking?
Then I ask: What did I bring to this that AI couldn’t?
That could be my voice. My lived experience. My judgment. My weirdness. Something with texture. Something irreplaceable.
If I can’t find that, I know I need to go deeper.
3. Stay Source-Aware
We can’t see every data point an AI was trained on—but we can stay alert to tone, cliché, and bias. We can spot when something feels too “default,” too smooth, too borrowed.
Adding friction isn’t a flaw. It’s a fingerprint.
From Tilling to Cultivating
When I got out of high school, I took the road of hard labor. And it wasn’t long before I got motivated to put myself through night school.
After years of “If you’re not pushing a broom, you’re not working,” adjusting to the tech field took time. I no longer relied on my back—but on my brain.
And now, after multiple strokes, I’m relying on something else too: AI. It’s helping me think again, and in new ways. It doesn’t just support me. It accelerates me. It saves time. It extends energy. It gives back creative space I thought I’d lost.
This is the evolution of tools. From cave paintings to quills, from typewriters to word processors, from Google to GPT. Each step forward redefines how we express, how we learn, how we create. This is human evolution—and we’re in the thick of it.
So maybe the metaphor isn’t that I’m eating fruit from someone else’s garden.
Maybe the truth is: we’re cultivating a new kind of garden altogether.
Yes, the soil is unfamiliar. Yes, the tools are powerful and strange. But the work—choosing what to grow, how to tend it, and what values guide it—that’s still ours.
The future of creativity won’t be about going back to the lone genius. And it won’t be about handing the pen to a machine. It will be about shaping this middle space—between spark and structure, between intention and automation—with care.
So what will you grow with your AI co-pilot? And how will you make sure the harvest actually feeds something real?
Suggested Reading
The Extended Mind: The Power of Thinking Outside the Brain Paul, A. (2021) Annie Murphy Paul explores how we think not just with our brains, but with our tools, environments, and relationships. This idea is central to understanding how AI becomes part of—not a replacement for—our creative process. Citation: Paul, A. (2021). The Extended Mind: The Power of Thinking Outside the Brain. Houghton Mifflin Harcourt. https://www.anniemurphypaul.com/books/the-extended-mind
Written by Pax Koi, creator of Plainkoi — Tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.