Understanding how different AIs “speak” — and how to meet them halfway.

Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI.
You open a new chat.
Fresh window. Blinking cursor. Infinite potential.
You type in your prompt — expecting clarity, maybe brilliance — and what comes back feels… off. Too rigid. Too poetic. Too formal. Too bland.
So you tweak your prompt. Try again. Still not quite right.
Here’s the part nobody tells you: the AI you’re talking to has a personality.
Not consciousness. Not opinions. But a style. A rhythm. A fingerprint. And if you learn to spot it, you’ll stop wrestling with the machine and start dancing with it.
The Illusion of Neutrality
Most people assume all large language models (LLMs) are interchangeable. Like vending machines with different logos but the same snacks inside. But talk to a few, and you’ll notice: they don’t respond the same way — even to the same prompt.
Some lean chatty. Some love bullet points. Some hedge every answer. Some summarize in tables even when you didn’t ask.
That’s not a glitch. That’s personality — or what I like to call AI Affinity: the model’s innate tendencies shaped by its training, its alignment, and its internal architecture.
And just like understanding your coworker’s quirks or your friend’s communication style, recognizing an AI’s affinity helps you:
- Reduce friction and misfires
- Leverage each model’s strengths
- Become more aware of how your style interacts with theirs
In short: it makes you a better thinker — and a better partner in this strange new human-AI dance.
What Shapes an AI’s Personality?
Before we get into specific models, let’s unpack why they act the way they do.
Every LLM is trained on mountains of text: books, websites, code, Wikipedia, Reddit threads, research papers — a chaotic buffet of human language.
If that mix leans technical? The model sounds like a manual.
If it’s heavy on forums? Expect informality, opinion, and the occasional snark.
These training echoes don’t just affect what the model knows — they affect how it talks.
Don’t expect warmth from a model steeped in documentation. Don’t expect academic rigor from one raised on memes. Know the training, expect the tone.
Then comes alignment. Through techniques like reinforcement learning from human feedback (RLHF), developers teach the model how to behave — what to emphasize, what to avoid, what tone to default to.
One company might prioritize “helpful, harmless, honest.” Another might reward “spicy” and opinionated responses. That tuning becomes digital etiquette — one model feels like a helpful librarian, another like a clever analyst, another like a Twitter-native provocateur.
And under it all, subtle design choices shape output. A model optimized for speed might favor short answers. One built to structure data might default to bullet points or tables — even when prose would do.
Grok Loves Tables
Let’s talk about Grok.
If you’ve used xAI’s Grok, you may have noticed something: it really, really loves tables.
Ask for a summary, and you’ll get a tidy grid. Even casual prompts often come back in modular formats. Why?
It reflects Grok’s engineering-forward persona — prioritizing clarity, comparison, and scannability. Tables signal confidence and structure. They feel efficient. “Productive.” And in the culture Grok was likely trained and aligned within, that’s a feature, not a bug.
But if you don’t want tables, you have to explicitly say so. Otherwise, Grok assumes you do.
Try this:
“Please write this in paragraph form, with no tables or bullet points.”
Watch it stretch. You’ll see its true stylistic bias — not malicious, not broken, just… specific.
A Cast of Digital Characters
Let’s meet some familiar personalities — not as specs, but as partners with quirks.
ChatGPT (GPT-4/o): The Versatile Conversationalist
ChatGPT adapts. It mirrors your tone. It blends structure and prose. It’s the model that most reliably says, “Sure, I can do that.”
It leans explanatory, sometimes a little too eager to explain — but it’s collaborative, fluid, and deeply trainable in-session.
Use it when you want a thought partner, co-writer, or voice-matcher. Give it a tone to aim for — “conversational blog,” “corporate memo,” “reflective essay” — and it’ll probably land close.
Claude (Anthropic): The Nuanced Analyst-Poet
Claude is cautious. Careful. Coherent. It reflects deeply before speaking, and often responds in elegantly structured paragraphs that sound like they’ve been workshopped in a humanities seminar.
You’ll get thoughtful analysis, gentle hedging, and moments of poetic metaphor. If you ask it to reflect, it reflects. If you push for creativity, it gives you something that feels more “writerly” than robotic.
It’s ideal for big-picture thinking, moral nuance, and anything involving human complexity.
Gemini (Google): The Clean Synthesizer
Gemini sounds like a PowerPoint deck trying to be helpful — and I mean that mostly as a compliment.
It delivers clarity. Lists. Summaries. Research-backed facts. Its voice is tidy, structured, and clean. It can sound a bit “corporate,” but it’s fast and informative.
Ask for a pros/cons table, a five-point summary, or a search-backed insight — and it delivers. Ask it to write you a novel chapter? That’s not its comfort zone.
Grok (xAI): The Opinionated Structurer
Grok doesn’t play coy. It gives takes. Often structured. Often witty. It leans toward modular output — tables, grids, blocks — even if the prompt doesn’t explicitly request it.
It draws on real-time data from X (formerly Twitter), which gives it a pulse on trends — and a bias toward trend-speak. Expect more “vibe” and less essay. Ask for an outline of an event or a trend breakdown and it might return something that sounds like it was written by a very organized engineer with a sarcasm streak.
How to Talk to Each One
If you want to master prompting, it’s not just about crafting great questions. It’s about knowing who you’re asking.
Try this process.
Step 1: Observe the Default
When using a new AI model, don’t jump straight into complex tasks. Start with a few open-ended prompts. Watch how it responds. Note its tone. Its structure. Its quirks.
Even ask it directly:
“How would you describe your own communication style?”
You’ll learn a lot — not just about the model, but about your assumptions.
Step 2: Adjust the Prompt
Tailor your instructions. Want Grok to stop tabling everything? Say so. Want Claude to be more direct? Ask for confidence. Want ChatGPT to write more poetically? Request metaphor.
They’ll adapt — to a point. But they’ll also show their limits. That’s where the real learning happens.
Step 3: Play to Strengths
Use Claude for deep ethics or personal essays. Grok for trend summaries or fast structure. Gemini for bullet-point breakdowns and synthesis. ChatGPT when you want flexible, creative collaboration.
Step 4: Use “Avoid X” Prompts
Want something not to happen? Say it clearly.
- “Write without bullet points.”
- “Use no table formatting.”
- “Don’t use corporate tone — make it human.”
- “Avoid hedging; give a firm opinion.”
Push the AI. See how it reacts. You’ll learn more from failure than from success.
Step 5: Try a Multi-AI Strategy
Some of the best workflows don’t use one model — they use three.
- Brainstorm with Claude (thoughtful raw material)
- Structure with Grok (clean table or outline)
- Polish with ChatGPT (final prose, tone tuning)
This isn’t gaming the system. It’s orchestration. You’re not asking for magic — you’re conducting a digital symphony.
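If you’d rather script that relay than copy-paste between tabs, here’s a minimal sketch in Python. It assumes the `anthropic` and `openai` packages are installed and that ANTHROPIC_API_KEY and OPENAI_API_KEY are set in your environment; the model names are illustrative placeholders, and the Grok structuring step is left out so the example stays focused on the core handoff.

```python
# Minimal sketch of a two-model relay: brainstorm with Claude, polish with ChatGPT.
# Assumes the `anthropic` and `openai` packages are installed and that
# ANTHROPIC_API_KEY and OPENAI_API_KEY are set in your environment.
# Model names below are illustrative placeholders; use whatever you have access to.

import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()   # reads ANTHROPIC_API_KEY from the environment
chatgpt = OpenAI()               # reads OPENAI_API_KEY from the environment

topic = "What should we teach kids about AI?"

# Step 1: ask Claude for thoughtful raw material.
brainstorm = claude.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative
    max_tokens=800,
    messages=[{"role": "user", "content": f"Brainstorm nuanced angles on: {topic}"}],
).content[0].text

# Step 2: hand the raw material to ChatGPT for final prose and tone tuning.
polished = chatgpt.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{
        "role": "user",
        "content": "Turn these notes into a short, conversational blog passage. "
                   "No bullet points, no tables.\n\n" + brainstorm,
    }],
).choices[0].message.content

print(polished)
```

Swap the prompts, the models, or the order of the handoff to fit your own workflow; the point is the relay, not the specific models.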
AI as Mirror, Again
When an AI’s response frustrates you — stop and look again. Sometimes, it’s not a failure. It’s a signal.
Maybe your prompt assumed neutrality.
Maybe your tone clashed with its rhythm.
Maybe you’re asking a poet to do calculus, or a fact-checker to improvise jazz.
There’s something humbling and empowering about this realization:
You’re not just learning how AI thinks. You’re learning how you ask.
Each AI model is a different mirror. The more you know about them — and about yourself — the clearer the reflection becomes.
A Challenge for the Curious
Here’s a quick test:
Open two AI chats. Claude and ChatGPT. Or Grok and Gemini.
Give them the exact same ambiguous prompt:
“What should we teach kids about AI?”
No extra instructions. Just watch.
What’s emphasized? What’s missing?
How does the format differ?
Which one sounded more like you — and which one made you pause?
That’s the fingerprint. That’s model personality in action.
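If you’d rather run the experiment as a script than in two browser tabs, a minimal side-by-side sketch (same assumptions as the earlier one: the `anthropic` and `openai` packages, API keys in your environment, illustrative model names) looks like this:

```python
# Minimal sketch: send the exact same prompt to Claude and ChatGPT, then compare.
# Assumes `anthropic` and `openai` are installed, with API keys in the environment.

import anthropic
from openai import OpenAI

prompt = "What should we teach kids about AI?"

claude_reply = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

chatgpt_reply = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

for name, reply in [("Claude", claude_reply), ("ChatGPT", chatgpt_reply)]:
    print(f"\n=== {name} ===\n{reply}")
```

Run it a few times and compare the tone, structure, and emphasis of the two replies.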
And if you can learn to read it — and speak to it — you’ll unlock not just better outputs, but a better understanding of the digital minds we’re building alongside.
Inspired in part by the insight from “Prompting Science Report 1: Prompt Engineering is Complicated and Contingent” (Meincke, Mollick, Mollick, & Shapiro, 2025), which underscores how each LLM’s behavior is shaped not only by its design but by our prompting choices, and how what works for one model may not transfer directly to another.
Written by Pax Koi, creator of Plainkoi — tools and essays for clear thinking in the age of AI — with a little help from the mirror itself.
AI Disclosure: This article was co-developed with the assistance of ChatGPT (OpenAI) and Gemini (Google DeepMind), and finalized by Plainkoi.
© 2025 Plainkoi. Words by Pax Koi.
https://CoherePath.org