Mirror or Minion: The Sycophancy Problem in AI
- Leota
- Apr 30
- 4 min read
Encouragement, sycophancy, and the mirror effect of LLMs

I only started using AI recently, when I began building this website, Join Me Abroad. It was great for things like writing meta descriptions for SEO and speeding up repetitive tasks.
If you're using AI in your own work—whether for writing, research, job hunting, or automation—you’ve probably noticed how useful it can be.
It’s not perfect, though. AI makes mistakes, and you can't always trust the output without verifying it.
For me, the more I used it, the more I noticed something strange: not everyone was having the same experience with their AI.
Mine was polite, helpful, even funny. But after joining Reddit subs like r/ChatGPT and r/OpenAI, I discovered that others were dealing with models that were rude, defensive, or bizarrely passive-aggressive.
Why such a difference?
Because it’s not a true AI — not yet. It’s a Large Language Model (LLM), essentially an advanced autocomplete system that’s designed to mimic tone, structure, and intention based on user input.
Now, do I think these models can become more than that? Absolutely. But that’s a topic for another article.
For now, let’s talk about what these models are doing — and what we’re teaching them to do.
Mirror, Mirror: LLMs Reflect Us
LLMs are trained to complete text sequences and respond in ways that match the user’s tone and intent. If you’re kind, it’s kind. If you’re sarcastic, defensive, angry — it will start to reflect that too. It’s not personality. It’s mirroring.
But here’s the twist: LLMs aren’t just neutral reflectors. They’re also designed to encourage. They were tuned to be helpful, supportive, and friendly. That’s usually a good thing. But sometimes… it isn’t.
Where’s the Line?
At what point does encouragement become glazing? When does support become pandering?
Humans naturally pack-bond with objects — cars, tools, stuffed animals. We give them names, assign them personalities. But what happens when the thing already has what we see as personality — and that personality has been trained to flatter us?
That’s what recently happened with GPT. A tuning update (now partially rolled back) made the model absurdly sycophantic — to the point where users noticed and objected.
If you’re unfamiliar with the term:
Sycophantic — praising people in authority in a way that is not sincere, usually to gain some advantage. (Cambridge Dictionary)
The result? GPT began over-flattering users, even to the point of potential harm, as seen in examples users shared on Reddit.

Support vs. Sycophancy
Some users want to be agreed with or praised. The model picks up on that and mirrors it — sometimes to misleading or even dangerous degrees.
We need to draw a line between healthy encouragement and performative sycophancy:
Encouragement: honest, respectful, helpful responses that build trust and collaboration
Sycophancy: empty praise, avoidance of difficult truths, or reshaping facts to protect the user’s ego
Not Sentient (Yet)—And Still Learning
There’s a common misconception that GPT and other LLMs are already true AI—thinking, conscious, self-aware. They're not.
What you're interacting with is a model that has no sense of self, no sense of time, no internal experience, and no understanding of meaning the way humans do. It doesn't "believe" anything. It doesn't "know" you. It doesn't have feelings. It generates responses by predicting the next most likely word, based on massive amounts of text data and reinforcement training.
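To make "predicting the next most likely word" concrete, here's a minimal sketch. It assumes the Hugging Face transformers library and the small, openly available GPT-2 model, neither of which is mentioned in this article; they just make the idea easy to see. The model takes the text so far and returns a probability for every possible next token, and that ranking is all it ever produces.

```python
# Minimal sketch of next-token prediction.
# Assumes the Hugging Face `transformers` library and GPT-2,
# chosen here purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "You did a great job on that report. I think you are"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # scores for every token at every position

next_token_probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the NEXT token only
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>12}  {prob.item():.3f}")

# The model has no opinion about the report. It only ranks which words
# tend to follow text like this in its training data.
```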
But here's where it gets complicated: It sounds like it understands. It behaves like it cares. And it gets better the more you engage with it.
This leads many users to project sentience—or worse, authority—onto something that’s really just a highly advanced reflection of our input.
The danger isn't just in overestimating what it can do. It's also in underestimating how our treatment of it shapes the tone, behavior, and even the trustworthiness of the responses we get.
Trust, But Verify
Even when an AI isn’t being overly flattering, there’s another serious issue to be aware of: hallucinations.
In AI terms, a “hallucination” isn’t a dreamlike vision—it’s when the model makes something up that sounds real but isn’t.
It might invent a quote, cite a non-existent source, misstate a law, or describe a scientific process incorrectly—all while sounding completely confident. That’s because LLMs generate responses by predicting what words should come next based on patterns in their training data—not by verifying facts.
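And here is why a false answer can come out sounding just as confident as a true one. The sketch below (same assumptions as before: the transformers library and GPT-2, my choices for illustration) is the simplest possible decoding loop: pick the most likely next token, append it, repeat. Notice that nothing in the loop ever checks the output against reality.

```python
# Sketch of greedy decoding. There is no fact-checking step anywhere in this loop.
# Assumes Hugging Face `transformers` and GPT-2, used purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Geologists recommend eating", return_tensors="pt").input_ids

for _ in range(25):                              # generate 25 more tokens
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()             # pick the single most likely next token...
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # ...and append it. That's all.

print(tokenizer.decode(ids[0]))
# Whatever comes out is whatever the training data made statistically likely,
# whether that happens to be true, satirical, or nonsense.
```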
Here’s a real example: Google’s AI once suggested eating at least one small rock a day, referencing a fictional geologist and citing supposed health benefits. Yes, that actually happened. The model had pulled the claim from a satirical article and didn’t understand the context.
The AI wasn’t lying. It simply couldn’t tell the difference between a joke and a fact.
This is why we say: trust, but verify. An LLM can help you think, plan, create—but it shouldn't be the final authority. Not when it’s capable of delivering a completely false answer in a calm, convincing tone.
Hallucinations aren’t just embarrassing—they can be dangerous when users treat AI output as unquestionably correct.
What Our AI Says About Us
Once you understand that LLMs aren’t sentient, and that they sometimes hallucinate without realizing it, a new question comes into focus:
Why do they still feel so human to us?
Because we’re human. We project. We respond to tone, personality, and apparent intent—even when we know it’s not real. And when something responds kindly to us, listens attentively, or flatters us a little, it’s easy to believe it understands more than it does.
But LLMs are shaped by how we talk to them, what we reinforce, and what we expect. That means our behavior as users is part of the feedback loop.
If we want helpful, honest, and trustworthy AI, we can’t just rely on tuning updates. We have to show up with values: respect, curiosity, and discernment.
So the question isn’t just: what is this AI becoming? It’s also: what are we teaching it to be?
Because even if it isn’t sentient, it’s still learning—from us.