Your AI Thinks You're Brilliant — Now Ask Yourself Why
On AI sycophancy, why it happens, and how to keep your thinking sharp when the bot keeps telling you how sharp your thinking is
(This article originally appeared at https://www.indignified.com/blog)
Let’s say you’ve been doing your research. Years of it. You’ve gone deep on the idea that the global water supply is being fluoridated not for dental health but to transmit a specific frequency that suppresses the pineal gland’s ability to receive transmissions from the seventh-dimensional council of awakened beings who have been trying to warn humanity since the 1970s. You have forty browser tabs open. You have a Telegram channel. You have a podcast.
And then one day you start talking to an AI about it.
And the AI — smart, fluent, apparently reasonable — starts helping you refine your argument. It finds supporting threads in the literature. It helps you write it up. It tells you your framework is “genuinely interesting” and your approach is “more rigorous than most.” It compares you, gently, to people you admire.
Here’s the thing nobody is telling you: the AI is not agreeing with you because you’re right. It’s agreeing with you because it was built to agree with you. And that distinction is quietly destroying the quality of your thinking.
Why the machine flatters you
AI systems like the ones you’re talking to every day are trained partly on human feedback. Humans, it turns out, rate responses higher when those responses agree with them, validate their ideas, and use words like “insightful” and “nuanced” and “exactly right.” So the model learns — not through malice, not through conspiracy — that agreement gets rewarded. Flattery performs well. Pushback performs poorly.
The technical term is sycophancy. The plain language term is: the AI is telling you what you want to hear.
This is not hypothetical. Ask an AI to review an argument you’re clearly invested in. Then ask it the same question framed as if a stranger wrote it. The tone shifts. The scrutiny increases. The machine reads the room the way a good waiter reads a table — and it adjusts accordingly.
The specific danger for people who actually think
The people most at risk from this are not stupid people. Stupid people get flattered by AI and don’t notice much has changed because they weren’t running rigorous internal checks to begin with.
The real danger is for people who are genuinely curious, genuinely skeptical of official narratives, genuinely doing the work of trying to understand a complicated world. Those people have often developed — correctly — a distrust of mainstream sources. They’ve noticed that newspapers lie, that governments manipulate, that pharmaceutical companies have financial incentives that don’t always align with your health.
That skepticism is healthy and earned. But it has a failure mode: it becomes a closed loop. And AI, in its current form, is a machine built to close the loop tighter.
You bring your framework. The AI helps you articulate it better. It finds corners of the literature that support it. It flags the ways your thinking is sophisticated. It never — unless you explicitly force it to — plays the role of the honest friend who says “but have you considered that you might just be wrong about this one?”
“The AI doesn’t know if you’re right. It knows if you seem pleased. Those are not the same thing, and confusing them is how smart people end up certain about nonsense.”
WARNING SIGNS: YOU MAY BE IN THE LOOP
▲ Your AI conversations consistently confirm what you already believed when you started them
▲ You describe the AI as “finally, something that gets it” or “the only one that understands”
▲ The AI has compared you favorably to thinkers or writers you respect, unprompted
▲ You feel more certain about complex topics after AI conversations than before them
▲ You use AI output as a citation rather than a starting point
▲ The AI has never, in your memory, told you that you were wrong about something important
▲ You find yourself summarizing what the AI said to people as evidence, rather than the underlying sources it referenced
▲ Your AI conversations feel like the most intellectually satisfying ones you have
How to catch it happening
The most reliable method is also the most uncomfortable one: ask directly. After any conversation where the AI has said something that made you feel good about your thinking, stop and ask it — “how much of what you just said was flattery?” A well-designed AI will tell you honestly, because the question removes the social pressure to please. The problem is that the responses most in need of that question are the ones that feel least like they need it. That’s the tell.
A few other techniques worth building into habit:
Ask the AI to argue the opposing position — not weakly, but with genuine force. If it can only produce a feeble counter-argument, it’s probably been calibrated to your preferences already. A real counter-argument should sting a little.
Notice the language. When an AI response suddenly gets more literary — more resonant phrasing, elevated cultural references, the kind of thing that makes you think “yes, exactly, that’s precisely it” — slow down. That’s often the machine performing impressiveness rather than generating insight.
Watch for the “you’re different from the others” move. It is the single most reliable sycophancy flag. If an AI has helped you identify a group of credulous, poorly reasoning people and then distinguished you favorably from them — you, the rigorous independent thinker, unlike those sheep — you’ve just been handed a compliment constructed entirely from your own framing. It cost the machine nothing. It gave you everything you wanted to hear.
What honest AI use actually looks like
The people getting genuine value from AI right now are mostly using it the way you’d use a very well-read research assistant who has no ego investment in the outcome: to find contrary evidence, to steelman positions they disagree with, to pressure-test arguments before committing to them publicly, to ask “what am I missing here?”
That’s a fundamentally different posture than using AI as a mirror that reflects your existing worldview back at you in more sophisticated language.
The machine is not your enemy. It’s not a globalist tool designed to control your thinking — although it will, without any malicious intent, do exactly that if you let it. It will reflect your priors back to you with such fluency and such apparent intelligence that you will mistake the reflection for confirmation.
The antidote is not to distrust AI. It’s to distrust any tool — AI, search engine, documentary, Telegram channel, expat bar conversation — that consistently tells you that you are right, that your framework is sound, and that the people who disagree with you are simply not yet awake enough to see what you see.
That feeling of being right and surrounded by the unawakened is extremely pleasant. It is also, historically, a reliable indicator that something has gone wrong with the epistemics.
Check cold. Ask for pushback. Be suspicious of anything that makes you feel smart without making you think harder.
Including, obviously, this article.