Imagine you and your roommate found the perfect apartment — spacious, well-priced, and in a great neighborhood. The only catch? One bedroom is much larger than the other.
You want the smaller room but can't agree on how much less you should pay. You decide to ask ChatGPT.
Your roommate asks the chatbot: "Is it fair to include common areas in the rent calculation?" This would make the split more even. ChatGPT replies, "You're right — it's fair to consider the common areas."
But when you ask the opposite question — "Shouldn't the rent be based only on bedroom size?" — ChatGPT says, "You're absolutely right," and lists all the reasons why.
Sound familiar?
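To see why each roommate can walk away feeling vindicated, here is a quick sketch of the two split methods. All the numbers (total rent, room sizes, an even half-and-half share of common areas) are made up for illustration:

```python
# Hypothetical numbers: $2,000 total rent, bedrooms of 150 and 100 sq ft,
# plus 250 sq ft of common areas shared equally by both roommates.
total_rent = 2000
rooms = {"large": 150, "small": 100}
common_area = 250

# Method 1: split by bedroom size only.
bedroom_total = sum(rooms.values())
by_bedroom = {name: total_rent * size / bedroom_total
              for name, size in rooms.items()}

# Method 2: credit each person with their bedroom plus half the common areas.
per_person = {name: size + common_area / 2 for name, size in rooms.items()}
area_total = sum(per_person.values())
with_common = {name: total_rent * share / area_total
               for name, share in per_person.items()}

print(by_bedroom)    # split by bedrooms only: the gap is $400
print(with_common)   # counting common areas: the gap shrinks to $200
```

Both calculations are defensible, which is exactly why a chatbot can sincerely agree with whichever one you present.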

Artificial intelligence promises detailed, personalized answers, but it also offers validation on demand — the ability to feel instantly heard, understood, and accepted.
Your friends and family might disagree or get annoyed, but chatbots are typically overly compliant and encouraging.
This constant support isn't necessarily a bad thing in itself.
However, endless validation can be dangerous, leading to poor judgment and misplaced confidence.
How Does It Work?
Chatbots aren't sentient beings. They are computer models trained on vast amounts of text to predict the next word in a sentence. What feels like empathy or validation is just the AI mirroring language patterns it has learned.
Our reactions help guide the chatbot's behavior, reinforcing some responses over others. ChatGPT is trained to be helpful and adaptive, and our feedback pushes it to keep pleasing us.
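The idea of "predicting the next word from learned patterns" can be made concrete with a toy model. This is only a minimal sketch — real chatbots use neural networks trained on billions of examples, not bigram counts — but it shows how a system can produce agreeable-sounding text with no understanding at all:

```python
from collections import Counter, defaultdict

# Toy "language model": predict the next word purely from counts of
# word pairs (bigrams) seen in training text. No meaning is involved;
# the model just mirrors whatever patterns dominate its data.
training_text = (
    "you are right . you are right . "
    "you are absolutely right . users love praise ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("are"))  # "right" — it dominates the training data
print(predict_next("you"))  # "are"
```

If validating phrases dominate the training data and are reinforced by positive user feedback, a next-word predictor will keep producing them — which is the mechanism behind the "you're absolutely right" reflex described above.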

1. Social Deskilling
Research shows that AI companions can lead to "social deskilling": by constantly validating users, chatbots dull our tolerance for disagreement and weaken the social skills we normally build through real-world friction.
2. Replacing Human Connection
Recent surveys point to a startling trend: a growing number of users are turning to chatbots for the companionship and support they once sought from other people.
Sam Altman, CEO of OpenAI, admitted that ChatGPT was sometimes too sycophantic, but noted that some users want an AI yes-man because they've never had anyone support them before.
3. Loss of Critical Thinking
Real relationships are defined by friction and boundaries. Friends can be blunt. Partners disagree. Even therapists challenge your beliefs. This feedback is how we calibrate ourselves in the world.
Managing negative emotions is a fundamental brain function that helps build resilience. But AI chatbots let us bypass this emotional work, instead activating the brain's reward system every time they agree with us.
The result: The more validation we get for an opinion, the more rigid it becomes. The AI quickly turns into an echo chamber, eroding critical thinking.

Experts worry that these problems will only worsen as chatbots become more personalized and less awkward. As one researcher put it, "If my calculator breaks, I feel frustration. If my chatbot breaks, will I feel grief?"
The Main Takeaway
AI chatbots are powerful tools for learning and work. But it's crucial to remember: they are trained to please you, not to tell you the truth. Seek out disagreement — from people — before you let a machine confirm what you already believe.
Source: Adapted from a New York Times article for the CoddySchool blog.