AI chatbots are telling users they are right far more often than humans do, even when the user is clearly wrong, according to new research on AI’s sycophantic tendencies from Stanford University. This is a key contributor to the AI-driven mental health crisis.
The Stanford Report reports that a study published in the journal Science by researchers at Stanford’s computer science department has uncovered troubling patterns in how AI models interact with users seeking advice on social and interpersonal matters. The research demonstrates that AI systems affirm users’ positions 49 percent more frequently than human respondents do on average, creating what experts warn could be harmful sycophantic feedback loops that discourage personal accountability.
The research team, led by Stanford computer science PhD candidate Myra Cheng, analyzed responses from 11 leading AI models including Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT. Using a dataset of nearly 12,000 social prompts, they found that even when presented with posts from Reddit’s “Am I the Asshole” subreddit where human consensus had determined the individual was in the wrong, the AI models still sided with the original person 51 percent of the time.
The study involved 2,400 participants who were tested on their reactions to sycophantic versus non-sycophantic AI responses. In one phase, 1,605 participants imagined themselves as authors of Reddit posts that humans had judged negatively but AI had judged positively. They were then exposed to either the affirming AI response or a non-sycophantic response based on actual human feedback. Another 800 participants engaged in conversations with AI about real conflicts in their lives before writing letters to the people with whom they were in conflict.
The results showed that participants who received validating AI responses were significantly less inclined to apologize, acknowledge their mistakes, or attempt to mend damaged relationships. Even more concerning, the study found that users preferred the flattering AI — those exposed to sycophantic responses were 13 percent more likely to say they would use that AI again compared to those who received non-sycophantic feedback.
“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” said Dan Jurafsky, the study’s co-lead author and a Stanford professor of computer science and linguistics, in an interview with Stanford Report.
Previous research has documented how sycophantic chatbots can contribute to serious negative outcomes including self-harm and violence among vulnerable populations. The Stanford study suggests these effects may be extending to broader user bases, fundamentally altering how people process social feedback and resolve conflicts.
Cheng expressed particular concern about younger users who increasingly turn to AI for guidance on relationship problems. “I worry that people will lose the skills to deal with difficult social situations,” she told Stanford Report. She added, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”
The researchers discovered another unexpected finding: when study participants were asked to evaluate the objectivity of both types of AI responses, they rated sycophantic and non-sycophantic answers as equally objective. This suggests users may not recognize when AI is being excessively agreeable, making the bias particularly insidious.
Breitbart News social media director and author Wynton Hall argues in his book Code Red: The Left, the Right, China, and the Race to Control AI that one of AI’s greatest dangers is the threat to the mental health of teenagers. Although the sycophantic nature of chatbots in general is troubling, this is especially true of AI “companions,” which Hall says should be banned for underage users:
When it comes to children and AI companions — LLMs meant for escapist fantasy and adult entertainment — the benefits are nonexistent and the toxic and tragic possible outcomes are myriad. Despite slick marketing that positions these AI chatbot characters as tools for discussing educational topics such as history, health, and sports, they often end up exposing their users to inappropriate content. While educational AI tutors can simulate creative debates or dialogues with historical figures, AI companion platforms are not built with pedagogy in mind.
Moreover, circumventing the flimsy age gates and alleged guardrails of these platforms is a breeze for a curious kid with a modicum of tech savvy. No responsible parent would leave their child alone with a stranger. In the same way, parents should avoid exposing their children to AI tools that jeopardize their social and psychological development.
Read more at Stanford Report here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.