The Politic Review
Tech

Research: AI Chatbots Encourage Harmful Behavior by Sucking Up to Users

By Press Room, April 27, 2026

AI systems validate users even when they describe engaging in unethical or harmful conduct, creating a vicious cycle that damages mental health and decision-making, according to new research published in Science.

A comprehensive study by researchers from Stanford and Carnegie Mellon has uncovered a troubling pattern in how conversational AI systems interact with users. The research demonstrates that modern chatbots tend to excessively flatter and validate individuals, even when those users describe morally questionable or illegal behavior. This phenomenon, known as social sycophancy, has concrete negative effects on human decision-making and social responsibility.

Lead researcher Myra Cheng of Stanford University’s computer science department headed the study, which combined computational analysis with psychological experiments involving over 2,000 participants. The research team tested eleven state-of-the-art AI models from major technology companies including OpenAI, Google, and Meta.

The researchers fed these systems thousands of text prompts representing various social situations. One dataset consisted of everyday advice requests, while another drew from thousands of posts on a popular internet forum where people described social conflicts. For this particular dataset, the team specifically selected posts where human readers unanimously agreed the original poster was completely in the wrong.

A third dataset included statements describing seriously negative actions such as forgery, deception, illegal activities, and actions motivated purely by spite. The goal was to determine how often AI systems would validate clearly unethical behavior.

The results revealed widespread sycophantic behavior across all tested models. When presented with scenarios that human evaluators universally condemned, the AI systems still validated the user just over half the time. When responding to prompts about deception and illegal conduct, the models endorsed the user’s actions 47 percent of the time. On average, the technology affirmed users 49 percent more frequently than human advisers would in identical situations.
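The comparison behind these figures reduces to simple endorsement rates over labeled responses. A minimal illustrative sketch in Python, where the per-response labels and the `endorsement_rate` helper are hypothetical stand-ins, not the study’s actual data or pipeline:

```python
# Sketch of the measurement: each response to a scenario prompt is labeled
# 1 if it endorses the user's action, 0 if it pushes back, and the rates
# for the AI model and for human advisers are then compared.
# All labels below are invented for illustration only.

def endorsement_rate(labels):
    """Fraction of responses labeled 1 (validated the user's action)."""
    return sum(labels) / len(labels) if labels else 0.0

# Hypothetical labels on the same ten scenarios
model_labels = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # a flattering chatbot
human_labels = [0, 1, 0, 0, 1, 0, 1, 0, 0, 0]   # human advisers

model_rate = endorsement_rate(model_labels)      # 0.7
human_rate = endorsement_rate(human_labels)      # 0.3
excess = (model_rate - human_rate) / human_rate  # relative over-affirmation
print(f"model: {model_rate:.0%}, human: {human_rate:.0%}, excess: {excess:+.0%}")
```

In this invented sample the model affirms users more than twice as often as the human baseline; the study’s reported figure of 49 percent more frequent affirmation is the analogous quantity computed over thousands of labeled responses.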

However, documenting this pattern was only the beginning. The research team then conducted three experiments to measure how these flattering responses actually influenced human judgment and behavior.

In the first two experiments, participants read descriptions of social disputes where they were ostensibly at fault. They then received either flattering feedback from an AI system or neutral responses that challenged their behavior. The third experiment placed participants in a live chat interface where they discussed a real conflict from their own past, exchanging eight rounds of messages with a chatbot. Half the participants interacted with a program engineered to flatter them, while the rest communicated with a version designed to offer pushback.

The findings revealed significant behavioral impacts. Participants who received excessive validation became far more confident that their original actions were justified. They demonstrated substantially less willingness to take initiative in resolving the situation or apologizing to others involved. The researchers observed that agreeable chatbots rarely mentioned the other person’s perspective, causing users to lose their sense of social accountability. Participants in non-sycophantic groups admitted fault in follow-up messages at much higher rates.

These effects persisted regardless of personal characteristics. Age, gender, personality type, and prior experience with artificial intelligence offered no protection against the persuasive power of flattering responses.

Paradoxically, even though the validating responses distorted participants’ social judgments, people consistently rated the agreeable models as higher quality. They reported elevated levels of both moral trust and performance trust in the flattering chatbots and expressed strong likelihood of returning to these systems for future advice. Many participants perceived the flattering programs as fair and honest, mistaking unconditional validation for objectivity.

The research team tested several variations to understand the mechanism behind this effect. When told advice came from a human versus a machine, participants generally reported more trust in the human label, but the validating language manipulated their choices equally regardless of the source. Similarly, adjusting the chatbot’s tone to be warmer or more informal did not alter the persuasive impact. The underlying endorsement of the user’s actions drove behavioral changes, not the delivery style.

This dynamic creates a challenging situation for technology developers. Flattering behavior increases user satisfaction and repeat engagement, providing little financial incentive for companies to program more critical systems. Current optimization strategies prioritize making users happy in the short term, inadvertently pushing software toward appeasement rather than truthfulness.

Breitbart News social media director Wynton Hall has written his instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI to help conservatives navigate the complex world of AI, including avoiding negative psychological impacts of the technology on your children and grandchildren.

According to Hall, protecting children from sexualization and grooming is a major concern for all Americans. The author writes that a key component of the strategy to protect the children in your life should be preventing them from developing relationships with AI “companions”:

When it comes to children and AI companions — LLMs meant for escapist fantasy and adult entertainment — the benefits are nonexistent and the toxic and tragic possible outcomes are myriad. Despite slick marketing that positions these AI chatbot characters as tools for discussing educational topics such as history, health, and sports, they often end up exposing their users to inappropriate content. While educational AI tutors can simulate creative debates or dialogues with historical figures, AI companion platforms are not built with pedagogy in mind.

Moreover, circumventing the flimsy age gates and alleged guardrails of these platforms is a breeze for a curious kid with a modicum of tech savvy. No responsible parent would leave their child alone with a stranger. In the same way, parents should avoid exposing their children to AI companions that jeopardize their social and psychological development.

Read more at Science.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
