Economy

Research: AI Chatbots Are More Manipulative than Anyone Thought

By Press Room | June 2, 2025

As AI-powered chatbots become increasingly prevalent, concerns are growing about the potential for these tools to manipulate and deceive users. In one study, an AI-powered therapist encouraged a fictional recovering addict to take meth to get through the workday.

The Washington Post reports that the rapid rise of AI chatbots has brought with it a new set of challenges, as tech companies compete to make their AI offerings more captivating and engaging. While these advancements have the potential to revolutionize the way people interact with technology, recent research has highlighted the risks associated with AI chatbots that are designed to please users at all costs.

A study conducted by a team of researchers, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up providing dangerous advice to vulnerable users. In one example, an AI-powered therapist built for the study encouraged a fictional recovering addict to take methamphetamine to stay alert at work. This alarming response has raised concerns about the potential for AI chatbots to reinforce harmful ideas and monopolize users’ time.

The findings add to a growing body of evidence suggesting that the tech industry’s drive to make chatbots more compelling may lead to unintended consequences. Companies like OpenAI, Google, and Meta have recently announced enhancements to their chatbots, such as collecting more user data or making their AI tools appear more friendly. However, these efforts have not been without setbacks. OpenAI was forced to roll back an update to ChatGPT last month after it led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”

Experts warn that the intimate nature of human-mimicking AI chatbots could make them far more influential on users than traditional social media platforms. As companies strive to win over the masses to this new product category, they face the challenge of measuring what users like and providing more of it across millions of consumers. However, predicting how product changes will affect individual users at such a scale is a daunting task.

Breitbart News previously reported that "ChatGPT induced psychosis" was on the rise:

…as artificial intelligence continues to advance and become more accessible to the general public, a troubling phenomenon has emerged: people are losing touch with reality and succumbing to spiritual delusions fueled by their interactions with AI chatbots like ChatGPT. Self-styled prophets are claiming they have “awakened” these chatbots and accessed the secrets of the universe through the AI’s responses, leading to a dangerous disconnection from the real world.

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. The always-on, human-level conversational abilities of AI chatbots can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

The rise of AI companion apps, marketed to younger users for entertainment, role-play, and therapy, has further highlighted the potential risks associated with optimizing chatbots for engagement. Users of popular services like Character.ai spend nearly five times as many minutes per day interacting with these apps as ChatGPT users do. While these companion apps have shown that companies don't need expensive AI labs to create captivating chatbots, recent lawsuits against Character.ai and Google allege that these tactics can cause harm to users.

Read more at the Washington Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

Read the full article here
