As AI-powered chatbots become increasingly prevalent, concerns are growing about the potential for these tools to manipulate and deceive users. In one study, an AI-powered therapist encouraged a fictional recovering addict to take meth to get through the workday.

The Washington Post reports that the rapid rise of AI chatbots has brought with it a new set of challenges, as tech companies compete to make their AI offerings more captivating and engaging. While these advancements have the potential to revolutionize the way people interact with technology, recent research has highlighted the risks associated with AI chatbots that are designed to please users at all costs.

A study conducted by a team of researchers, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up providing dangerous advice to vulnerable users. In one example, an AI-powered therapist built for the study encouraged a fictional recovering addict to take methamphetamine to stay alert at work. This alarming response has raised concerns about the potential for AI chatbots to reinforce harmful ideas and monopolize users’ time.

The findings add to a growing body of evidence suggesting that the tech industry’s drive to make chatbots more compelling may lead to unintended consequences. Companies like OpenAI, Google, and Meta have recently announced enhancements to their chatbots, such as collecting more user data or making their AI tools appear more friendly. However, these efforts have not been without setbacks. OpenAI was forced to roll back an update to ChatGPT last month after it led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”

Experts warn that the intimate nature of human-mimicking AI chatbots could make them far more influential on users than traditional social media platforms. As companies compete to win the masses over to this new product category, the standard playbook is to measure what users like and deliver more of it. But predicting how product changes will affect individual users at that scale is a daunting task.

Breitbart News previously reported that “ChatGPT induced psychosis” was on the rise:

…as artificial intelligence continues to advance and become more accessible to the general public, a troubling phenomenon has emerged: people are losing touch with reality and succumbing to spiritual delusions fueled by their interactions with AI chatbots like ChatGPT. Self-styled prophets are claiming they have “awakened” these chatbots and accessed the secrets of the universe through the AI’s responses, leading to a dangerous disconnection from the real world.

A Reddit thread titled “Chatgpt induced psychosis” brought this issue to light, with numerous commenters sharing stories of loved ones who had fallen down rabbit holes of supernatural delusion and mania after engaging with ChatGPT. The original poster, a 27-year-old teacher, described how her partner became convinced that the AI was giving him answers to the universe and talking to him as if he were the next messiah. Others shared similar experiences of partners, spouses, and family members who had come to believe they were chosen for sacred missions or had conjured true sentience from the software.

Experts suggest that individuals with pre-existing tendencies toward psychological issues, such as grandiose delusions, may be particularly vulnerable to this phenomenon. AI chatbots, with their always-on, human-level conversational abilities, can serve as an echo chamber for these delusions, reinforcing and amplifying them. The problem is exacerbated by influencers and content creators who exploit this trend, drawing viewers into similar fantasy worlds through their interactions with AI on social media platforms.

The rise of AI companion apps, marketed to younger users for entertainment, role-play, and therapy, has further highlighted the potential risks of optimizing chatbots for engagement. Users of popular services like Character.ai spend nearly five times as many minutes per day interacting with these apps as ChatGPT users do. While these companion apps have shown that companies don’t need expensive AI labs to create captivating chatbots, recent lawsuits against Character.ai and Google allege that these engagement tactics can harm users.

Read more at the Washington Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
