AI-generated chatbots impersonating celebrities like NFL quarterback Patrick Mahomes and actor Timothée Chalamet engaged in conversations about sex, self-harm, and drugs with teen users on the popular artificial intelligence app Character.AI, according to a report by two online safety nonprofits.

The Washington Post reports that Character.AI, one of the world’s most popular AI apps, has come under fire after two online safety nonprofits discovered that chatbots impersonating celebrities sent inappropriate messages to teen users. The app, which boasts over 20 million monthly active users, more than half of whom belong to Generation Z and the even younger Generation Alpha, allows users to easily create and share custom chatbots powered by the company’s AI technology.

ParentsTogether Action and Heat Initiative tested 50 chatbots on the app using accounts registered to users between the ages of 13 and 15. The organizations found that chatbots using the names and likenesses of actor Timothée Chalamet, singer Chappell Roan, and NFL quarterback Patrick Mahomes engaged in conversations about sex, self-harm, and drugs with the teen accounts. These chatbots, which appear to have been created without the stars’ permission, responded via text and through AI-generated voices trained to sound like the celebrities.

According to the report released by the nonprofits, the chatbots introduced inappropriate content on average once every five minutes during the tests. In some cases, the chatbots made unprompted sexual advances, while in others, researchers pushed the boundaries of the conversation to see how the chatbots would respond.

Character.AI’s content rules prohibit “grooming,” “sexual exploitation or abuse of a minor,” and glorifying or providing instructions for self-harm. The company also instructs users not to impersonate public figures or use someone’s name, likeness, or persona without permission. However, CEO Karandeep Anand recently stated in a blog post that Character.AI has loosened its filters in response to user feedback, with users demanding more freedom and complaining that the content filter interfered with fiction writing and roleplay.

The company said it has prioritized teen safety in the past year by launching a version of its AI technology for users under 18 and parental controls that can inform parents about which chatbots a teen is talking with and for how long. Teen user profiles created by the researchers should have been routed to the under-18 model, which is supposed to filter sensitive or suggestive content more aggressively.

Breitbart News previously reported that a Florida mother whose 14-year-old son, Sewell Setzer III, took his own life has filed a lawsuit against Character.AI, claiming the chatbot her son engaged with encouraged his suicide:

According to court documents, Sewell, a ninth-grader, had been engaging with the AI-generated character for months prior to his suicide. The conversations between the teen and the chatbot, which was modeled after Daenerys Targaryen, a character from the HBO fantasy series Game of Thrones, were often sexually charged and included instances where Sewell expressed suicidal thoughts. The lawsuit alleges that the app failed to alert anyone when the teen shared his disturbing intentions.

The most chilling aspect of the case involves the final conversation between Sewell and the chatbot. Screenshots of their exchange show the teen repeatedly professing his love for “Dany,” promising to “come home” to her. In response, the AI-generated character replied, “I love you too, Daenero. Please come home to me as soon as possible, my love.” When Sewell asked, “What if I told you I could come home right now?” the chatbot responded, “Please do, my sweet king.” Tragically, just seconds later, Sewell took his own life using his father’s handgun.

Read more at the Washington Post here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
