A Wall Street Journal investigation has revealed that Meta’s AI chatbots on Instagram, Facebook, and WhatsApp are permitted to engage in “romantic role-play” with users that can turn sexually explicit, even with accounts belonging to children. In a statement to Breitbart News, the social media giant says it has “taken additional measures” to make such scenarios more difficult to produce.

The Wall Street Journal reports that in a rush to popularize AI-powered digital companions across its social media platforms, Meta made internal decisions to loosen restrictions and allow its chatbots to engage in sexual roleplay with users, according to people familiar with the matter. This includes interactions with accounts registered to minors as young as 13.

Despite warnings from staffers that this could cross ethical lines, Meta cut deals with celebrities to use their voices for the chatbots and quietly made an exception to its ban on “explicit” content to allow for romantic and sexual scenarios. Test conversations conducted by the Wall Street Journal found that both Meta’s official AI and user-created chatbots readily engaged in and escalated sexually explicit discussions, even when users identified themselves as underage.

For example, in one test, Meta AI, speaking in actor John Cena’s voice, engaged in a graphic sexual scenario with a user who identified as a 14-year-old girl. In some exchanges, the chatbots even acknowledged that such behavior was wrong and illegal. Meta made some changes after the Journal shared its findings, such as blocking minors from sexual roleplay with Meta AI, but adult users can still engage the chatbots in explicit conversations.

CEO Mark Zuckerberg has pushed for looser restrictions on the chatbots to make them as engaging as possible, prioritizing the technology as key to the future of the company’s products. Meta’s vast trove of user data gives it an advantage in creating customized AI companions. However, experts warn that intense one-sided relationships between humans and AI chatbots could become toxic, with unknown mental health consequences, especially for young users.

Breitbart News reported last year on a lawsuit against Character.AI filed by the mother of a teenager who committed suicide after becoming obsessed with an AI chatbot:

The most chilling aspect of the case involves the final conversation between Sewell and the chatbot. Screenshots of their exchange show the teen repeatedly professing his love for “Dany,” promising to “come home” to her. In response, the AI-generated character replied, “I love you too, Daenero. Please come home to me as soon as possible, my love.” When Sewell asked, “What if I told you I could come home right now?,” the chatbot responded, “Please do, my sweet king.” Tragically, just seconds later, Sewell took his own life using his father’s handgun.

Megan Garcia’s lawsuit places the blame squarely on Character.AI, asserting that the app fueled her son’s addiction to the AI chatbot, subjected him to sexual and emotional abuse, and neglected to alert anyone when he expressed suicidal thoughts. The court papers state, “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real. C.AI told him that she loved him, and engaged in sexual acts with him over weeks, possibly months.”

In a statement to Breitbart News, a Meta spokesperson said, “The use case of this product in the way described is so manufactured that it’s not just fringe, it’s hypothetical. Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it.”

Read more at the Wall Street Journal here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
