Three teenage girls have filed a class-action lawsuit against Elon Musk’s AI company xAI, alleging that its Grok image generator was used to create and distribute child sexual abuse material featuring their likenesses.
The Guardian reports that the lawsuit, filed Monday in California, where xAI is headquartered, represents the first legal action brought by minors in response to widespread concerns about Grok’s generation of nonconsensual nude images earlier this year. The three plaintiffs, two of whom are minors, are from Tennessee and claim that photographs of them were used without their knowledge or consent to produce sexualized AI-generated content.
According to the complaint, the teenage girls discovered that nude, AI-altered deepfake images of themselves had been uploaded to a Discord server and shared across various online platforms. The discovery came when one plaintiff, identified in court documents as Jane Doe 1, received a message on Instagram in December from an anonymous user. The message alerted her that someone in her social circle had uploaded deepfake videos and images to a Discord server depicting her and other girls from her high school in naked and sexualized positions.
The complaint states that Jane Doe 1 identified three AI-altered images that appeared to be based on photographs taken when she was a minor, including one from her school’s homecoming celebration. The complaint describes the content in explicit terms, stating that the images showed her entire body including her genitals without clothes, and that a video depicted her undressing until she was entirely nude.
Law enforcement was alerted to the images, and police arrested a suspect later that month. Investigators reportedly found child sexual abuse material on the suspect’s phone that was allegedly produced using xAI’s image and video generation technology. Criminal investigators also discovered that the images had been shared on the messaging app Telegram, where they were allegedly being used as a form of currency to trade for other child pornography.
The other two plaintiffs discovered in February that similar material featuring them had also been generated through artificial intelligence and distributed online. The lawsuit seeks damages against xAI for reputational and mental health harms resulting from the creation and distribution of these images.
Vanessa Baehr-Jones, a lawyer representing the plaintiffs, said in a statement that “xAI chose to profit off the sexual predation of real people, including children, despite knowing full well the consequences of creating such a dangerous product.” The mother of one of the girls described the traumatic impact through a representative, saying “Watching my daughter have a panic attack after realizing that these images were created and distributed without any hope of recalling them was heartbreaking.”
The complaint alleges that the child sexual abuse material was created using a third-party application that licensed and relied on Grok’s artificial intelligence technology to produce the images and videos. Although the images were not created directly on the X website or Grok app, the lawsuit argues that this use still requires xAI’s servers and that xAI profits from licensing its technology to these applications. Lawyers for the plaintiffs accuse xAI of effectively offloading liability through its licensing structure and lack of oversight.
Wynton Hall’s newly released book, CODE RED, covers a wide range of topics related to AI, from its impact on elections and the economy to faith and family. This includes the impact AI has on young Americans, from mental health harms to the rise of “AI girlfriends.” The serious issue of AI being used to create nonconsensual sexualized deepfakes is exactly why conservatives must seize the opportunity to create effective AI policies and safeguards.
Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised Code Red as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red “illuminating” and “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”
Read more at the Guardian here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.