French cybercrime prosecutors have announced that their investigation into Elon Musk and his social media platform X has been upgraded to a criminal probe, a significant escalation of the legal scrutiny surrounding the tech tycoon’s deepfake porn scandal.
CNBC reports that the Paris prosecutor’s office confirmed the elevation of the investigation, which originally began in early 2025 following a request from French Member of Parliament Éric Bothorel. The probe centers on allegations of algorithmic manipulation designed to interfere in French politics and concerns about the spread of AI-generated deepfake content on the social network.
According to the prosecutor’s office, both Musk and former X CEO Linda Yaccarino received summonses to appear before French authorities on April 20. Neither complied with the request to answer questions related to the investigation.
The investigation focuses on multiple serious allegations against X and its AI chatbot Grok. Authorities are examining complaints that X employed algorithmic manipulation to influence and interfere in French political processes. Additionally, prosecutors are investigating allegations that Musk and X knowingly permitted users of Grok to create and disseminate nonconsensual sexually explicit deepfake images through the platform.
In February, French authorities conducted a raid on X’s Paris office, prompting Musk to characterize the investigation as a “political attack” against him and his company.
Grok, the AI chatbot at the center of some allegations, is developed by xAI, Musk’s artificial intelligence company. In a series of corporate consolidations earlier this year, xAI acquired X, which Musk already owned, and subsequently merged with SpaceX. The technology has been integrated across Musk’s business empire, with a version of Grok now embedded in electric vehicles manufactured by Tesla.
The French investigation is not occurring in isolation. Multiple international jurisdictions are conducting similar probes into X and Grok, including the California attorney general’s office. These investigations generally examine whether Musk and his companies deliberately facilitated the creation and distribution of deepfake explicit images, including child sexual abuse materials, based on photographs or videos of individuals who did not consent to such use.
The matter has created tension between the United States and French governments. In April, the U.S. Department of Justice reportedly informed French authorities that it would not provide assistance in investigating Musk or X. The Justice Department allegedly accused France of inappropriately interfering with an American business, adding a diplomatic dimension to the legal proceedings.
The case highlights ongoing global concerns about the regulation of AI-generated content, particularly deepfakes that can be used to create convincing but fraudulent images and videos. Lawmakers and regulators worldwide have struggled to keep pace with rapidly advancing AI technology and its potential for misuse.
In his new instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, Breitbart News social media director Wynton Hall writes extensively on how we can protect our children and grandchildren — a pressing topic given the specific dangers AI platforms like Grok pose to teens.
The “bottom line,” Hall asserts, is that “there’s no justification for a child to engage with AI character or companion platforms” — before reminding readers that children in the U.S. “already spend too much time staring at screens.”
“Regular use of parental controls, strong data privacy and age-appropriate settings, and discussions of online safety are essential to help kids navigate dangers and use technology responsibly,” the author suggests.
Read more at CNBC here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.