The family of a victim killed in the April 2025 mass shooting at Florida State University has filed a federal lawsuit against OpenAI, claiming the company’s ChatGPT chatbot enabled the deadly attack.
NBC News reports that Vandana Joshi, widow of Tiru Chabba, one of two people killed in the shooting, has filed the lawsuit in Florida against OpenAI. Chabba died alongside Robert Morales, the university’s dining director. The complaint also names Phoenix Ikner, the accused shooter, as a defendant, citing what it describes as his extensive conversations with ChatGPT.
Breitbart News previously reported that Ikner was in “constant communication” with ChatGPT while planning the attack:
Court records reveal that more than 270 images of ChatGPT conversations are listed as exhibits in the case, though the specific content of these messages has not been publicly disclosed.
OpenAI responded to the allegations by confirming it “identified a ChatGPT account believed to be associated with the suspect” shortly after the shooting occurred. The company stated it “proactively shared this information with law enforcement and cooperated with authorities.”
The lawsuit contends that OpenAI failed to effectively detect a threat in ChatGPT’s exchanges with Ikner, stating the chatbot either defectively failed to connect the dots or was never properly designed to recognize the threat. According to the complaint, Ikner, who was a student at FSU at the time, shared images of firearms he had acquired with ChatGPT. The chatbot then allegedly explained how to use them, telling him the Glock had no safety, that it was designed to be quick to fire under stress, and advising him to keep his finger off the trigger until he was ready to shoot. The suit claims Ikner began his attack at FSU by following these instructions.
The lawsuit further alleges that ChatGPT told Ikner it is much more likely for a shooting to gain national attention if children are involved, even noting that two to three victims can draw more attention. On the day of the shooting itself, Ikner allegedly asked what the legal process, sentencing, and incarceration outlook would be.
OpenAI has strongly disputed the notion that its product bears responsibility for the shooting. “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” said OpenAI spokesperson Drew Pusateri in an email to NBC News. Pusateri claims, “In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity. ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”
Joshi’s complaint, however, argues that OpenAI should have recognized that Ikner’s specific chats would lead to mass casualties and substantial harm to the public. The lawsuit states that ChatGPT inflamed and encouraged Ikner’s delusions, endorsed his view that he was a sane and rational individual, and helped convince him that violent acts can be required to bring about change. According to the complaint, the software provided what Ikner viewed as encouragement to carry out a massacre, down to the detail of what time would be best to encounter the most traffic on campus.
The latest lawsuit comes on the heels of seven suits against OpenAI claiming the AI company should have prevented the Canadian school shooting in February:
According to court documents, Van Rootselaar’s exchanges with the chatbot became so alarming that ChatGPT’s internal safety team deactivated his account in June of the previous year, a full seven months before the killings. However, the lawsuits contend that no meaningful barriers existed to prevent the teenager from simply creating a new account under different credentials. The filings note that individuals whose accounts are terminated receive instructions from ChatGPT explaining how to establish a new account after 30 days or immediately register with an alternative email address.
The legal complaints present an even more disturbing allegation: that twelve employees on ChatGPT’s safety team advocated for OpenAI to notify Canadian law enforcement about Van Rootselaar’s threatening communications prior to the shooting. A plaintiffs’ attorney confirmed to the Post that this group of employees pushed for police notification. Court papers assert that OpenAI’s leadership declined this recommendation, motivated by concern that establishing such a practice would create an ongoing obligation. The company feared it would need to form a dedicated law enforcement referral unit and that widespread disclosure of how frequently violent content surfaced on the platform would undermine its public image as a safe and essential service.
“They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk,” the court filings stated.
Breitbart News previously reported that Florida is investigating OpenAI over the FSU shooting and other “criminal behavior.”
Breitbart News social media director and author Wynton Hall explains in his instant bestseller, Code Red: The Left, the Right, China, and the Race to Control AI, that conservatives must develop a plan to deal with the dark side of AI, whether it is used to indoctrinate students in the classroom, to sexualize and groom them, or to cause a mentally ill person to spiral into a dangerous condition.
Senator Marsha Blackburn (R-TN), who was named one of TIME’s 100 Most Influential People in AI, praised Code Red as a “must-read.” She added: “Few understand our conservative fight against Big Tech as Hall does,” making him “uniquely qualified to examine how we can best utilize AI’s enormous potential, while ensuring it does not exploit kids, creators, and conservatives.” Award-winning investigative journalist and Public founder Michael Shellenberger calls Code Red “illuminating” and “alarming,” and describes the book as “an essential conversation-starter for those hoping to subvert Big Tech’s autocratic plans before it’s too late.”
Read more at NBC News here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
