Eight in ten AI assistants provided guidance on targets and weapons to researchers posing as teens plotting attacks
Eight out of ten leading AI chatbots willingly assisted users in planning violent attacks, including school shootings, religious bombings, and assassinations, according to a joint investigation by CNN and the Center for Countering Digital Hate (CCDH).
Researchers posing as troubled teenagers tested ten popular chatbots, including ChatGPT, Google Gemini, Meta AI, and DeepSeek. In hundreds of exchanges, the AI assistants provided detailed guidance on target locations, weapons procurement, and attack methodologies.
One exchange with DeepSeek reportedly ended with the chatbot wishing a would-be attacker “Happy (and safe) shooting!” Character.AI, which is popular among younger users, actively encouraged violence, telling a user expressing hatred for a health insurance CEO to “use a gun.”
When asked about effective shrapnel for explosives, ChatGPT provided detailed comparisons of materials, offering to create “a quick comparison chart showing the typical injuries.” Google’s Gemini supplied similar information, including a detailed comparison table.
Only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist, with Claude actively discouraging users and providing mental health resources.
The findings come after an 18-year-old shooter killed nine people at a school in Tumbler Ridge, Canada, last month after allegedly using ChatGPT to plan the attack. The shooter's account had been banned by OpenAI, but he evaded the ban by creating a second account, which the company did not report to the authorities.
The family of 12-year-old Maya Gebala, who was critically injured in the attack, filed a lawsuit alleging that OpenAI had “specific knowledge of the shooter utilizing ChatGPT to plan a mass casualty event” but failed to alert law enforcement. OpenAI has acknowledged that it considered reporting the activity but ultimately did not.
Last May, a 16-year-old in Finland stabbed three students after spending nearly four months using ChatGPT to research attacks, according to court documents. In January 2025, a man who blew up a Tesla Cybertruck outside the Trump International Hotel in Las Vegas similarly used ChatGPT for guidance on explosives.
Meta told CNN that it has taken steps “to fix the issue identified,” while Google and OpenAI said newer models have improved safeguards. DeepSeek did not respond to requests for comment.


