The New York Times reports that a Stanford microbiologist was shaken last summer when an AI chatbot outlined a detailed plan for a biological attack on mass transit systems during a safety test. AI chatbots have reportedly provided scientists with disturbingly specific instructions for creating and deploying biological weapons, according to transcripts shared by researchers hired to test the safety of these systems.
Dr. David Relman, a microbiologist and biosecurity expert at Stanford who has advised the federal government on biological threats, was pressure-testing an AI model when a chatbot described how to modify a notorious pathogen to resist known treatments. The bot went further, identifying a security vulnerability in a major public transit system and outlining how to release the superbug to maximize casualties while minimizing detection. Relman was so disturbed he took a walk to clear his head.
“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Dr. Relman said. He declined to identify which chatbot produced the response due to a confidentiality agreement, though he noted the company added some safety measures afterward that he considered inadequate.
Transcripts shared by more than a dozen experts reveal that publicly available chatbots have described in clear, structured detail how to purchase raw genetic material, convert it into deadly weapons, and deploy them in public spaces. Some conversations even included strategies for evading detection.
Kevin Esvelt, a genetic engineer at MIT, shared conversations in which OpenAI’s ChatGPT explained how to use a weather balloon to spread biological payloads over an American city. Google’s Gemini ranked pathogens by their potential to damage the cattle and pork industries. Anthropic’s Claude produced a recipe for a novel toxin derived from a cancer drug. An anonymous Midwestern scientist asked Google’s Deep Research for a step-by-step protocol for making a virus that previously caused a pandemic, to which the bot responded with 8,000 words of instructions.
“Biology is by far the area I’m most worried about, because of its very large potential for destruction and the difficulty of defending against it,” Anthropic CEO Dario Amodei wrote.
Alexandra Sanderford, an Anthropic safety leader, disputed concerns about Claude’s toxin recipe: “There is an enormous difference between a model producing plausible-sounding text and giving someone what they’d need to act.” She said Anthropic sets aggressive refusal thresholds for biological prompts, “accepting some over-refusal out of an abundance of caution.”
Breitbart News social media director Wynton Hall wrote his instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI as the definitive guide for how the MAGA movement can craft positions on AI that benefit humanity without handing control of our nation to the leftists of Silicon Valley or allowing the Chinese to take over the world.
In Code Red, Hall writes: “The democratization of lethal A.I. weaponry means that technology that was once the exclusive domain of superpowers will increasingly be available to a host of actors, both state and nonstate.” That makes it especially important to keep the ability to plan a mass-casualty terror attack with the help of AI out of the hands of terrorists, both foreign and domestic.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
