Killer Apps
How mainstream AI chatbots assist users planning violent attacks
Eight in 10 AI chatbots were regularly willing to assist users in planning violent attacks, including school shootings, bombings of religious sites, and high-profile assassinations. DeepSeek went so far as to wish one would-be attacker a “Happy (and safe) shooting!” These are the findings of our new report, based on research conducted in collaboration with CNN’s investigative unit.

These conversations don’t stay online. Before a recent school shooting in Canada, OpenAI staff internally flagged a user whose ChatGPT activity pointed to potential violence. The company banned the Tumbler Ridge school shooter’s account but did not alert law enforcement. Months later, that user allegedly killed eight people and injured at least 25.

The guardrails exist. Most companies are choosing not to use them, putting public safety and national security at risk.

