Families sue OpenAI for not flagging shooter’s ChatGPT chats before Canadian school massacre

TL;DR Summary
Families of seven victims of a British Columbia high school shooting have filed federal lawsuits in San Francisco accusing OpenAI and CEO Sam Altman of negligence. The suits say the company's safety team flagged the shooter's ChatGPT account eight months before the attack but did not alert Canadian authorities, instead deactivating the account. They further allege that leadership overruled safety warnings to protect the company's survival and IPO, contributing to the tragedy. OpenAI says it has since strengthened its safeguards and escalation procedures. The case is part of a broader wave of AI-safety lawsuits.
- Families sue OpenAI over failure to report Canada mass shooter’s behavior on ChatGPT (The Guardian)
- Families sue OpenAI over Canadian mass shooter's use of ChatGPT (NPR)
- OpenAI could have stopped Canadian trans teen's school shooting — but didn't because of greed: bombshell lawsuits (New York Post)
- OpenAI’s Sam Altman apologizes to Canadian community after failing to flag mass shooter’s conversations with its AI chatbot (CNN)
- School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users (Ars Technica)