OpenAI CEO Sam Altman has apologized to members of a Canadian community where a mass shooting took place earlier this year, saying the company failed to flag the shooter's ChatGPT account to law enforcement.
“The pain your community has endured is unimaginable,” Altman wrote in a letter shared Friday on social media by British Columbia Premier David Eby. “I have been thinking of you often over the past few months.”
Eight people were killed in the Feb. 10 massacre in the small community of Tumbler Ridge in northeast British Columbia. Six people were fatally shot when 18-year-old Jesse Van Rootselaar opened fire at Tumbler Ridge Secondary School, authorities said, and the shooter’s mother and 11-year-old brother were killed at a nearby residence. Van Rootselaar died of a self-inflicted gunshot wound, officials said.
Altman wrote in the letter, dated Thursday, that Van Rootselaar’s ChatGPT account had been banned in June 2025 — about eight months prior to the shooting.
“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said.
In February, OpenAI told CBS News that Van Rootselaar’s account had been flagged last year by automated abuse-detection tools and human investigators who identify potential misuse of ChatGPT for violent activities. OpenAI said the account was then banned for violating its usage policies.
OpenAI said that the company had weighed whether to flag the account to law enforcement, but had determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, failing to meet the threshold for referral.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” OpenAI said in a statement to CBS News in February following the shooting. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
OpenAI says that ChatGPT is trained to discourage real-world harm and is instructed to refuse to help when it detects illicit intent. Users who indicate plans to harm others are flagged to human reviewers, who determine whether a case poses an imminent threat of physical harm and should be referred to law enforcement, according to the company.
Altman wrote in his letter that OpenAI will remain focused on preventative efforts “to help ensure something like this never happens again.”
“I want to express my deepest condolences to the entire community,” Altman said. “No one should ever have to endure a tragedy like this.”
Earlier this week, Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI after reviewing messages between ChatGPT and a Florida State University student accused in an April 2025 campus shooting that killed two people and wounded several others.
Uthmeier said his team determined that ChatGPT offered “significant advice” to the alleged shooter. His office is issuing subpoenas to OpenAI requesting records of the company’s protocols for reporting possible crimes to law enforcement and of its handling of user threats.
Regarding the Florida shooting, an OpenAI spokesperson said in a statement to CBS News Tuesday that “after learning of the incident,” it “identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement.”