OpenAI CEO Apologises for Not Alerting Police Before Canada Mass Shooting

OpenAI CEO Sam Altman has apologised for failing to alert law enforcement about a teenager's ChatGPT account that had been flagged for violent activity before a mass shooting in Canada. The 18-year-old attacker, Jesse Van Rootselaar, killed her mother and half-brother before opening fire at a school in British Columbia, leaving five children and a teacher dead. Altman expressed deep remorse in a letter, acknowledging the company should have informed authorities after banning the account in June 2025. A lawsuit alleges the teenager used ChatGPT as a 'trusted confidante' and discussed gun violence scenarios, with some employees recommending police notification.

Key Points: OpenAI CEO Apologises for Canada Mass Shooting Lapse

  • OpenAI CEO Sam Altman apologises for not alerting police about flagged account
  • 18-year-old Jesse Van Rootselaar killed mother, half-brother, then opened fire at school
  • Five children and a teacher died, 25 injured in British Columbia shooting
  • Lawsuit alleges ChatGPT was used as 'trusted confidante' for violent discussions

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. - Sam Altman"

New Delhi, April 25

OpenAI CEO Sam Altman has apologised for the AI company's failure to alert law enforcement agencies to warning signs linked to a teenager who later carried out one of the deadliest mass shootings in Canada's recent history.

The apology came more than two months after the attack, in which 18-year-old Jesse Van Rootselaar killed her mother and half-brother before opening fire at a secondary school in Tumbler Ridge, British Columbia, leaving five children and a teacher dead, according to multiple reports.

According to reports, Altman acknowledged in a letter shared by local news outlet Tumbler RidgeLines and British Columbia Premier David Eby that OpenAI should have informed authorities after flagging the attacker's account.

The attacker later died of a self-inflicted gunshot wound.

At least 25 people were injured in the shooting, which Canadian authorities have described as one of the country's worst mass casualty incidents.

"I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child," Altman said in the letter.

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognise the harm and irreversible loss your community has suffered," he added.

OpenAI had earlier said that Rootselaar's ChatGPT account was internally flagged in June 2025 for misuse 'in furtherance of violent activities' and was subsequently suspended.

However, the company did not notify authorities at the time, stating that the activity did not meet the threshold of posing a credible or imminent threat.

The company now says it is reviewing its policies and will work more closely with governments to prevent similar incidents. "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," Altman said.

A lawsuit filed by the family of one of the victims has alleged that the teenager used ChatGPT as a 'trusted confidante' and discussed multiple gun violence scenarios in the days leading up to the attack.

The suit claimed that some OpenAI employees had flagged the conversations as indicating a potential risk of serious harm and recommended notifying law enforcement, but the suggestion was rejected because the threat was not deemed imminent; instead, the account was only suspended.

It further alleged that the attacker was able to create a second account after the first was banned, allowing similar conversations to continue.

The company reportedly contacted Canadian authorities only after the shooting.

- IANS

Reader Comments

Priya S:
Heartbreaking tragedy 😢 As a mother, I can't imagine what those families are going through. The fact that OpenAI flagged the account but didn't inform authorities is baffling. And the attacker could just make a second account! Our own laws in India need to catch up with technology to prevent such incidents. We shouldn't wait for a tragedy in our country to act.
Michael C:
Having worked in AI safety myself, this is exactly the kind of scenario we worry about most. The "credible and imminent threat" threshold is problematic because by the time something is imminent, it's often too late. OpenAI needs to prioritize harm prevention over legal liability concerns. Also concerning that the teen created a second account - shows how easily motivated individuals can bypass restrictions.
Kavya N:
Sam Altman saying "words can never be enough" is correct, but at least he apologised publicly. What worries me is the broader pattern - AI companies in the US are quite lax about these things. In India, we have the IT Act and data protection rules, but nothing specifically for AI content moderation. Hope our government takes note and creates proper guidelines before we face something similar.
Sarah B:
This lawsuit is going to be massive. The fact that employees flagged it and were overruled is a major red flag about OpenAI's internal decision-making. Also concerning that the teenager was using ChatGPT as a "trusted confidante" - shows how isolated some young people are. Mental health support and AI regulation both need urgent attention globally, including in India where mental health stigma is still high.