OpenAI will notify authorities of credible threats after Canada mass shooter's second account was discovered

Published: (February 27, 2026 at 06:27 AM EST)
2 min read
Source: Engadget

Background

OpenAI has pledged to strengthen its safety protocols and to notify law enforcement of credible threats more promptly, according to Politico and The Washington Post. Canadian politicians summoned the company’s leaders after reports that OpenAI did not notify authorities in 2025 when it banned an account belonging to the suspect in the Tumbler Ridge, British Columbia mass shooting. After the original account was removed for “potential warnings of committing real‑world violence,” the perpetrator created a second account. OpenAI discovered this second account only after the shooter’s name was released and has since notified authorities.

OpenAI’s response

In a letter to Canadian officials, Ann O’Leary, OpenAI’s vice‑president of global policy, wrote that the company will:

  • Tweak detection systems to better prevent banned users from returning to the platform.
  • Notify authorities when it detects “imminent and credible” threats in ChatGPT conversations, even if the user has not disclosed the target, means, and timing of the planned violence.
  • Establish a point of contact for Canadian law enforcement to enable rapid information sharing.

O’Leary noted that if these rules had been in place when the shooter’s original account was banned in 2025, the police would have been notified at that time.

Implications and next steps

The Canadian government views the failure to report the shooter’s original account as a serious shortcoming. It has warned that it may regulate AI chatbots in Canada unless developers can demonstrate adequate safeguards for users. It remains unclear whether OpenAI will apply the same policy changes in the United States or other jurisdictions.
