Canadian government demands safety changes from OpenAI

Published: February 25, 2026 at 03:49 PM EST
2 min read
Source: Engadget

Background

Canadian officials summoned leaders from OpenAI to Ottawa this week to address safety concerns about ChatGPT. The crux of the government’s concerns was that OpenAI did not notify authorities when it banned the account of a user who allegedly committed a mass shooting in British Columbia earlier this month.

Government Concerns

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government is going to be making changes,” said Justice Minister Sean Fraser [Reuters](https://www.reuters.com/sustainability/society-equity/canada-tells-openai-boost-safety-measures-or-be-forced-by-government-2026-02-25/).

It remains unclear what form those government-led changes or rules might take. Two previous attempts to pass an online harms act in Canada were unsuccessful.

Wall Street Journal Report

A recent report by The Wall Street Journal claimed that in 2025 some OpenAI employees flagged the account of the alleged shooter, Jesse Van Rootselaar, as containing potential warnings of real-world violence and called for leadership to notify law enforcement. Although the account was banned for policy violations, a company representative said the activity did not meet OpenAI’s criteria for engaging the local police [WSJ](https://www.wsj.com/us-news/law/openai-employees-raised-alarms-about-canada-shooting-suspect-months-ago-b585df62?mod=e2tw).

Official Reactions

“Those reports were deeply disturbing, reports saying that OpenAI did not contact law enforcement in a timely manner,” said Canadian Artificial Intelligence Minister Evan Solomon ahead of the discussion with company leaders [Politico](https://www.politico.com/news/2026/02/23/canada-openai-chatgpt-school-shooting-00793471). “We will have a sit-down meeting to have an explanation of their safety protocols and when they escalate and their thresholds of escalation to police, so we have a better understanding of what’s happening and what they do.”

OpenAI has been implicated in multiple wrongful-death suits. In a December 2025 lawsuit, ChatGPT was accused of encouraging “paranoid beliefs” before a man killed his mother and himself [Engadget](https://www.engadget.com/ai/lawsuit-accuses-chatgpt-of-reinforcing-delusions-that-led-to-a-womans-death-183141193.html). The company is also at the center of a wrongful-death lawsuit alleging that the AI helped a teenager plan and commit suicide [Engadget](https://www.engadget.com/ai/the-first-known-ai-wrongful-death-lawsuit-accuses-openai-of-enabling-a-teens-suicide-212058548.html).
