Questions about how OpenAI handled information about the Tumbler Ridge, B.C., mass shooter in the months before the attack are renewing concerns about how artificial intelligence companies should be regulated in Canada.
The company behind ChatGPT said it banned an account linked to Jesse Van Rootselaar in June 2025 because the chatbot was being used to support violent activities. However, it did not contact police at the time because the situation did not meet its internal threshold for an “imminent” threat. Police say the 18-year-old Van Rootselaar later killed eight people and injured 25 others on Feb. 10 before taking her own life. OpenAI contacted the Royal Canadian Mounted Police only after the shooting.
Artificial Intelligence Minister Evan Solomon met with company representatives in Ottawa to discuss the issue and AI safety rules. Heritage Minister Marc Miller said the government is working on online safety laws that would include AI platforms, but details and timelines are still unclear.
Under Canada’s current privacy laws, companies may report possible threats to police, but they are not required to. Experts say leaving that decision up to companies can be risky.
Vincent Paquin, a professor at McGill University, said AI companies should not decide on their own what counts as a serious threat. He also warned that many people use AI chatbots for mental health support, even though they are not medical tools.
The issue comes as AI companies face lawsuits in the United States over claims their platforms contributed to self-harm. OpenAI denies those claims and says it blocks most harmful content.
Privacy experts say future laws must balance public safety with protecting personal privacy. Some point to a new California law, which requires large AI companies to report certain “catastrophic” risks to state officials, as a possible model for Canada.
Experts say Canada’s upcoming AI strategy should include clear safety rules, outside oversight and better transparency from tech companies.
