OpenAI Flagged Canada Mass Shooter for Violent Content, but Didn't Contact the Authorities
A disturbing revelation has emerged about a mass shooting in Canada: reports reveal that OpenAI flagged the account of the alleged shooter months before the attack but did not report him to the authorities.

The Straits Times reported that OpenAI flagged and banned 18-year-old Jesse Van Rootselaar’s ChatGPT account about eight months before he allegedly murdered eight people in and around a school in Tumbler Ridge, British Columbia.

The company’s internal systems detected troubling prompts Van Rootselaar had entered related to gun violence. An OpenAI spokesperson told reporters the company “proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”

Van Rootselaar’s chats described alarming scenarios involving gun violence, and he allegedly used the platform to plan the attack. The company’s system flagged his interactions in June 2025, triggering an internal review and a debate among roughly a dozen OpenAI employees about whether the company should alert Canadian law enforcement.

The Wall Street Journal reported that some employees urged management to contact police. However, decision-makers ultimately concluded that Van Rootselaar's conduct did not meet the threshold of an “imminent and credible” threat that would justify violating a user’s privacy. Instead, the company banned the account for violating its usage policies.

The company said it “considered referring the account to law enforcement at the time, but did not identify credible or imminent planning and determined it did not meet the threshold.”

The shootings began at Tumbler Ridge Secondary School on February 10, where the gunman killed six people and injured at least 25 others. Police found Van Rootselaar dead from a self-inflicted gunshot wound.

Investigators later discovered the bodies of the suspect’s 38-year-old mother and 11-year-old stepbrother, making this one of the worst mass shootings in Canada’s history.

OpenAI claims its systems are trained to detect and discourage “the possible furthering of violent activities,” and that the company is now reviewing when this behavior should trigger referrals to law enforcement.

Canada’s federal minister responsible for artificial intelligence said the case raised “serious questions” about whether voluntary corporate standards are enough to keep people safe. He suggested that he and other government officials might use the shooting to justify expanded oversight of artificial intelligence companies; Ottawa could consider imposing rules that spell out when AI platforms must report potential threats to the authorities.

OpenAI spokesperson Kayla Wood said the company is attempting to balance user privacy with public safety. It is working with regulators and other experts to avoid “unintended consequences from overly broad referrals to law enforcement.”

Stories like this have proliferated as artificial intelligence has gone mainstream and ordinary consumers have adopted the technology. As with social media platforms before them, AI companies will likely struggle to strike a balance between public safety and privacy rights, as Wood suggested.

The exact wording of the prompts Van Rootselaar used is not yet known, so it is difficult to judge whether OpenAI made the right decision in this case.
