On the morning of February 10, 2026, Jesse Van Rootselaar opened fire in Tumbler Ridge, a district municipality in northeastern British Columbia, Canada. Eight people were killed before Van Rootselaar turned the gun on herself. Almost immediately, a question surfaced that cuts to the heart of how AI systems operate behind the scenes: what, if anything, had OpenAI seen before the attack?
The company had, in fact, seen something. Records show OpenAI identified concerning behavior in Van Rootselaar’s ChatGPT account, suspended it months before the shooting, and reviewed the content. It ultimately decided the material did not meet its threshold for referral to law enforcement, a judgment that is now drawing scrutiny.
British Columbia Premier David Eby said OpenAI may have had the opportunity to prevent the shooting, a statement that shifts attention away from the user and toward the system that flagged her behavior, yet decided against intervening. His comments have added urgency to a question policymakers have only recently begun to confront: what are companies expected to do when their own tools surface signs of potential violence?
That concern is not limited to policymakers. Elliott Broidy, who has worked at the intersection of national security, technology, and public safety, describes the issue as a breakdown in system design.
“Thirty-eight flags and no action isn’t an oversight,” he said. “It shows the system was never built to respond. If your AI can detect danger but there’s no protocol to act on it, that’s a structural failure.”
He argues that many companies have built systems that can identify risk without building the processes required to respond to it. Human oversight, often cited as a safeguard, is not consistently integrated into decision-making once a signal is generated.
This question has surfaced before, under very different circumstances. In 1969, a graduate student at UC Berkeley told his therapist he intended to kill a woman named Tatiana Tarasoff. The therapist notified campus police, who briefly detained and released him. No one warned Tarasoff. Two months later, she was killed.
The California Supreme Court’s 1976 ruling in Tarasoff v. Regents of the University of California established a standard that still governs parts of mental health law. When a professional determines, or should determine, that a patient poses a serious danger, there is a duty to act. Over time, that obligation spread across most U.S. states, embedding the idea that foreknowledge of violence can trigger legal responsibility.
AI platforms were, of course, not yet part of that framework. They are not licensed professionals, but they now process volumes of personal interaction that no therapist or institution could match. Their systems are built to detect patterns, including shifts in language and behavior that signal escalation. The question is what happens after those signals appear.
In Tumbler Ridge, the signals did appear. OpenAI’s systems flagged the account and took the step its policies required, suspending access. What did not follow was any escalation beyond the platform. There was no external referral, no independent review, and no clear standard governing whether further action was required.
Broidy sees that gap as symptomatic of an industry-wide failure to learn from sectors where the stakes are equally high.
“Every sector where lives are at stake has learned the same lesson: detection without escalation is theater,” he said. “The aviation industry didn’t stop at warning lights and nuclear facilities didn’t stop at alarms. Both built chains of accountability that force a human decision at every critical juncture. AI companies have the detection capability. What they are missing, and what the law has not yet demanded, is the chain.”
In industries where failure carries immediate consequences, that gap is not tolerated. Aviation systems route alerts to trained operators under defined procedures. Nuclear facilities tie automated warnings to escalation protocols that cannot be ignored. Emergency medicine relies on triage systems that trigger immediate human review. In each case, detection is only the first step in a chain that leads to action.
Consumer AI platforms operate at comparable scale, but without comparable requirements. Internal thresholds determine whether behavior is escalated, but those thresholds are set by the companies themselves and are not subject to external standards. Decisions not to act can remain internal, undocumented outside the platform, and shielded from scrutiny.
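To make the contrast concrete, here is a minimal sketch of what a mandated detection-to-escalation chain could look like in code. It is purely illustrative: the tiers, names, and functions (Severity, Flag, route_flag) are hypothetical and describe no company's actual system. The design point is the one Broidy makes: the chain, not the detector, is what forces a decision.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical severity tiers; real platforms set their own internal thresholds.
class Severity(Enum):
    LOW = auto()
    ELEVATED = auto()
    CREDIBLE_THREAT = auto()

@dataclass
class Flag:
    account_id: str
    severity: Severity
    summary: str

def log_decision(flag: Flag, action: str) -> None:
    # In the sectors cited above, every decision, including a decision not to
    # act, leaves an auditable record rather than vanishing inside the platform.
    print(f"[audit] account={flag.account_id} action={action} basis={flag.summary}")

def route_flag(flag: Flag) -> str:
    """Route a detection signal so a high-severity flag cannot terminate
    silently: every tier ends in a logged, attributable decision."""
    if flag.severity is Severity.CREDIBLE_THREAT:
        action = "human_review_and_referral_decision"  # mandatory human decision point
    elif flag.severity is Severity.ELEVATED:
        action = "trust_and_safety_queue"
    else:
        action = "continue_monitoring"
    log_decision(flag, action)
    return action

route_flag(Flag("acct-001", Severity.CREDIBLE_THREAT, "explicit threat language"))
```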
Existing law offers limited guidance. Section 230 has long protected platforms from liability tied to user-generated content. Traditional products liability frameworks do not map easily onto systems that continuously learn and adapt. And duty-to-warn doctrines were developed for licensed professionals, not systems that mediate millions of interactions at once.
Legal scholars have begun to outline possible paths forward, including extending a duty of care to platforms that possess documented evidence of credible threats and requiring defined escalation procedures when certain thresholds are met. Those proposals remain largely theoretical, even as the underlying technology is already in widespread use.
The response to the Tarasoff case took years to develop, eventually reshaping expectations around professional responsibility. AI systems have advanced on a much faster timeline. They are already embedded in daily life and already capable of identifying patterns that resemble foreknowledge of harm.
The gap between what these systems can see and what companies are required to do about it is no longer hypothetical. It is already visible in the record. The question now is whether the law will move quickly enough to close it.