Families Sue OpenAI Over Tumbler Ridge Shooting, Cite Chatbot Negligence

Seven families of victims from a deadly mass shooting in Tumbler Ridge, British Columbia, have filed new lawsuits against OpenAI and CEO Sam Altman in a California court. They accuse the artificial intelligence giant of negligence, asserting the company ignored internal warnings about the shooter's violent ChatGPT interactions months before the February attack. Lawyer Jay Edelson stated, "We're going to put the jury in the room when the decision was made."
The legal landscape surrounding artificial intelligence companies shifted this week as seven families from Tumbler Ridge, British Columbia, redirected their legal challenge against OpenAI. The new lawsuits, filed in California on Wednesday, replace an earlier action brought in a Canadian court by the family of Maya Gebala, a 12-year-old survivor. That earlier case has been voluntarily withdrawn.
The move to a U.S. jurisdiction signals a determined effort to hold the AI developer and its leadership accountable for alleged lapses in safety protocols. Jay Edelson, the lead attorney representing the families, said he anticipates filing more than two dozen additional lawsuits against OpenAI on behalf of victims and community members from Tumbler Ridge, and plans to request jury trials in each case. "We feel very comfortable making a case in front of a jury," Edelson told the BBC.
That confidence reflects a belief that the human element of the tragedy will resonate with a jury more strongly than purely technical arguments. Eight people died in the February 10 attack at a secondary school in Tumbler Ridge, a small town in the northern Rockies. Six of the victims were children.
The shooter, 18-year-old Jessie Van Rootselaar, died at the scene from a self-inflicted gunshot wound. Maya Gebala, who was shot three times, in the head, neck, and cheek, remains hospitalized, a lasting measure of the violence's toll. Following the incident, media reports revealed that OpenAI's own safety team had flagged Van Rootselaar's ChatGPT activity months before the attack, specifically noting references to gun violence.
Despite these internal alerts, the company did not inform local police. The central accusation in the lawsuits is that OpenAI and its senior leadership, including CEO Sam Altman, were negligent, and that by failing to report the suspect's disturbing online behavior to law enforcement, they effectively aided the mass shooting. According to the lawsuit filed by Gebala's family, OpenAI possessed "actual knowledge" of the shooter's intent to carry out an attack.
The legal filing asserts that Van Rootselaar described "scenarios involving gun violence" during conversations with ChatGPT. These troubling exchanges were reportedly identified by a 12-person safety team within OpenAI. That team, the lawsuit alleges, then recommended that the suspect be reported to the Royal Canadian Mounted Police (RCMP).
Executive leadership at OpenAI, however, vetoed that recommendation, the lawsuit claims. The filing further alleges that the decision not to alert police was made to protect the company's valuation and public reputation. OpenAI currently holds an estimated value of $850 billion.
The lawsuit starkly states, "They did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk."

Last week, Sam Altman published an open letter expressing his sorrow to the victims' families. "I am deeply sorry that we did not alert law enforcement," Altman wrote in the letter, which appeared in the local news outlet Tumbler RidgeLines. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."

However, the lawsuits also accuse OpenAI of misleading the public about the suspect being banned from the platform after the dangerous activity was flagged. The legal documents contend that OpenAI's system makes it too easy for users to create new accounts.
The suspect, the lawsuit claims, subsequently opened another account using the same name and "continued using ChatGPT to plan the attack." This allegation strikes at the heart of the company's stated safety measures. In response to the lawsuits, an OpenAI spokesperson said the company maintains "a zero-tolerance policy for using our tools to assist in committing violence." The spokesperson also said OpenAI had "already strengthened our safeguards," including improved assessment and escalation processes for "potential threats of violence." OpenAI further disputed the claim that the suspect was able to create a new account, telling the BBC that it revokes access from banned users, which may involve disabling their account and implementing measures to prevent new account creation.
On Tuesday, OpenAI also published a blog post detailing its updated approach to responding to users who exhibit potentially dangerous behavior on ChatGPT. Those steps suggest the company recognizes the need for greater vigilance. But the heart of the legal filing is a more pointed allegation: a direct decision by leadership to override its own safety team's recommendation.
That is not a technical glitch or an oversight; it describes a calculated risk assessment, and it will be a central point of contention in court. Much as a physician owes a duty of care to a patient, the legal system is now grappling with the extent of the duty artificial intelligence platforms owe to the broader public.
Legal precedent for holding technology companies liable for user-generated content is complex and varied, often hinging on whether a platform actively creates content or merely hosts it. These lawsuits push the boundaries of that framework, arguing that OpenAI's models are not passive hosts but active participants in user interactions, and therefore carry a higher responsibility. The fallout also extends beyond the immediate human tragedy to the company's finances.
For a company valued at $850 billion, the lawsuits pose significant financial and reputational risk; the legal costs alone will be considerable. And the case is about more than a single incident: it is a test case for the emerging AI industry.
Much will turn on the legal process ahead. Edelson said he has requested the suspect's chat logs from OpenAI and has been denied access, but he remains confident the logs will be obtained during the discovery phase of the lawsuits. "We're going to show them how people were jumping up and down saying we need to protect this town, and we're going to show them how Sam Altman and OpenAI routinely make these decisions to put their own interests first," Edelson told the BBC.
The Tumbler Ridge incident is not the only one drawing scrutiny to OpenAI's safety protocols. The company is also facing a criminal probe in Florida, a separate investigation into the use of ChatGPT by a man accused of carrying out a shooting at Florida State University last year, an attack that killed two people and injured several others.
This parallel investigation adds another layer of complexity to the legal challenges facing the company, suggesting a broader pattern of incidents in which AI interactions preceded violence. OpenAI had previously committed to strengthening its safety measures in discussions with Canadian officials after the Tumbler Ridge attack. Altman reiterated that commitment in his open letter, stating the company will continue "working with all levels of government to help ensure something like this never happens again."
These lawsuits could set a precedent for how artificial intelligence companies are held liable for content generated or facilitated by their platforms. The upcoming court proceedings will scrutinize the balance between corporate responsibility, user privacy, and public safety in the rapidly evolving AI landscape.
Observers will watch closely for the outcome of discovery requests, particularly for the shooter's chat logs, which could offer critical insight into OpenAI's internal decision-making and what the company knew before the tragedy. The legal battles will likely shape future regulatory discussions on AI safety standards globally, pushing for clearer guidelines on how AI companies handle potential threats identified through their systems. The industry is watching to see how these cases influence public perception of, and investment in, AI technology, especially the ethical frameworks governing its development and deployment.
Key Takeaways
- Seven families from Tumbler Ridge, B.C., filed new lawsuits against OpenAI and CEO Sam Altman in California courts.
- The lawsuits allege OpenAI's executive leadership vetoed a safety team's recommendation to report the shooter's violent ChatGPT activity to police months before the February attack.
- Lawyer Jay Edelson plans to file over two dozen similar lawsuits and seek jury trials, aiming to uncover internal decision-making processes.
- OpenAI denies allegations that the shooter was able to create a new account and says it has strengthened safety protocols; Altman has issued an apology.
Source: BBC News