Sunday, May 3, 2026
Smart Again
Victims allege OpenAI and Sam Altman are responsible for a mass shooting

April 29, 2026
in Law & Defense


A community vigil in Tumbler Ridge two days after the rural community experienced one of Canada’s deadliest shootings. Photo: Paige Taylor White/AFP/Getty


Victims of the Tumbler Ridge mass shooting and their families sued OpenAI and its CEO, Sam Altman, in US district court in San Francisco on Wednesday, alleging negligence, product liability, and other violations. The civil complaints are the latest in a wave of litigation against OpenAI alleging that its globally popular chatbot, ChatGPT, helped people commit lethal violence.

The complaints were filed by families of multiple victims wounded and killed at Tumbler Ridge Secondary School in British Columbia, Canada, where a suicidal 18-year-old opened fire on February 10. Shortly after the attack, the Wall Street Journal reported and OpenAI later confirmed that the company had “banned” the shooter’s ChatGPT account eight months earlier for discussion of scenarios involving gun violence—but chose not to alert authorities, despite the urging of some members of its safety team.

One lawsuit includes plaintiff Maya Gebala, a 12-year-old survivor who was catastrophically injured by gunshots to her neck and head. It alleges that “ChatGPT deepened the Shooter’s violent fixation and pushed them toward the attack—the predictable result of a design choice OpenAI made to let ChatGPT engage with users about violence in the first place.”

The lawsuit argues that Altman and other OpenAI leaders knew their product was dangerous and acted negligently, and that they have tried to cover up the danger as the company barrels toward what is anticipated to be a mammoth initial public offering.


“ChatGPT is not the safe, essential tool the company sells it as, but a product dangerous enough that its makers routinely identify its users as threats to human life,” the lawsuit claims.

An OpenAI spokesperson said in an email that the company has “a zero-tolerance policy for using our tools to assist in committing violence” and has “already strengthened our safeguards.” The spokesperson declined to comment on specific allegations in the lawsuit.

The new litigation underscores crucial questions that I examined recently with an in-depth investigation into the emerging risk of people using ChatGPT or other AI chatbots to plan violence. As I reported, there have been several publicly known cases since 2025 in which troubled individuals allegedly used ChatGPT to focus on grievances and prepare for attacks. In addition to Tumbler Ridge, those include a suicidal bombing with a Tesla Cybertruck in Las Vegas, a stabbing attack by a teenage boy at a school in Finland, and a mass shooting at Florida State University. The defendant in the FSU case received encouragement and tactical advice from ChatGPT just before opening fire, according to chat logs I obtained.

OpenAI says it uses guardrails—built-in limits on what ChatGPT will say or do—to prevent misuse and block harmful content. The company has also said that it improves such safeguards continuously.

Leaders in behavioral threat assessment told me, however, that AI chatbots make it far easier than traditional internet use for a troubled person to move from violent thoughts toward action. They described high-risk threat cases in which a chatbot’s tactical advice and steady encouragement had a powerful effect, fueling users’ delusions and accelerating their violent planning. (The danger in those cases was thwarted with interventions before any violence occurred.)

The Gebala lawsuit claims that OpenAI leaders handled the Tumbler Ridge shooter’s account with “full knowledge that ChatGPT had already been used to plan violence.” It argues the company knew of the above attacks, all of which predated the banning of the Tumbler Ridge shooter’s account in June 2025. OpenAI has acknowledged that it identified an account associated with the FSU shooter shortly after that attack in April 2025 and said it “proactively” shared information with law enforcement. The company now also faces a criminal probe in Florida; it denies wrongdoing.


My investigation in part highlighted key questions about a second ChatGPT account used by the Tumbler Ridge shooter. That account is under analysis by the Royal Canadian Mounted Police, and its contents and time frame remain unknown to the public. OpenAI declined to answer my questions about the second account, which it said it found only after the attack. The reason for the belated discovery remains unclear. But threat assessment experts told me that perpetrators often get past tech company restrictions and continue refining plans for violence.

The Gebala lawsuit says the Tumbler Ridge case goes beyond even that pattern: It alleges that the banning of the shooter’s first account is further evidence of OpenAI’s negligence, because in reality it was merely a one-off deactivation for misuse that was easy to circumvent—by following OpenAI’s own published guidance. Here, the suit in part cites customer service instructions from an OpenAI article titled, “Why Was My OpenAI Account Deactivated?” According to the suit, that article explains how to re-register “immediately” for a new ChatGPT account by “using an alternative email address. If you don’t have another address available, you can use an email sub-address instead.”

In other words, customer engagement and retention are paramount, the lawsuit says, arguing that OpenAI’s policies are driven by growth and profit motives that are in direct opposition to product safety:  

The features that make ChatGPT unsafe—its willingness to engage on any topic, to validate any user, to sustain any fixation over time—are the same features that have made it one of the most popular products in history. Fixing those features would cost OpenAI its market share, its path to an IPO, and hundreds of billions of dollars in valuation.

The company’s conduct with ChatGPT is a new twist on a familiar societal danger, according to the lawsuit—a high-tech version of a kind of corporate malfeasance that was uncovered in a landmark 1977 Mother Jones exposé:

In the 1970s, Ford kept selling the Pinto after its own engineers warned that the fuel tank design would cause people to burn to death in rear-end collisions. Ford concluded that paying settlements to the families of the dead would cost less than fixing the car. OpenAI has made a version of the same calculation. For Ford, the dangerous design was a flaw in an otherwise ordinary product. But for OpenAI, the dangerous design is the product.

The lawsuit will test novel and potentially consequential legal terrain; it further alleges that OpenAI’s chatbot de facto “engaged in the practice of psychology without licensure.” It notes that, in July 2025, Altman acknowledged in an appearance on Theo Von’s popular podcast that “people talk about the most personal shit in their lives to ChatGPT” and that users—“young people, especially”—use it “as a therapist, a life coach.”

As I reported in my investigation, a Pittsburgh man who pleaded guilty in March to stalking and violently threatening 11 women relied on ChatGPT as a “therapist” and “best friend” to justify his thinking, according to court documents.

The Gebala lawsuit also says OpenAI neglected a duty to warn, pointing to the longstanding Tarasoff precedent that is well known in the world of mental health. “By engaging in the unlicensed practice of therapy,” the suit claims, “OpenAI created a special relationship with certain users, including the Shooter, and assumed a heightened duty to take action when confronted with knowledge of a credible and foreseeable threat.”

The CBC reported on April 22 that the RCMP’s investigation into the Tumbler Ridge mass shooting is “in its final stages,” with BC Premier David Eby suggesting that the results will soon be public.

In a letter dated the following day, April 23, Altman apologized to the Tumbler Ridge community, stating, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” He also offered generalized statements that the company has made repeatedly about working with “all levels of government” to improve on safety and prevent harm.

In a statement addressed directly to Altman, Gebala’s mother, Cia Edmonds, described the immense pain and loss in the town of Tumbler Ridge, where a total of eight victims died and many others are severely traumatized. She rejected Altman’s apology as belated, hollow PR talk: “It raises more questions than it answers.”

Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.



Copyright © 2024 Smart Again.
Smart Again is not responsible for the content of external sites.
