Smart Again
  • Home
  • Trending
  • Politics
  • Law & Defense
  • Community
  • Contact Us

A wave of quittings reveals the AI industry’s gloom and doom

March 5, 2026
in Politics
Reading Time: 8 mins read


Mother Jones illustration; National Gallery of Norway/Wikimedia; Getty


On Saturday, the United States and Israeli governments unleashed stealth bombers, drones, and missiles on Iran, citing a rationale of preventing Iran from developing and deploying catastrophic nuclear weapons.  

“They’ve rejected every opportunity to renounce their nuclear ambitions, and we can’t take it anymore,” President Donald Trump would later say in a six-minute address to the nation.

But in their quest to prevent Iran from developing an advanced weapon, the US and Israel were also deploying one of their own: artificial intelligence. 

As the Wall Street Journal reported the day of the strikes, the US military used Anthropic’s large language model, Claude, for “intelligence assessments, target identification and simulating battle scenarios” to prepare its attack. 

But in the months leading up to the military action, the Trump administration and Anthropic had actually been at an impasse over how the Pentagon could use Claude, with Anthropic raising concerns about its technology being deployed for mass surveillance or to power fully autonomous weapons. Indeed, a few hours before approving the strikes, Trump posted to Truth Social that because of the disagreement, the US would “IMMEDIATELY CEASE… all use of Anthropic’s technology,” after a six-month phase-out.

Even before Trump’s pronouncement targeting Anthropic, OpenAI and xAI had raced to fill the void and take over the vendor’s lucrative military contracts by agreeing that their products could be used in “all lawful use” cases—prompting outcry among both the public and the companies’ own staff, who feared that wording was too vague. 


In response to that pushback, Sam Altman, CEO of OpenAI, wrote an internal memo, which he later posted on X, saying the policy change had been too rushed. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” it read.

If Altman’s about-face seemed candid, it was also emblematic of a broader problem among AI sector leaders: Their job is to chase the dollars spun off by a technology that could plausibly lead to widespread doom.

Despite Anthropic’s red lines over military applications, its leaders also can’t help engaging in opportunism at the cost of caution. Just last month, Time reported that Anthropic would drop the core of its safety policy, which promised to only advance AI systems the company could guarantee were safe. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead,” Anthropic’s chief science officer Jared Kaplan told the magazine. 

In other words, if their competitors jumped off a bridge, they probably would too.

“They think that they can do this horrible task a little more safely than the next guy. And presumably, at least one of them is right,” says Nate Soares, president of the Machine Intelligence Research Institute and co-author of the 2025 best-selling book, If Anyone Builds It, Everyone Dies. “But they’re not comporting themselves with the gravity of this horrible situation.” 

Observers in the AI industry have long been warning of—and hyping up—the life-altering, world-ending potential of their flagship products. A few weeks before the attacks on Iran, an essay posted to X went viral. Titled “Something Big Is Happening,” it was about the future of the artificial intelligence industry, and, fittingly, seemed to have been written with generous help from AI. The author was Matt Shumer, who runs an “AI personal assistant” company. 

Shumer’s point was, effectively, that artificial intelligence is taking over the world, that the technology has grown exponentially in just the last few months, and that those who don’t embrace it immediately will be left behind. 


“The people building this technology are simultaneously more excited and more frightened than anyone else on the planet,” Shumer wrote. “They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.” 

His breathless post, which has so far garnered 85 million views, came at virtually the same time as the very public resignations of a dozen industry AI workers, with many announcing their deep ethical reservations about the field on their way out. (Like Shumer’s essay, these resignations were posted on X, the place where the snake most energetically eats its own tail.) 

The first to quit in recent weeks was Anthropic researcher Mrinank Sharma, who announced on February 9 that he’d be leaving his job, and the industry, to “explore a poetry degree and devote myself to the practice of courageous speech.” 

“I continuously find myself reckoning with our situation,” Sharma wrote. “The world is in peril, and not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this moment.”

Sharma was followed by OpenAI researcher Zoë Hitzig, who announced she was leaving her job on February 11 via a New York Times op-ed that cited her concerns about her employer’s decision to run ads in ChatGPT. Hitzig wrote that relying on advertising revenue could incentivize a downward spiral of changes to increase user engagement, which could mean “manipulating” people who already use ChatGPT to talk about deeply personal questions about health, relationships, faith, and the afterlife. 

Then researcher Hieu Pham announced he was leaving OpenAI in late February, having posted a dire warning just a few days before: “We have summoned an extremely powerful creature. If let loose, it would grow much faster than whatever we can do to tame it.”

“I cannot believe I would say this one day, but I am burnt out” after less than a year at OpenAI, wrote Pham, who had previously worked for xAI. “All the mental health deteriorating that I used to scoff at is real, miserable, scary, and dangerous.”

Over a single week in February, at least 11 engineers and two co-founders quit xAI, owned by Elon Musk, whose flagship AI product Grok has been embroiled in an ongoing scandal about its creation of non-consensual sexualized images of women and girls. (As TechCrunch reported, Musk has implied he forced some of those resignations; it’s unclear whether that’s true.) 

Collectively, the resignations and nebulous warnings that have accompanied them suggest existential terror has seized the AI industry. The people involved are actively debating whether they are building the future of humankind or paving the way for its downfall. Their agonized trepidation was being aired in an extremely public way, even before the use of Anthropic technology in the Iran attacks gave it new urgency.

“People in Silicon Valley are spooked. They know, by their own admissions, that they are toying with an extremely dangerous technology. It’s not a big surprise that conscientious people regularly leave sounding shaken,” explains Soares. 

This is concerning, says Margaret Mitchell, chief ethics scientist at the open-source AI company Hugging Face, because those who become worried about the ethical, environmental, and human impacts of AI tend to be the first to depart the companies developing it. They leave behind people who are less bothered by the way their products might be affecting society.

“For some people, their ability to feel and experience harm to others makes it impossible for them to continue to do work that’s complicit in harming those others,” Mitchell says. “For some people it doesn’t affect them as much.” 


To be clear, it’s not just the AI industry that appears frightened of AI; almost every day heralds a new apocalyptic headline or study about the ways that it could destroy various industries or plunder the entire economy. Consider the so-called 2028 GLOBAL INTELLIGENCE CRISIS REPORT, which roiled the stock market last month. The fictionalized future economic analysis, published by the No. 1 finance Substack, Citrini Research, offered a grim prophecy in which millions of white-collar workers have lost their jobs to AI bots, which—unlike humans—don’t have mortgages to pay, families to feed, or vacations to take, a scenario it describes as “more economic pandemic than economic panacea.”

One of the biggest issues in the AI industry is much more pedestrian than the cri de coeur resignations indicate, says Katharine Trendacosta, director of policy and advocacy at the Electronic Frontier Foundation: none of these companies has exactly figured out how to make money. 

Tech companies investing heavily in AI have, as she puts it, “hit the ceiling on the places where it’s useful and easy to sell, and now they’ve moved to the classic Silicon Valley thing of ‘we have to be the only one doing it.’” Hype can bring in the venture capital it takes to squeeze out competitors, Trendacosta explains, so huge—sometimes apocalyptic—claims about what a product, or AI technology generally, can do are incentivized. 

It’s also common for AI companies to claim that their competitors are dangerous, sloppy, or technologically unsound. “Most of today’s AI companies exist because the people running them don’t trust any of the other AI executives,” Soares says. “None of the top executives think any of the other executives can do the job properly.”

Anthropic is the most obvious example. Its founders were senior leaders of OpenAI who left over “differences in vision,” including safety. Another former senior leader at OpenAI, Ilya Sutskever, launched Safe Superintelligence Inc. (SSI) in 2024, aiming to avoid “short-term commercial pressures.”

The process can be cyclical. Just look at what OpenAI’s Altman said regarding the new deal his company signed with the Pentagon after Anthropic blanched: “We think our agreement has more guardrails than any previous agreement for classified AI deployments,” he wrote.

These assurances of a “safer” product seem increasingly threadbare—but Altman has continued to make them. In the memo Altman tweeted on Monday, he tried to tamp down public concern about the Trump administration using his company’s product by assuring employees that OpenAI’s systems would not be “intentionally used for domestic surveillance of U.S. persons and nationals,” and would not be used by “Department of War intelligence agencies” like the NSA. 

“There are many things the technology just isn’t ready for,” the memo read, “and many areas we don’t yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods.”

But there’s a limit to these assurances. According to a report by CNBC, Altman has said something different in internal meetings, telling employees that the company and its workers don’t “get to make operational decisions” about how their technology is used. 

“So maybe you think the Iran strike was good and the Venezuela invasion was bad,” Altman said, according to CNBC. “You don’t get to weigh in on that.”


