Photo: Sam Altman, CEO of OpenAI, takes his seat before a meeting of the White House Task Force on Artificial Intelligence Education in the East Room of the White House, Thursday, Sept. 4, 2025, in Washington. (Alex Brandon/AP)
Sam Altman suggested that an investigative story describing him as someone “unconstrained by truth” with a “sociopathic lack of concern” for consequences prompted an early Friday attack on his San Francisco home.
The OpenAI CEO’s unsubstantiated implication came in a post on his personal blog published on Friday, hours after the attack. “There was an incendiary article about me a few days ago,” he wrote. Although he initially “brushed it aside,” he said he was “pissed” and now “thinking that I have underestimated the power of words and narratives.”
A 20-year-old man was arrested early Friday after allegedly throwing a Molotov cocktail at Altman’s home. No one was hurt in the incident. The same man is suspected of also threatening to burn down a building at OpenAI’s headquarters later that morning. The New Yorker investigation Altman referred to, published Monday, drew on interviews with over 100 people who had firsthand knowledge of how Altman conducted business. While a few defended him, most described him as power-hungry and manipulative.
In what appears to be an attempt to soften his image, Altman accompanied his post with a photo of his husband and son. While he acknowledged the present moment as “a time of great anxiety about AI,” he noted, “while we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.”
Altman’s use of the word “tactics” suggests he believes reports like the New Yorker investigation are intentionally dishonest and lead to violence.
The attack comes as CEOs are spending more money on personal security. According to a report by the Conference Board, a nonprofit research group, and ESGauge, an analytics platform, the top 10 percent of spenders pay an average of $1.2 million per year for full-time protection teams, armored vehicles, and threat intelligence.
As of April 11, OpenAI is hiring for four positions in corporate security, including a risk analyst, who will bring experience in “physical security, counterintelligence, and risk management,” and security leaders in San Francisco, Washington, D.C., and at data centers.
But security goes far beyond personal safety when considering the technology OpenAI produces. The attack could be used to justify building a surveillance dragnet. The company secured a Pentagon contract in February and announced the deal in a company release titled “Our agreement with the Department of War.”
That same day, the Trump administration dropped Anthropic, a rival AI company, from federal contracts and blacklisted it, citing the company’s refusal to remove safety restrictions on its AI model that prevent the government from using it for mass surveillance or autonomous weapons. So while OpenAI claimed in its February announcement that it would ensure its tools “will not be used for domestic surveillance,” how could it have successfully secured the contract?
If Altman truly wants to “de-escalate the rhetoric and tactics,” he must reckon with running a company that consumes massive energy resources and harms not only those abroad in the name of national security but our communities at home.