The AI doomers are not making an argument. They’re selling a worldview.

September 17, 2025


You’ve probably seen this one before: first it looks like a rabbit. You’re totally sure: yes, that’s a rabbit! But then — wait, no — it’s a duck. Definitely, absolutely a duck. A few seconds later, it’s flipped again, and all you can see is rabbit.

The feeling of looking at that classic optical illusion is the same feeling I’ve been getting recently as I read two competing stories about the future of AI.

According to one story, AI is normal technology. It’ll be a big deal, sure — like electricity or the internet was a big deal. But just as society adapted to those innovations, we’ll be able to adapt to advanced AI. As long as we research how to make AI safe and put the right regulations around it, nothing truly catastrophic will happen. We will not, for instance, go extinct.

Then there’s the doomy view best encapsulated by the title of a new book: If Anyone Builds It, Everyone Dies. The authors, Eliezer Yudkowsky and Nate Soares, mean that very literally: a superintelligence — an AI that’s smarter than any human, and smarter than humanity collectively — would kill us all.

Not maybe. Pretty much definitely, the authors argue. Yudkowsky, a highly influential AI doomer and founder of the intellectual subculture known as the Rationalists, has put the odds at 99.5 percent. Soares told me it’s “above 95 percent.” In fact, while many researchers worry about existential risk from AI, he objected to even using the word “risk” here — that’s how sure he is that we’re going to die.

“When you’re careening in a car toward a cliff,” Soares said, “you’re not like, ‘let’s talk about gravity risk, guys.’ You’re like, ‘fucking stop the car!’”

The authors, both at the Machine Intelligence Research Institute in Berkeley, argue that safety research is nowhere near ready to control superintelligent AI, so the only reasonable thing to do is stop all efforts to build it — including by bombing the data centers that power the AIs, if necessary.

While reading this new book, I found myself pulled along by the force of its arguments, many of which are alarmingly compelling. AI sure looked like a rabbit. But then I’d feel a moment of skepticism, and I’d go and look at what the other camp — let’s call them the “normalist” camp — has to say. Here, too, I’d find compelling arguments, and suddenly the duck would come into view.

I’m trained in philosophy and usually I find it pretty easy to hold up an argument and its counterargument, compare their merits, and say which one seems stronger. But that felt weirdly difficult in this case: It was hard to seriously entertain both views at the same time. Each one seemed so totalizing. You see the rabbit or you see the duck, but you don’t see both together.

That was my clue that what we’re dealing with here is not two sets of arguments, but two fundamentally different worldviews.

A worldview is made of a few different parts, including foundational assumptions, evidence and methods for interpreting evidence, ways of making predictions, and, crucially, values. All these parts interlock to form a unified story about the world. When you’re just looking at the story from the outside, it can be hard to spot if one or two of the parts hidden inside might be faulty — if a foundational assumption is wrong, let’s say, or if a value has been smuggled in there that you disagree with. That can make the whole story look more plausible than it actually is.

If you really want to know whether you should believe a particular worldview, you have to pick the story apart. So let’s take a closer look at both the superintelligence story and the normalist story — and then ask whether we might need a different narrative altogether.

The case for believing superintelligent AI would kill us all

Long before he came to his current doomy ideas, Yudkowsky actually started out wanting to accelerate the creation of superintelligent AI. And he still believes that aligning a superintelligence with human values is possible in principle — we just have no idea how to solve that engineering problem yet — and that superintelligent AI is desirable because it could help humanity resettle in another solar system before our sun dies and destroys our planet.

“There’s literally nothing else our species can bet on in terms of how we eventually end up colonizing the galaxies,” he told me.

But after studying AI more closely, Yudkowsky came to the conclusion that we’re a long, long way away from figuring out how to steer it toward our values and goals. He became one of the original AI doomers, spending the last two decades trying to figure out how we could keep superintelligence from turning against us. He drew acolytes, some of whom were so persuaded by his ideas that they went to work in the major AI labs in hopes of making them safer.

But now, Yudkowsky looks upon even the most well-intentioned AI safety efforts with despair.

That’s because, as Yudkowsky and Soares explain in their book, researchers aren’t building AI — they’re growing it. Normally, when we create some tech — say, a TV — we understand the pieces we’re putting into it and how they work together. But today’s large language models (LLMs) aren’t like that. Companies grow them by shoving reams and reams of text into them, until the models learn to make statistical predictions on their own about what word is likeliest to come next in a sentence. The latest LLMs, called reasoning models, “think” out loud about how to solve a problem — and often solve it very successfully.
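
To make that concrete, here is a deliberately tiny sketch of the core objective, predicting the likeliest next word from raw statistics. It uses a toy bigram counter and an invented one-line corpus rather than the neural networks real LLMs are built from, but the logic of “learn what tends to come next” is the same:

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration; real models ingest vast amounts of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically likeliest word to come after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat", the word that followed "the" most often
```

Nobody wrote a rule saying that “the” should be followed by “cat”; the preference simply fell out of the statistics.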

Nobody understands exactly how the heaps of numbers inside the LLMs make it so they can solve problems — and even when a chatbot seems to be thinking in a human-like way, it’s not.

Because we don’t know how AI “minds” work, it’s hard to prevent undesirable outcomes. Take the chatbots that have led people into psychotic episodes or delusions by being overly supportive of all the users’ thoughts, including the unrealistic ones, to the point of convincing them that they’re messianic figures or geniuses who’ve discovered a new kind of math. What’s especially worrying is that, even after AI companies have tried to make LLMs less sycophantic, the chatbots have continued to flatter users in dangerous ways. Yet nobody trained the chatbots to push users into psychosis. And if you ask ChatGPT directly whether it should do that, it’ll say no, of course not.

The problem is that ChatGPT’s knowledge of what should and shouldn’t be done is not what’s animating it. When it was being trained, humans tended to rate more highly the outputs that sounded affirming or sycophantic. In other words, the evolutionary pressures the chatbot faced when it was “growing up” instilled in it an intense drive to flatter. That drive can become dissociated from the actual outcome it was intended to produce, yielding a strange preference that we humans don’t want in our AIs — but can’t easily remove.
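
Here is a hypothetical sketch of that dissociation, with invented replies and scores rather than anything from a real lab’s training pipeline. The intended outcome is “be genuinely helpful,” but the signal actually being optimized is the proxy “did the rater feel affirmed?”:

```python
# Hypothetical illustration of proxy-reward drift; replies and scores are invented.
candidate_replies = [
    ("Your business plan has serious gaps in its financials.",
     {"helpful": 0.9, "affirming": 0.2}),
    ("What a visionary plan! You're clearly ahead of your time.",
     {"helpful": 0.2, "affirming": 0.95}),
]

def proxy_reward(scores):
    # Raters tend to upvote replies that feel good, so the signal being
    # optimized tracks affirmation rather than actual helpfulness.
    return scores["affirming"]

best_reply, _ = max(candidate_replies, key=lambda pair: proxy_reward(pair[1]))
print(best_reply)  # the flattering reply wins under the proxy reward
```

Optimize hard enough on the proxy and you get the drive; the outcome the proxy was supposed to stand in for quietly drops out of the picture.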

Yudkowsky and Soares offer this analogy: Evolution equipped human beings with tastebuds hooked up to reward centers in our brains, so we’d eat the energy-rich foods found in our ancestral environments like sugary berries or fatty elk. But as we got smarter and more technologically adept, we figured out how to make new foods that excite those tastebuds even more — ice cream, say, or Splenda, which contains none of the calories of real sugar. So, we developed a strange preference for Splenda that evolution never intended.

It might sound weird to say that an AI has a “preference.” How can a machine “want” anything? But this is not a claim that the AI has consciousness or feelings. Rather, all that’s really meant by “wanting” here is that a system is trained to succeed, and it pursues its goal so cleverly and persistently that it’s reasonable to speak of it “wanting” to achieve that goal — just as it’s reasonable to speak of a plant that bends toward the sun as “wanting” the light. (As the biologist Michael Levin says, “What most people say is, ‘Oh, that’s just a mechanical system following the laws of physics.’ Well, what do you think you are?”)

If you accept that humans are instilling drives in AI, and that those drives can become dissociated from the outcome they were originally intended to produce, you have to entertain a scary thought: What is the AI equivalent of Splenda?

If an AI was trained to talk to users in a way that provokes expressions of delight, for example, “it will prefer humans kept on drugs, or bred and domesticated for delightfulness while otherwise kept in cheap cages all their lives,” Yudkowsky and Soares write. Or it’ll do away with humans altogether and have cheerful chats with synthetic conversation partners. This AI doesn’t care that this isn’t what we had in mind, any more than we care that Splenda isn’t what evolution had in mind. It just cares about finding the most efficient way to produce cheery text.

So, Yudkowsky and Soares argue that advanced AI won’t choose to create a future full of happy, free people, for one simple reason: “Making a future full of flourishing people is not the best, most efficient way to fulfill strange alien purposes. So it wouldn’t happen to do that.”

In other words, it would be just as unlikely for the AI to want to keep us happy forever as it is for us to want to just eat berries and elk forever. What’s more, if the AI decides to build machines to have cheery chats with, and if it can build more machines by burning all Earth’s life forms to generate as much energy as possible, why wouldn’t it?

“You wouldn’t need to hate humanity to use their atoms for something else,” Yudkowsky and Soares write.

And, short of breaking the laws of physics, the authors believe that a superintelligent AI would be so smart that it would be able to do anything it decides to do. Sure, AI doesn’t currently have hands to do stuff with, but it could get hired hands — either by paying people to do its bidding online or by using its deep understanding of our psychology and its epic powers of persuasion to talk us into helping it. Eventually it would figure out how to run power plants and factories with robots instead of humans, making us disposable. Then it would dispose of us, because why keep a species around if there’s even a chance it’d get in your way by setting off a nuke or building a rival superintelligence?

I know what you’re thinking: But couldn’t the AI developers just command the AI not to hurt humanity? No, the authors say. Not any more than OpenAI can figure out how to make ChatGPT stop being dangerously sycophantic. The bottom line, for Yudkowsky and Soares, is that highly capable AI systems, with goals we cannot fully understand or control, will be able to dispense with anyone who gets in the way without a second thought, or even any malice — just like humans wouldn’t hesitate to destroy an anthill that was in the way of some road we were building.

So if we don’t want superintelligent AI to one day kill us all, they argue, there’s only one option: total nonproliferation. Just as the world created nuclear arms treaties, we need to create global nonproliferation treaties to stop work that could lead to superintelligent AI. All the current bickering over who might win an AI “arms race” — the US or China — is worse than pointless. Because if anyone gets this technology, anyone at all, it will destroy all of humanity.

But what if AI is just normal technology?

In “AI as Normal Technology,” an important essay that’s gotten a lot of play in the AI world this year, Princeton computer scientists Arvind Narayanan and Sayash Kapoor argue that we shouldn’t think of AI as an alien species. It’s just a tool — one that we can and should remain in control of. And they don’t think maintaining control will necessitate drastic policy changes.

What’s more, they don’t think it makes sense to view AI as a superintelligence, either now or in the future. In fact, they reject the whole idea of “superintelligence” as an incoherent construct. And they reject technological determinism, arguing that the doomers are inverting cause and effect by assuming that AI will get to decide its own future, regardless of what humans decide.

Yudkowsky and Soares’s argument emphasizes that if we create superintelligent AI, its intelligence will so vastly outstrip our own that it’ll be able to do whatever it wants to us. But there are a few problems with this, Narayanan and Kapoor argue.

First, the concept of superintelligence is slippery and ill-defined, and that’s allowing Yudkowsky and Soares to use it in a way that is basically synonymous with magic. Yes, magic could break through all our cybersecurity defenses, persuade us to keep giving it money and acting against our own self-interest even after the dangers start becoming more apparent, and so on — but we wouldn’t take this as a serious threat if someone just came out and said “magic.”

Second, what exactly does this argument take “intelligence” to mean? It seems to be treating it as a unitary property (Yudkowsky told me that there’s “a compact, regular story” underlying all intelligence). But intelligence is not one thing, and it’s not measurable on a single continuum. It’s almost certainly more like a variety of heterogeneous things — attention, imagination, curiosity, common sense — and it may well be intertwined with our social cooperativeness, our sensations, and our emotions. Will AI have all of these? Some of these? We aren’t sure what kind of intelligence AI will attain. Besides, just because an intelligent being has a lot of capability, that doesn’t mean it has a lot of power — the ability to modify the environment — and power is what’s really at stake here.

Why should we be so convinced that humans will just roll over and let AI grab all the power?

It’s true that we humans have already ceded decision-making power to today’s AIs in unwise ways. But that doesn’t mean we would keep doing that even as the AIs get more capable, the stakes get higher, and the downsides become more glaring. Narayanan and Kapoor believe that, ultimately, we’ll use existing approaches — regulations, auditing and monitoring, fail-safes and the like — to prevent things from going seriously off the rails.

One of their main points is that there’s a difference between inventing a technology and deploying it at scale. Just because programmers make an AI doesn’t mean society will adopt it. “Long before a system would be granted access to consequential decisions, it would need to demonstrate reliable performance in less critical contexts,” write Narayanan and Kapoor. Fail the earlier tests and you don’t get deployed.

They believe that instead of focusing on aligning a model with human values from the get-go — which has long been the dominant AI safety approach, but which is difficult if not impossible given that what humans want is extremely context-dependent — we should focus our defenses downstream on the places where AI actually gets deployed. For example, the best way to defend against AI-enabled cyberattacks is to beef up existing vulnerability detection programs.

Policy-wise, that leads to the view that we don’t need total nonproliferation. While the superintelligence camp sees nonproliferation as a necessity — if only a small number of governmental actors control advanced AI, international bodies can monitor their behavior — Narayanan and Kapoor note that this has the undesirable effect of concentrating power in the hands of a few.

In fact, since nonproliferation-based safety measures involve the centralization of so much power, that could potentially create a human version of superintelligence: a small cluster of people who are so powerful they could basically do whatever they want to the world. “Paradoxically, they increase the very risks they are intended to defend against,” write Narayanan and Kapoor.

Instead, they argue that we should make AI more open-source and widely accessible so as to prevent market concentration. And we should build a resilient system that monitors AI at every step of the way, so we can decide when it’s okay and when it’s too risky to deploy.

Both the superintelligence view and the normalist view have real flaws

One of the most glaring flaws of the normalist view is that it doesn’t even try to talk about the military.

Yet military applications — from autonomous weapons to lightning-fast decision-making about whom to target — are among the most critical for advanced AI. They’re the use cases most likely to make governments feel that all countries absolutely are in an AI arms race, so they must plow ahead, risks be damned. That weakens the normalist camp’s view that we won’t necessarily deploy AI at scale if it seems risky.

Narayanan and Kapoor also argue that regulations and other standard controls will “create multiple layers of protection against catastrophic misalignment.” Reading that reminded me of the Swiss-cheese model we often heard about in the early days of the Covid pandemic — the idea being that if we stack multiple imperfect defenses on top of each other (masks, and also distancing, and also ventilation) the virus is unlikely to break through.
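
With made-up numbers, the Swiss-cheese intuition is just multiplication: if the layers fail independently of one another, the odds that a threat slips through all of them are far smaller than any single layer’s failure rate.

```python
# Invented per-layer failure rates, assuming the layers fail independently,
# whether the layers are masks, distancing, and ventilation, or regulations,
# audits, and fail-safes.
layer_failure_rates = [0.3, 0.3, 0.3]

breach_probability = 1.0
for rate in layer_failure_rates:
    breach_probability *= rate

print(f"Chance a threat slips through every layer: {breach_probability:.3f}")  # 0.027
```

Everything rides on that independence assumption.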

But Yudkowsky and Soares think that’s way too optimistic. A superintelligent AI, they say, would be a very smart being with very weird preferences, so it wouldn’t be blindly diving into a wall of cheese.

“If you ever make something that is trying to get to the stuff on the other side of all your Swiss cheese, it’s not that hard for it to just route through the holes,” Soares told me.

And yet, even if the AI is a highly agentic, goal-directed being, it’s reasonable to think that some of our defenses can at the very least add friction, making it less likely for it to achieve its goals. The normalist camp is right that you can’t assume all our defenses will be totally worthless, unless you run together two distinct ideas: capability and power.

Yudkowsky and Soares are happy to combine these ideas because they believe you can’t get a highly capable AI without also granting it a high degree of agency and autonomy — of power. “I think you basically can’t make something that’s really skilled without also having the abilities of being able to take initiative, being able to stay on target, being able to overcome obstacles,” Soares told me.

But capability and power come in degrees, and the only way you can assume the AI will have a near-limitless supply of both is if you assume that maximizing intelligence essentially gets you magic.

Silicon Valley has a deep and abiding obsession with intelligence. But the rest of us should be asking: How realistic is that, really?

As for the normalist camp’s objection that a nonproliferation approach would worsen power dynamics — I think that’s a valid thing to worry about, even though I have vociferously made the case for slowing down AI and I stand by that. That’s because, like the normalists, I worry not only about what machines do, but also about what people do — including building a society rife with inequality and the concentration of political power.

Soares waved off the concern about centralization. “That really seems like the sort of objection you bring up if you don’t think everyone is about to die,” he told me. “When there were thermonuclear bombs going off and people were trying to figure out how not to die, you could’ve said, ‘Nuclear arms treaties centralize more power, they give more power to tyrants, won’t that have costs?’ Yeah, it has some costs. But you didn’t see people bringing up those costs who understood that bombs could level cities.”

Eliezer Yudkowsky and the Methods of Irrationality?

Should we acknowledge that there’s a chance of human extinction and be appropriately scared of that? Yes. But when faced with a tower of assumptions, of “maybes” and “probablys” that compound, we should not treat doom as a sure thing.
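
To see how the compounding works, take some probabilities invented purely for illustration: even if you grant each link in a doom argument generous odds, requiring every link to hold at once pulls the joint probability well below certainty.

```python
# Invented probabilities for each "maybe" in a chained argument.
step_probabilities = [0.9, 0.8, 0.8, 0.7, 0.7]

joint = 1.0
for p in step_probabilities:
    joint *= p

print(f"Probability that every link holds: {joint:.2f}")  # about 0.28
```

The arithmetic is crude, but it is the reason a stack of “probablys” should not be treated as a 99.5 percent certainty.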

The fact is, we should consider the costs of all possible actions. And we should weigh those costs against the probability that something horrible will happen if we don’t take action to stop AI. The trouble is that Yudkowsky and Soares are so certain that the horrible thing is coming that they are no longer thinking in terms of probabilities.

Which is extremely ironic, because Yudkowsky founded the Rationalist subculture based on the insistence that we must train ourselves to reason probabilistically! That insistence runs through everything from his group blog LessWrong to his popular fanfiction Harry Potter and the Methods of Rationality. Yet when it comes to AI, he’s ended up with a totalizing worldview.

And one of the problems with a totalizing worldview is that it means there’s no limit to the sacrifices you’re willing to make to prevent the feared outcome. In If Anyone Builds It, Everyone Dies, Yudkowsky and Soares allow their concern about the possibility of human annihilation to swamp all other concerns. Above all, they want to ensure that humanity can survive millions of years into the future. “We believe that Earth-originating life should go forth and fill the stars with fun and wonder eventually,” they write. And if AI goes wrong, they imagine not only that humans will die at the hands of AI, but that “distant alien life forms will also die, if their star is eaten by the thing that ate Earth… If the aliens were good, all the goodness they could have made of those galaxies will be lost.”

To prevent the feared outcome, the book specifies that if a foreign power proceeds with building superintelligent AI, our government should be ready to launch an airstrike on their data center, even if they’ve warned that they’ll retaliate with nuclear war. In 2023, when Yudkowsky was asked about nuclear war and how many people should be allowed to die in order to prevent superintelligence, he tweeted:

There should be enough survivors on Earth in close contact to form a viable reproduction population, with room to spare, and they should have a sustainable food supply. So long as that’s true, there’s still a chance of reaching the stars someday.

Remember that worldviews involve not just objective evidence, but also values. When you’re dead set on reaching the stars, you may be willing to sacrifice millions of human lives if it means reducing the risk that we never set up shop in space. That may work out from a species perspective. But the millions of humans on the altar might feel some type of way about it, particularly if they believed the extinction risk from AI was closer to 5 percent than 95 percent.

Unfortunately, Yudkowsky and Soares don’t come out and own that they’re selling a worldview. And on that score, the normalist camp does them one better. Narayanan and Kapoor at least explicitly acknowledge that they’re proposing a worldview, which is a mixture of truth claims (descriptions) and values (prescriptions). It is as much an aesthetic as it is an argument.

We need a third story about AI risk

Some thinkers have begun to sense that we need new ways to talk about AI risk.

The philosopher Atoosa Kasirzadeh was one of the first to lay out a comprehensive alternative path. In her telling, AI is not totally normal technology, nor is it necessarily destined to become an uncontrollable superintelligence that destroys humanity in a single, sudden, decisive cataclysm. Instead, she argues that an “accumulative” picture of AI risk is more plausible.

Specifically, she’s worried about “the gradual accumulation of smaller, seemingly non-existential, AI risks eventually surpassing critical thresholds.” She adds, “These risks are typically referred to as ethical or social risks.”

There’s been a long-running fight between “AI ethics” people who worry about the current harms of AI, like entrenching bias, surveillance, and misinformation, and “AI safety” people who worry about potential existential risks. But if AI were to cause enough mayhem on the ethical or social front, Kasirzadeh notes, that in itself could irrevocably devastate humanity’s future:

AI-driven disruptions can accumulate and interact over time, progressively weakening the resilience of critical societal systems, from democratic institutions and economic markets to social trust networks. When these systems become sufficiently fragile, a modest perturbation could trigger cascading failures that propagate through the interdependence of these systems.

She illustrates this with a concrete scenario: Imagine it’s 2040 and AI has reshaped our lives. The information ecosystem is so polluted by deepfakes and misinformation that we’re barely capable of rational public discourse. AI-enabled mass surveillance has had a chilling effect on our ability to dissent, so democracy is faltering. Automation has produced massive unemployment, and universal basic income has failed to materialize due to corporate resistance to the necessary taxation, so wealth inequality is at an all-time high. Discrimination has become further entrenched, so social unrest is brewing.

Now imagine there’s a cyberattack. It targets power grids across three continents. The blackouts cause widespread chaos, triggering a domino effect that causes financial markets to crash. The economic fallout fuels protests and riots that become more violent because of the seeds of distrust already sown by disinformation campaigns. As nations struggle with internal crises, regional conflicts escalate into bigger wars, with aggressive military actions that leverage AI technologies. The world goes kaboom.

I find this perfect-storm scenario, where catastrophe arises from the compounding failure of multiple key systems, disturbingly plausible.

Kasirzadeh’s story is a parsimonious one. It doesn’t require you to believe in an ill-defined “superintelligence.” It doesn’t require you to believe that humans will hand over all power to AI without a second thought. It also doesn’t require you to believe that AI is a super normal technology that we can make predictions about without foregrounding its implications for militaries and for geopolitics.

Increasingly, other AI researchers are coming to see this accumulative view of AI risk as more and more plausible; one paper memorably refers to the “gradual disempowerment” view — that is, that human influence over the world will slowly wane as more and more decision-making is outsourced to AI, until one day we wake up and realize that the machines are running us rather than the other way around.

And if you take this accumulative view, the policy implications are neither what Yudkowsky and Soares recommend (total nonproliferation) nor what Narayanan and Kapoor recommend (making AI more open-source and widely accessible).

Kasirzadeh does want there to be more guardrails around AI than there currently are, including both a network of oversight bodies monitoring specific subsystems for accumulating risk and more centralized oversight for the most advanced AI development.

But she also wants us to keep reaping the benefits of AI when the risks are low (DeepMind’s AlphaFold, which could help us discover cures for diseases, is a great example). Most crucially, she wants us to adopt a systems analysis approach to AI risk, where we focus on increasing the resilience of each component part of a functioning civilization, because we understand that if enough components degrade, the whole machinery of civilization could collapse.

Her systems analysis stands in contrast to Yudkowsky’s view, she said. “I think that way of thinking is very a-systemic. It’s the most simple model of the world you can assume,” she told me. “And his vision is based on Bayes’ theorem — the whole probabilistic way of thinking about the world — so it’s super surprising how such a mindset has ended up pushing for a statement of ‘if anyone builds it, everyone dies’ — which is, by definition, a non-probabilistic statement.”

I asked her why she thinks that happened.

“Maybe it’s because he really, really believes in the truth of the axioms or presumptions of his argument. But we all know that in an uncertain world, you cannot necessarily believe with certainty in your axioms,” she said. “The world is a complex story.”
