
Who do you believe about the end of the world?

Not everyone wants to rule the world, but lately it does seem as if everyone wants to warn that the world might be ending.

On Tuesday, the Bulletin of the Atomic Scientists unveiled its annual resetting of the Doomsday Clock, which is meant to visually represent how close the organization’s experts believe the world is to ending. Reflecting a cavalcade of existential risks ranging from worsening nuclear tensions to climate change to the rise of autocracy, the hands were set to 85 seconds to midnight, four seconds closer than in 2025 and the closest the clock has ever been to striking 12.

The day before, Anthropic CEO Dario Amodei — who may as well be the field of artificial intelligence’s philosopher-king — published a 19,000-word essay entitled “The Adolescence of Technology.” His takeaway: “Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.”

Should we fail this “serious civilizational challenge,” as Amodei put it, the world might well be headed for the pitch black of midnight. (Disclosure: Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)

As I’ve said before, it’s boom times for doom times. But examining these two very different attempts at communicating existential risk — one very much a product of the mid-20th century, the other of our own uncertain moment — presents a question. Who should we listen to? The prophets shouting outside the gates? Or the high priest who also runs the temple?

The Doomsday Clock has been with us so long — it was created in 1947, just two years after the first nuclear weapon incinerated Hiroshima — that it’s easy to forget how radical it was. Not just the Clock itself, which may be one of the most iconic and effective symbols of the 20th century, but the people who made it.

The Bulletin of the Atomic Scientists was founded immediately after the war by scientists like J. Robert Oppenheimer — the very men and women who had created the bomb they now feared. That lent an unparalleled moral clarity to their warnings. At a moment of uniquely high levels of institutional trust, here were people who knew more about the workings of the bomb than anyone else, desperately telling the public that we were on a path to nuclear annihilation.

The Bulletin scientists had the benefit of reality on their side. No one, after Hiroshima and Nagasaki, could doubt the awful power of these bombs. As my colleague Josh Keating wrote earlier this week, by the late 1950s there were dozens of nuclear tests being conducted around the world each year. That nuclear weapons, especially at that moment, presented a clear and unprecedented existential risk was essentially inarguable, even by the politicians and generals building up those arsenals.

But the very thing that gave the Bulletin scientists their moral credibility — their willingness to break with the government they once served — cost them the one thing needed to end those risks: power.

As striking as the Doomsday Clock remains as a symbol, it is essentially a communication device wielded by people who have no say over the things they’re measuring. It’s prophetic speech without executive authority. When the Bulletin, as it did on Tuesday, warns that the New START treaty is expiring or that nuclear powers are modernizing their arsenals, it can’t actually do anything about it except hope policymakers — and the public — listen.

And the more diffuse those warnings become, the harder it is to be heard.

Since the end of the Cold War took nuclear war off the agenda — temporarily, at least — the calculations behind the Doomsday Clock have grown to encompass climate change, biosecurity, the degradation of US public health infrastructure, new technological risks like “mirror life,” artificial intelligence, and autocracy. All of these challenges are real, and each in their own way threatens to make life on this planet worse. But mixed together, they muddy the terrifying precision that the Clock promised. What once seemed like clockwork is revealed as guesswork, just one more warning among countless others.

Even more than most AI leaders, Amodei has frequently been compared to Oppenheimer.

Amodei, too, was a scientist first. Trained as a physicist, he did important work on the “scaling laws” that helped unlock powerful artificial intelligence, just as Oppenheimer did critical research that helped blaze the trail to the bomb. And like Oppenheimer, whose real talent lay in the organizational abilities required to run the Manhattan Project, Amodei has proven to be highly capable as a corporate leader.

And like Oppenheimer — after the war at least — Amodei hasn’t been shy about using his public position to warn in no uncertain terms about the technology he helped create. Had Oppenheimer had access to modern blogging tools, I guarantee you he would have produced something like “The Adolescence of Technology,” albeit with a bit more Sanskrit.


The difference between these figures is one of control. Oppenheimer and his fellow scientists lost control of their creation to the government and the military almost immediately, and by 1954 Oppenheimer himself had lost his security clearance. From then on, he and his colleagues would largely be voices on the outside.

Amodei, by contrast, speaks as the CEO of Anthropic, the AI company that at the moment is perhaps doing more than any other to push AI to its limits. When he spins transformative visions of AI as potentially “a country of geniuses in a datacenter,” or runs through scenarios of catastrophe ranging from AI-created bioweapons to technologically enabled mass unemployment and wealth concentration, he is speaking from within the temple of power.

It’s almost as if the strategists setting nuclear war plans were also fiddling with the hands on the Doomsday Clock. (I say “almost” because of a key distinction — while nuclear weapons promised only destruction, AI promises great benefits and terrible risks alike. Which is perhaps why you need 19,000 words to work out your thoughts about it.)

All of which leaves the question: Does the power Amodei holds to influence the direction of AI give his warnings more credibility than those of outsiders like the Bulletin scientists, or less?

The Bulletin’s model has integrity to spare, but increasingly limited relevance, especially to AI. The atomic scientists lost control of nuclear weapons the moment they worked. Amodei hasn’t lost control of AI — his company’s release decisions still matter enormously. That makes the Bulletin’s outsider position less applicable. You can’t effectively warn about AI risks from a position of pure independence because the people with the best technical insight are largely inside the companies building it.

But Amodei’s model has its own problem: The conflict of interest is structural and inescapable.

Every warning he issues comes packaged with “but we should definitely keep building.” His essay explicitly argues that stopping or substantially slowing AI development is “fundamentally untenable” — that if Anthropic doesn’t build powerful AI, someone worse will. That may be true. It may even be the best argument for why safety-conscious companies should stay in the race. But it’s also, conveniently, the argument that lets him keep doing what he’s doing, with all the immense benefits that may bring.

This is the trap Amodei himself describes: “There is so much money to be made with AI — literally trillions of dollars per year — that even the simplest measures are finding it difficult to overcome the political economy inherent in AI.”

The Doomsday Clock was designed for a world where scientists could step outside the institutions that created existential threats and speak with independent authority. We may no longer live in that world. The question is what we build to replace it — and how much time we have left to do so.

