Friday, July 18, 2025
Smart Again
You can get unfathomably rich building AI. Should you?

July 18, 2025
in Trending
It’s a good time to be a highly in-demand AI engineer. To lure leading researchers away from OpenAI and other competitors, Meta has reportedly offered pay packages totaling more than $100 million. Top AI engineers are now being compensated like football superstars.

Few people will ever have to grapple with the question of whether to go work for Mark Zuckerberg’s “superintelligence” venture in exchange for enough money to never have to work again. (Bloomberg columnist Matt Levine recently pointed out that this is kind of Zuckerberg’s fundamental challenge: If you pay someone enough to retire after a single month, they might well just quit after a single month, right? You need some kind of elaborate compensation structure to make sure they can get unfathomably rich without simply retiring.)

Most of us can only dream of having that problem. But many of us have occasionally had to navigate the question of whether to take on an ethically dubious job (Denying insurance claims? Shilling cryptocurrency? Making mobile games more habit-forming?) to pay the bills.

For those working in AI, that ethical dilemma is supercharged to the point of absurdity. AI is a ludicrously high-stakes technology — both for good and for ill — with leaders in the field warning that it might kill us all. A small number of people talented enough to bring about superintelligent AI can dramatically alter the technology’s trajectory. Is it even possible for them to do so ethically?

AI is going to be a really big deal

On the one hand, leading AI companies offer workers the potential to earn unfathomable riches and also contribute to very meaningful social good — including productivity-increasing tools that can accelerate medical breakthroughs and technological discovery, and make it possible for more people to code, design, and do any other work that can be done on a computer.

On the other hand, well, it’s hard for me to argue that the “Waifu engineer” that xAI is now hiring for — a role that will be responsible for making Grok’s risqué anime girl “companion” AI even more habit-forming — is of any social benefit whatsoever, and I in fact worry that the rise of such bots will be to the lasting detriment of society. I’m also not thrilled about the documented cases of ChatGPT encouraging delusional beliefs in vulnerable users with mental illness.

Much more worryingly, the researchers racing to build powerful AI “agents” — systems that can independently write code, make purchases online, interact with people, and hire subcontractors for tasks — are running into plenty of signs that those AIs might intentionally deceive humans and even take dramatic and hostile action against us. In tests, AIs have tried to blackmail their creators or send a copy of themselves to servers where they can operate more freely.

For now, AIs only exhibit that behavior when given precisely engineered prompts designed to push them to their limits. But with increasingly huge numbers of AI agents populating the world, anything that can happen under the right circumstances, however rare, will likely happen sometimes.

Over the past few years, the consensus among AI experts has moved from “hostile AIs trying to kill us is completely implausible” to “hostile AIs only try to kill us in carefully designed scenarios.” Bernie Sanders — not exactly a tech hype man — is now the latest politician to warn that as independent AIs become more powerful, they might take power from humans. It’s a “doomsday scenario,” as he called it, but it’s hardly a far-fetched one anymore.

And whether or not the AIs themselves ever decide to kill or harm us, they might fall into the hands of people who do. Experts worry that AI will make it much easier both for rogue individuals to engineer plagues or plan acts of mass violence, and for states to achieve heights of surveillance over their citizens that they have long dreamed of but never before been able to achieve.


In principle, a lot of these risks could be mitigated if labs designed and adhered to rock-solid safety plans, responding swiftly to signs of scary behavior among AIs in the wild. Google, OpenAI, and Anthropic do have safety plans, which don’t seem fully adequate to me but which are a lot better than nothing. But in practice, mitigation often falls by the wayside in the face of intense competition between AI labs. Several labs have weakened their safety plans as their models came close to meeting pre-specified performance thresholds. Meanwhile, xAI, the creator of Grok, is pushing releases with no apparent safety planning whatsoever.

Worse, even labs that start out deeply and sincerely committed to ensuring AI is developed responsibly have often changed course later because of the enormous financial incentives in the field. That means that even if you take a job at Meta, OpenAI, or Anthropic with the best of intentions, all of your effort toward building a good AI outcome could be redirected toward something else entirely.

So should you take the job?

I’ve been watching this industry evolve for seven years now. Although I’m generally a techno-optimist who wants to see humanity design and invent new things, my optimism has been tempered by witnessing AI companies openly admitting their products might kill us all, then racing ahead with precautions that seem wholly inadequate to those stakes. Increasingly, it feels like the AI race is steering off a cliff.

Given all that, I don’t think it’s ethical to work at a frontier AI lab unless you have given very careful thought to the risks that your work will bring closer to fruition, and you have a specific, defensible reason why your contributions will make the situation better, not worse. Or, you have an ironclad case that humanity doesn’t need to worry about AI at all, in which case, please publish it so the rest of us can check your work!

When vast sums of money are at stake, it’s easy to self-deceive. But I wouldn’t go so far as to claim that literally everyone working in frontier AI is engaged in self-deception. Some of the work documenting what AI systems are capable of and probing how they “think” is immensely valuable. The safety and alignment teams at DeepMind, OpenAI, and Anthropic have done and are doing good work.

But anyone pushing for a plane to take off while convinced it has a 20 percent chance of crashing would be wildly irresponsible, and I see little difference between that and racing to build superintelligence as fast as possible.

A hundred million dollars, after all, isn’t worth hastening the death of your loved ones or the end of human freedom. In the end, it’s only worth it if you can not just get rich off AI, but also help make it go well.

It might be hard to imagine anyone who’d turn down mind-boggling riches just because it’s the right thing to do in the face of theoretical future risks, but I know quite a few people who’ve done exactly that. I expect there will be more of them in the coming years, as more absurdities like Grok’s recent MechaHitler debacle go from sci-fi to reality.

And ultimately, whether or not the future turns out well for humanity may depend on whether we can persuade some of the richest people in history to notice something their paychecks depend on their not noticing: that their jobs might be really, really bad for the world.


Tags: Artificial Intelligence, Building, Future Perfect, Innovation, Rich, Technology, unfathomably
Copyright © 2024 Smart Again.
Smart Again is not responsible for the content of external sites.
