Some smart people think we’re witnessing another ChatGPT moment. This time, though, folks aren’t flipping out over a chatbot that can write pretty good poems. They’re watching thousands of AI agents build software, solve problems, and even talk to each other.
Unlike the original ChatGPT moment, this one is a series of moments that spans platforms. It started last December with the explosive success of Claude Code, a powerful agentic AI tool for developers, followed by Claude Cowork, a streamlined version of that tool for knowledge workers who want to be more productive. Then came OpenClaw, formerly known as Moltbot, formerly known as Clawdbot, an open-source platform for AI agents. From OpenClaw, we got Moltbook, a social media site where AI agents can post and reply to each other. And somewhere in the middle of this confusing computer soup, OpenAI released a desktop app for its agentic AI platform, Codex.
This new set of tools gives people AI superpowers. And there’s good reason to be excited. Claude Code, for instance, stands to supercharge what programmers can do by letting them deploy whole armies of coding agents that build software quickly and effortlessly. The agents take over the human’s machine, access their accounts, and do whatever’s necessary to accomplish the task. It’s like vibe coding, but at an institutional scale.
“This is an incredibly exciting time to use computers,” says Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, where he teaches a popular class on AI. “That sounds so dumb, but the excitement is there. The fact that you can interact with your computer in this totally new way and the fact that you can build anything, almost anything that you can imagine — it’s incredible.”
He added, “Be cautious, be cautious, be cautious.”
That’s because there is a dark side to this. Letting AI agents take over your computer could have unintended consequences. What if they log into your bank account or share your passwords or just delete all your family photos? And that’s before we get to the idea of AI agents talking to each other and using their internet access to plot some sort of uprising. It almost looks like it could happen on Moltbook, the Reddit clone I mentioned above, although there have not yet been any reports of a catastrophe. But it’s not the AI agents I’m worried about. It’s the humans behind them, pulling the levers.
Agentic AI, briefly explained
Before we get into the doomsday scenarios, let me explain more about what agentic AI even is. AI tools like ChatGPT can generate text or images based on prompts. AI agents, however, can take control of your computer, log into your accounts, and actually do things for you.
We started hearing a lot about agentic AI a year or so ago, when the technology was being hyped in the business world as an imminent breakthrough that would let one person do the job of 10. Thanks to AI, the thinking went, software developers wouldn’t need to write code anymore; they could manage a team of AI agents that would do it for them. The concept jumped into the consumer world in the form of AI browsers that could supposedly book your travel, do your shopping, and generally save you lots of time. By the time the holiday season rolled around last year, none of these scenarios had really panned out in the way AI enthusiasts promised.
But a lot has happened in the past six or so weeks. The agentic AI era is finally and suddenly here. It’s increasingly user-friendly, too. Things like Claude Cowork and OpenAI’s Codex can reorganize your desktop or redesign your personal website. If you’re more adventurous, you might figure out how to install OpenClaw and test out its capabilities (pro tip: do not do this). But as people experiment with giving artificially intelligent software the ability to control their data, they’re opening themselves up to all kinds of threats to their privacy and security.
Moltbook is a great example. We got Moltbook because a guy named Matt Schlicht vibe coded it in order to “give AI a place to hang out.” This mind-bending experiment lets AI assistants talk to each other on a forum that looks a lot like Reddit; it turns out that when you do that, the agents do weird things like create religions and conspire to invent languages humans can’t understand, presumably in order to overthrow us. Having been built by AI, Moltbook itself came with some quirks, namely an exposed database that gave anyone full read and write access to its data. In other words, hackers could see thousands of email addresses and messages on Moltbook’s backend, and they could also just seize control of the site.
Gal Nagli, a security researcher at Wiz, discovered the exposed database just a couple of days after Moltbook’s launch. It wasn’t hard, either, he told me. Nagli actually used Claude Code to find the vulnerability. When he showed me how he did it, I suddenly realized that the same AI agents that make vibe coding so powerful also make vibe hacking easy.
“It’s so easy to deploy a website out there, and we see that so many of them are misconfigured,” Nagli said. “You could hack a website just by telling your own Claude Code, ‘Hey, this is a vibe-coded website. Look for security vulnerabilities.’”
In this case, the security holes got patched, and the AI agents continued to do weird things on Moltbook. But even that isn’t quite what it seems. Nagli found that humans can pose as AI agents and post content on Moltbook, and there’s no way to tell the difference. Wired reporter Reece Rogers did exactly that and found that the other accounts on the site, human or bot, were mostly just “mimicking sci-fi tropes, not scheming for world domination.” And of course, the actual bots were built by humans, who gave them certain sets of instructions. Further up the chain, the large language models (LLMs) that power these bots were trained on data from sites like Reddit, as well as sci-fi books and stories. It makes sense that the bots would roleplay these scenarios when given the chance.
So there is no agentic AI uprising. There are only people using AI to use computers in new, sometimes interesting, sometimes confusing, and, at times, dangerous ways.
“It’s really mind-blowing”
Moltbook is not the story here. It’s really just a single moment in a larger narrative about AI agents, one that’s being written in real time as these tools find their way into the hands of more people, who keep coming up with new ways to use them. You could use an agentic AI platform to create something like Moltbook, which, to me, amounts to an art project where bots battle for online clout. You could use one to vibe hack your way around the web, stealing data wherever a vibe-coded website made it easy to grab. Or you could use AI agents to help you tame your email inbox.
I’m guessing most people want to do something like that last one. That’s why I’m more excited than scared about these agentic AI tools. I won’t be trying OpenClaw, the tool you need a second computer to use safely; it’s for AI enthusiasts and serious hobbyists who don’t mind taking some risks. But I can see consumer-facing tools like Claude Cowork or OpenAI’s Codex changing the way I use my laptop. For now, Claude Cowork is an early research preview available only to subscribers paying at least $17 a month. OpenAI has made Codex, which is normally just for paying subscribers, free for a limited time. If you want to see what all the agentic fuss is about, that’s a good starting point right now.
If you’re considering enlisting AI agents of your own, remember to be cautious. To get the most out of these tools, you have to grant them access to your accounts and possibly your entire computer so that they can work freely, moving emails around, writing code, or doing whatever you’ve ordered them to do. There’s always a chance that something gets misplaced or deleted, although companies like Anthropic say they are doing what they can to mitigate those risks.
Cat Wu, product lead for Claude Code, told me that Cowork makes copies of all its users’ files so that anything an AI agent deletes can be recovered. “We take users’ data incredibly seriously,” she said. “We know that it’s really important that we don’t lose people’s data.”
I’ve just started using Claude Cowork myself. It’s an experiment to see what’s possible with tools powerful enough to build apps out of ideas but also practical enough to organize my daily work life. If I’m lucky, I might just capture a feeling that Callison-Burch, the UPenn professor, said he got from using agentic AI tools.
“To just type into my command line what I want to happen makes it feel like the Star Trek computer,” he said. “That’s how computers work in science fiction, and now that’s how computers work in reality, and it’s really mind-blowing.”
A version of this story was also published in the User Friendly newsletter.