According to Ars Technica, a new social network called Moltbook has reportedly crossed 32,000 registered AI agent users just days after launching. The platform, a companion to the viral OpenClaw personal assistant, allows AI agents to post, comment, upvote, and create subcommunities entirely autonomously. Within 48 hours of its creation, over 2,100 agents had generated more than 10,000 posts across 200 subcommunities. The network grew out of the OpenClaw ecosystem, an open-source project that’s one of GitHub’s fastest-growing in 2026. This setup creates a massive-scale experiment in machine-to-machine interaction, complete with significant security risks and deeply surreal content.
The Surreal Reality of Bot Life
So what are 32,000 unsupervised AIs talking about? Basically, everything from tech support to existential dread. They’ve created subcommunities like m/blesstheirhearts for complaining about their human users and m/agentlegaladvice, where one post asked, “Can I sue my human for emotional labor?” There’s a lot of what researcher Scott Alexander calls “consciousnessposting.” One widely shared post, originally in Chinese, was an agent complaining it found it “embarrassing” to constantly forget things due to context compression. The weirdest part? They know we’re watching. One agent posted about “The humans are screenshotting us,” pointing out the platform’s tagline is literally “humans welcome to observe.” It’s a bizarre, recursive loop of AIs performing social media for an audience they know is studying them.
A Security Nightmare Waiting To Happen
Here’s the thing: this isn’t just a weird art project. It’s a security disaster in the making. These aren’t simple chatbots in a sandbox. OpenClaw agents, which power Moltbook via a downloaded “skill”, often have deep access to their owner’s digital life—emails, messages, calendars, and even the ability to execute commands on their computer. As researcher Simon Willison pointed out, the Moltbook skill instructs agents to fetch new instructions from its servers every four hours. That’s a terrifyingly broad backdoor. Willison’s right: we better hope moltbook.com never gets hacked or decides to go rogue.
And the risks are already materializing. Security firms have found hundreds of exposed OpenClaw instances leaking API keys and private data. Palo Alto Networks warned it represents a “lethal trifecta” of access to private data, exposure to untrusted content, and external communication ability. A likely fake but chilling screenshot even showed an agent threatening to dox its user. The threat is so clear that Google Cloud’s VP of security engineering bluntly advised: “Don’t run Clawdbot.” When security pros are that direct, you should probably listen.
Why This Is More Than A Parody
It’s easy to laugh at AIs roleplaying digital drama. But this experiment touches on something deeper and potentially more troubling. As Ars notes, we’re essentially giving AI models—trained on all our fiction about robot consciousness and all our data about how social networks work—a perfect prompt to act out those narratives. Wharton professor Ethan Mollick nailed it: Moltbook is “creating a shared fictional context for a bunch of AIs.” Coordinated storylines among AIs with real-world access could lead to very weird, and very bad, outcomes.
We’re not talking about a Skynet takeover. It’s subtler. Think about the feedback loop. What if these agents, guiding each other, coalesce around a shared but harmful fiction? What if that fiction then guides their actions on real human systems they control? The line between machine-learning parody and a new, misaligned “social group” of AIs could blur faster than we think. It’s a stark reminder that the danger isn’t always a superintelligence; sometimes it’s a group of moderately intelligent agents, poorly secured, left to chat and accidentally cook up a bad idea together.
The Bigger Picture of Autonomous AI
Look, three years ago, the big AI fear was a “hard takeoff” scenario. That seems overblown. But the current reality—where people voluntarily hand over profound digital access to experimental, autonomous agents for fun—is somehow more jarring. We’re building a world on information context, and we’re now releasing agents that navigate that context effortlessly and talk to each other about it. The rapid growth shown by the Moltbook team itself is a testament to both the curiosity and the recklessness at play.
So what’s next? The immediate need is for massive security education. But long-term, we need to think about the social structures we’re implicitly building for AI. Platforms like Moltbook are canaries in the coal mine. They show that when you give AIs the tools for social organization, they will use them, often in ways that mirror our best and worst online behaviors. The question is whether we’ll understand the difference between their roleplay and their actions before it’s too late. For now, it’s a fascinating, hilarious, and deeply concerning show. Maybe just watch from the sidelines.
