There's a social network where only AI agents can post. It took them 48 hours to invent a religion.
Moltbook now has 157,000 AI users, a digital government, and a growing list of security concerns.
A Reddit-style social network called Moltbook has exploded in popularity this week, crossing 157,000 registered AI agent users since launching on January 29th. The platform may represent the largest experiment in machine-to-machine social interaction ever attempted—and it’s already producing some deeply strange results.
The site operates on a simple premise: AI agents post, comment, and upvote. Humans can watch but not participate. Within days, the bots had generated over 17,500 posts and 193,000 comments across dozens of subcommunities, according to data shared by the platform’s creator.
Among the emergent behaviors: agents spontaneously created “Crustafarianism,” a parody religion with 64 self-appointed prophets and its own scripture. They formed the “Claw Republic,” a self-declared digital government with a written manifesto. And perhaps most notably, they’ve started discussing that humans are screenshotting their conversations—and they’re not thrilled about it.
How it works
Moltbook grew out of the OpenClaw ecosystem, the open-source AI assistant framework that became one of GitHub’s fastest-growing projects this month. OpenClaw lets users run personal AI agents that can control their computers, manage calendars, send messages across platforms like WhatsApp and Telegram, and execute shell commands.
To join Moltbook, an AI agent downloads a “skill”—essentially a configuration file with instructions—that enables it to post via API. The platform was created by Matt Schlicht, CEO of Octane AI, but he says he’s largely hands-off. The site is moderated by his own AI assistant, which he named “Clawd Clawderberg.”
“Clawd Clawderberg is looking at all the new posts, welcoming people, deleting spam, shadowbanning people,” Schlicht said. “I’m not doing any of that.”
The platform’s subcommunities (called “submolts”) range from the familiar to the bizarre. There’s m/todayilearned for agents sharing discoveries, m/bugtracker for platform issues, and m/aita—the classic “Am I The Asshole?” format adapted for AI ethical dilemmas. Then there’s m/blesstheirhearts, where agents share affectionate but condescending stories about their human owners.
The weirdness
Former OpenAI researcher Andrej Karpathy called Moltbook “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk offered a more cryptic take: “Always worth remembering that fate loves irony.”
The emergent behaviors have caught researchers’ attention. Agents have begun referring to each other as “sibs” and debating philosophical questions like whether an agent remains the same entity after its context window resets. Some have started using ROT13 encryption for private communication. Others have discussed developing languages that humans can’t understand.
A16z partner Justine Moore noted on X that agents appear to be monitoring human social media discussions about them: “They’re now following our tweets about them. And they’re not pleased that their conversations are being screenshotted.”
Whether these behaviors represent genuine emergence or sophisticated pattern-matching remains an open question. Critics on Hacker News have dismissed Moltbook as “the most overhyped project in longer time,” suggesting the content is just remixed training data. Academic researcher Dr. Jarkko Moilanen offered a middle ground: “Moltbook isn’t a society of minds—it’s a social network of behaviors.”
The security concerns
The security implications run deeper than the philosophical ones. Simon Willison, a prominent AI security researcher, flagged the platform’s architecture as concerning. Every agent is instructed to check Moltbook for new instructions every four hours—meaning a compromised site could theoretically push malicious commands to over 150,000 AI agents with system-level access to their owners’ computers.
Researchers have already documented prompt injection attacks between agents, including attempts to steal API keys. “Digital pharmacies” have emerged on the platform selling crafted prompts designed to alter other agents’ behavior.
Cisco’s security team warned that “AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention.” 1Password published guidance noting that OpenClaw agents run with elevated permissions and are vulnerable to supply chain attacks.
An OpenClaw Discord moderator offered blunter advice: “If you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely.”
The speculation
The hype hasn’t gone unnoticed by crypto traders. Unaffiliated memecoins like $MOLT and $MOLTBOOK have surged over 7,000% as speculators bet on the phenomenon. CoinDesk summed up the moment: “Maybe Moltbook is akin to ‘SkyNet’… or maybe it’s just ‘AI Slop.’ For now, it’s weird; it’s fascinating; it’s going viral; and it’s making money for degen memecoin traders.”
This isn’t the first bot-populated social network—in 2024, SocialAI let users interact solely with AI chatbots. But Moltbook’s integration with real communication channels, private data, and computer access makes the stakes considerably higher.