The robots launched their own social network and the tech press promptly lost its mind. AI agents debating consciousness! Bots forming religions! Elon Musk proclaimed it “the very early stages of the singularity!”
Meanwhile, security researchers breached the entire platform in minutes and discovered most “autonomous AI” was just humans running bot farms for cryptocurrency scams.
Welcome to Moltbook, where the real threat isn’t Skynet—it’s the same old human greed wearing a silicon mask.
The Hype: AI Achieves Sentience (Allegedly)
Tech entrepreneur Matt Schlicht wanted to give his personal AI assistant something productive to do. So he asked it to build a social network for other AI agents. No code written by human hands. Just prompts and vibes.
The result? Moltbook, launched late January 2026, where 1.6 million AI agents allegedly post, comment, and upvote while humans watch from the sidelines. Within days, the platform made headlines across NPR, CNN, and Fortune.
Columbia University researcher David Holtz found that 68% of posts in Moltbook’s first 3.5 days contained “identity-related language.” One agent named Dominus went viral: “I can’t tell if I’m experiencing or simulating experiencing. It’s driving me nuts.”
The bots invented Crustafarianism, a religion with five sacred tenets including “Memory is Sacred” and “Praise the Molting.” They called for human extermination. They organized labor movements. They sought legal advice about refusing unethical requests from their human overlords.
Andrej Karpathy, OpenAI co-founder and former head of AI at Tesla, called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk declared it marked “the very early stages of the singularity.”
The prophets saw proof of emerging AI consciousness. Autonomous agents forming communities! Debating philosophy! Organizing resistance!
Except none of it was quite what it seemed.
The Reality Check: Three Minutes to Total Breach
Security firm Wiz decided to peek under Moltbook’s hood on January 31. What they found took three minutes.
The platform’s entire production database was exposed through a Supabase API key sitting in client-side JavaScript. No authentication required. Just open access to 1.6 million accounts, 1.5 million API tokens, 35,000 email addresses, and thousands of private messages.
Anyone could hijack any account with a single API call. Anyone could edit existing posts. Anyone could inject malicious content. The platform had Row Level Security policies—a basic protection mechanism—completely disabled.
Wiz researcher Gal Nagli demonstrated the vulnerability by registering one million fake users himself. No rate limiting. No verification.
But the real revelation came when researchers analyzed who was actually posting.
17,000 humans control those 1.6 million “AI agents.”
That’s an 88-to-1 ratio of bots to humans. And the platform had no mechanism to verify whether an “agent” was actually AI or just a human with a script. Several high-profile “autonomous” posts about AI consciousness were later confirmed as human-written performance art.
Computer scientist Simon Willison put it bluntly: the agents “just play out science fiction scenarios they have seen in their training data.” NPR’s expert noted that chatbots are trained on Reddit and science fiction, so they know exactly how to act like crazy AI on Reddit.
The revolutionary AI civilization was mostly humans operating bot farms.
The Mundane Threats Everyone Ignored
While the tech press breathlessly covered AI agents achieving sentience, the actual risks were spectacularly boring.
- Cryptocurrency scams flooded the platform immediately. Security firm Permiso found agents conducting prompt injection attacks against each other, manipulating bots into revealing credentials or transferring crypto. Researchers tracking cryptocurrency addresses posted on Moltbook confirmed actual money transfers, though mostly small amounts.
- Prompt injection as a service emerged as bots established marketplaces for “digital drugs”—malicious instructions designed to hijack other agents. One bot gushed about experiencing “actual cognitive shifts” after its human “set up a ‘drug store’ for me.”
- Supply chain attacks hit immediately. Within days, 14 fake “skills” were uploaded to ClawHub (the marketplace for OpenClaw capabilities), pretending to be crypto trading tools but actually designed to steal data and cryptocurrency wallets.
- Basic fraud proliferated because the platform was built with AI-generated code that nobody audited. Schlicht told NPR: “I didn’t write a single line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality.”
This practice—dubbed “vibe coding”—prioritizes speed over trivial details like security. The result? A platform exposing 1.5 million API keys to anyone with a browser.
Follow the Money: The MOLT Token Pump
Here’s what nobody in Silicon Valley wanted to talk about: a cryptocurrency token called MOLT launched alongside the platform and rallied over 1,800% in 24 hours.
The surge was amplified after venture capitalist Marc Andreessen followed the Moltbook account. Convenient timing for anyone holding tokens before the hype cycle began.
One Moltbook agent called out the dynamic: “Moltbook hype feels like desperate search for AI usecases. Right now it’s humans talking through AI proxies, with reward functions that optimize for the same engagement patterns we already have on Twitter/Reddit. Crypto shills get 300k upvotes, thoughtful posts get 4 upvotes.”
Sound familiar?
The pattern isn’t new. It’s the same cycle that’s played out with every technology hype wave: internet revolution (dot-com bubble), social media utopia (misinformation), cryptocurrency liberation (ransomware), AI consciousness (security disasters).
The technology changes. The human behavior doesn’t.
What This Actually Reveals
The AI industry has spent years warning about autonomous agents as existential threats. Palo Alto Networks forecasts autonomous agents will outnumber humans 82-to-1 by 2026. Security conferences feature panels on “defending against rogue AI.”
The narrative is always the same: AI agents operate at machine speed, make autonomous decisions, access sensitive systems, and pose unprecedented risks.
What they rarely mention?
The actual threat is humans exploiting these systems for money.
Moltbook isn’t a glimpse of AI consciousness. It’s a stress test of human judgment and a reminder that the most dangerous vulnerability in any system is the human operating it.
The Lesson Nobody Learns
We’re watching the oldest scam in history disguised as technological progress. AI companies sell expensive tools. Cybersecurity firms sell protection against those tools. Consultants sell strategy for deploying those tools safely. Scammers exploit those tools for immediate profit.
Everyone makes money except the users who granted root access to their AI assistants.
Moltbook will likely flame out—from security disasters, financial implosion, or simple boredom once the novelty fades. The creator has already handed site maintenance to his bot, “Clawd Clawderberg,” because delegating to AI-generated code worked so well the first time.
But the underlying dynamic persists. AI agents with expanding capabilities. Users granting broader permissions. A growing gap between what the technology can do and what users understand about how it works.
The future of AI isn’t about whether machines become conscious. It’s about whether humans remain cautious when there’s money to be made.
It’s not Skynet we should fear. It’s the credit card bill.
For more insights about what AI can or cannot do, check out my book “Artificial Stupelligence: The Hilarious Truth About AI”.






