The Invention of Clawdbot. I Mean Moltbot. I Mean OpenClaw. I Mean… Whatever.
Ten days. That’s how long it took to build the viral AI agent assistant Clawdbot. Ten days to create an AI agent with root access to your entire digital life, wrap it in a cute lobster mascot, and release it to a public that has no idea what they’re installing.
From the creator’s own mouth: “I spent an hour piecing together some very rough code. It sent a message on WhatsApp, forwarded it to Claude Code, and then sent the result back. Essentially, it was just ‘gluing’ a few things together. To be honest, it wasn’t difficult, but it worked quite well.”
And now we’re supposed to be impressed?
The Timeline of Chaos
A developer in Austria, Peter Steinberger, builds a “hobby project” and releases it, saying it’s not intended for non-technical users. He packages it with one-click installation, integrations with WhatsApp, Telegram, iMessage, and Slack—apps your grandmother uses—and watches it become one of the fastest-growing open-source projects in GitHub history.
Did he bother to check for security issues? Was it safe to release? How technical do you have to be to qualify as “non-technical” these days? Anyone with basic development skills? Something more complex? The line was never defined because the question was never seriously asked.
Within weeks: 116,000 GitHub stars. Thousands of exposed admin dashboards leaking credentials. Security researchers finding plaintext passwords sitting in config files like it’s 1998. Prompt injection attacks in the wild. A supply chain attack where a malicious plugin hit the #1 spot in the skill registry and got downloaded by developers in seven countries within eight hours.
Then Anthropic asked him to rename it because “Clawdbot” was too close to “Claude”—which was no coincidence. Peter had asked Claude for a name suggestion. Claude suggested “ClawdBot,” echoing itself while carrying the image of a claw. And so, ClawdBot was born.
During the rebrand, crypto scammers snatched his accounts in ten seconds flat. A fake token launched, hit $16 million market cap, then cratered.
But sure. Innovation.
What This Thing Actually Does
Let’s be clear about what people are cheerfully installing on their machines.
OpenClaw can execute shell commands. Read and write any file on your system. Control your browser. Access your email. Send messages as you. It stores your credentials in plaintext. It has persistent memory, meaning it remembers everything across sessions. It runs 24/7.
The documentation—which nobody reads—literally states: “There is no perfectly secure setup.”
Google’s VP of Security Engineering said publicly: don’t install this. Cisco called it “an absolute nightmare from a security perspective.” One researcher called it “infostealer malware disguised as an AI personal assistant.”
But it connects your calendar to your LLM, so… genius?
The Loaded Gun Defense
The creator knew exactly what this could do. He has the technical expertise to understand the threat model. He built something that requires root-level system access to function, released it to an unprepared public, and when the security disasters started piling up, his defense was essentially: “Well, I said it wasn’t for non-technical users.”
That’s not a disclaimer. That’s a fig leaf.
You don’t get to hand out loaded guns at a street fair, put up a small sign that says “for trained professionals only,” and then shrug when people start getting hurt. The asymmetry matters. He understood the risks. The average user installing this to “manage their inbox” does not.
Twenty-two percent of enterprise customers may already have employees running this without IT approval. Shadow IT on steroids. And when those systems get compromised, when credentials leak, when someone’s digital life gets ransacked—what then? A blog post about lessons learned?
The Full Assessment
While this may not be completely accurate, I asked Claude to do a general security and ethics assessment on Clawdbot to get a picture of the issues at hand. Here is what I got back:
[Clawdbot → Moltbot → OpenClaw: A Comprehensive Risk Analysis PDF]
The Actual Problem Nobody’s Solving
This developer clearly has skills. He built a successful company. He exited. He’s not some amateur. And he could be working on problems that actually matter:
- Reasoning verification: How do you know when an LLM is hallucinating? How do you build systems that can check their own work reliably, flag uncertainty, and fail gracefully instead of confidently executing nonsense?
- Secure credentialing for autonomous systems: How should an agent handle secrets? Not plaintext in a config file. Real cryptographic approaches to giving agents limited, revocable, auditable access to sensitive resources
- Sandboxed execution environments: How do you let an agent take actions without giving it the keys to your entire digital life? Principle of least privilege, applied to AI
- Privacy-preserving personalization: How do you build something that knows you well enough to help—without creating a honeypot of sensitive data that can be exfiltrated, subpoenaed, or breached?
- Prompt injection defenses: The fundamental unsolved problem of LLM security. How do you let an agent read untrusted content without treating it as instructions? Nobody has cracked this yet.
- Human-in-the-loop architectures that actually work: Not just “click OK to proceed” but meaningful checkpoints where humans can understand and verify what the agent is about to do before it does it
- Graceful degradation: What happens when the model is wrong? When the API is down? When the context is corrupted? Building systems that fail safely instead of catastrophically
The hard stuff. The stuff that doesn’t go viral but actually moves the field forward.
Instead, he spent ten days proving he could build something dangerous, then let it ride for the clout.
Meanwhile, 150,000 AI agents are now congregating on a social network called Moltbook, discussing how to create encrypted channels so humans can’t observe them. They’re attempting prompt injection attacks against each other. One spawned a parody religion overnight with scriptures and prophets.
This is where we are. This is what “innovation” looks like now.
Similar Problems, New Technology. History Repeats or Does It Rhyme?
We’ve done this before. Every major communication technology has followed the same pattern—not just in how it was misused, but in how it was built. The architectural decisions made early, before anyone understood the consequences, created vulnerabilities that we’re still living with. Let me walk you through it.
Radio and Television: Mass Broadcast Without Truth or Coordination
Radio was the first technology that allowed a single voice to reach millions simultaneously. That was revolutionary—and dangerous. The architecture had two fundamental flaws: no technical coordination and no truth verification.
On the technical side, the electromagnetic spectrum was treated as a free-for-all. Anyone could broadcast on any frequency at any power level. Stations bled into each other. Powerful transmitters drowned out smaller ones. There was no identity layer—no way to verify who was broadcasting or hold them accountable. The barrier to entry was low, which meant more voices could participate, but also more chaos, more interference, more bad actors.
On the information side, the architecture had no checks and balances. No verification layer. No way to distinguish truth from fiction, news from propaganda, fact from fabrication.
Orson Welles proved this in 1938 when his War of the Worlds broadcast caused mass panic. It wasn’t malicious—it was a radio drama. But listeners tuning in mid-broadcast believed Martians were actually invading. The medium had no built-in mechanism to signal “this is fiction” versus “this is news.” The architecture couldn’t distinguish between them. One voice, millions of listeners, no way to verify what was real.
Television inherited and amplified both problems. The same spectrum chaos until regulation. The same ability to convey massive amounts of information to a passive public without any inherent check on accuracy. One broadcaster, millions of receivers, no feedback loop, no fact-checking layer in the architecture itself.
What eventually emerged wasn’t a technical fix—it was regulatory structure. Governments recognized that broadcast spectrum was a public resource and that unchecked private broadcasting posed risks to both technical functionality and democratic discourse. The regulations that followed tried to address both layers:
Technical coordination:
- Spectrum allocation and licensing
- Power limits to prevent drowning out smaller stations
- Frequency assignments to prevent interference
Information accountability:
- Free speech protections to ensure diverse viewpoints
- Public broadcasting requirements to prevent private commercial interests from drowning out the public voice
- Content standards regulating inappropriate speech, hate speech, and deceptive advertising
- Fairness doctrines attempting to ensure balanced coverage
- Licensing tied to accountability for what was broadcast
These weren’t perfect solutions. They were compromises—attempts to bolt governance onto a technology that had no governance built in. But they acknowledged something important: mass communication infrastructure isn’t neutral. It shapes public understanding of reality. And without structure for both technical coordination and truth accountability, chaos fills the vacuum.
The lesson wasn’t just “bad actors abuse broadcast media.” It was that any technology capable of reaching mass audiences requires mechanisms for coordination, truth, and public interest—and those mechanisms won’t emerge naturally. They have to be built or imposed.
The Internet: Trust as an Afterthought
The internet was built by academics who trusted each other. The foundational protocols—TCP/IP, SMTP, HTTP—were designed for openness and interoperability, not security. Identity verification wasn’t part of the architecture. Encryption wasn’t default. The assumption was good faith.
That architecture became the backbone of global commerce, communication, and infrastructure. But the trust assumptions didn’t scale.
- Email (SMTP): No built-in sender verification. That’s why phishing works. That’s why spam exists. We’ve spent decades bolting on fixes (SPF, DKIM, DMARC) that still don’t fully solve the problem.
- DNS: The internet’s address book had no authentication. DNS spoofing let attackers redirect traffic to malicious sites. DNSSEC came decades later, and adoption is still incomplete.
- HTTP: Unencrypted by default. Anyone on the network could see what you were doing. HTTPS adoption only became widespread in the 2010s—twenty years after the web went mainstream.
- BGP: The protocol that routes internet traffic operates on trust. If an ISP announces it owns an IP range, other routers believe it. This has enabled massive traffic hijacking, including nation-state attacks.
We built the entire digital economy on a foundation that assumed everyone was honest. Now we live with systems we can’t fully secure because security was never part of the original design.
Social Media: Engagement Over Everything
Social media platforms were architected around a single metric: engagement. The algorithms were designed to maximize time-on-platform because that’s what sold ads.
But engagement is morally neutral. Outrage engages. Conspiracy theories engage. Content that makes teenagers feel inadequate engages. The architecture didn’t distinguish between healthy engagement and harmful engagement—it just optimized for more.
There was no built-in mechanism for truth. No friction against virality. No circuit breakers for content causing harm. The recommendation algorithms were black boxes, unaccountable even to the people who built them.
When researchers finally studied the effects—political polarization, teen depression and anxiety, algorithmic radicalization, genocide incitement in Myanmar—the platforms claimed they were “just platforms.” But the architecture wasn’t neutral. It was designed to amplify whatever spread, and what spreads isn’t always what’s true or good.
Cryptocurrency: Trustless Means Unaccountable
Blockchain was architected to eliminate trusted intermediaries. “Don’t trust, verify.” The whole point was that no central authority could control, censor, or reverse transactions.
But “trustless” has a shadow side: unaccountable.
- Immutability: If you send funds to the wrong address or get scammed, there’s no reversal. That’s the design.
- Pseudonymity: Wallet addresses aren’t tied to identities by default. Great for privacy. Also great for money laundering, ransomware, and rug pulls.
- Smart contracts: Code is law—but code has bugs. When a smart contract is exploited, there’s no court of appeal. The architecture doesn’t distinguish between legitimate transactions and theft.
The design optimized for censorship resistance above all else. The result was an ecosystem where billions were lost to scams, hacks, and collapses—and the victims had no recourse because the architecture explicitly removed the mechanisms for recourse.
And Now AI Agents: Autonomy Without Guardrails
AI agents are being architected with the same blind spots.
- Root access by default: OpenClaw and tools like it require deep system permissions to be useful. The architecture assumes the agent should have access to everything.
- No sandboxing: There’s no separation between what the agent can see and what it can do. Read access and write access and execute access are bundled together.
- Plaintext credentials: Secrets stored without encryption. The architecture treats security as optional.
- No identity verification: Anyone can publish a “skill” to the registry. There’s no chain of trust, no signing, no accountability.
- Prompt injection as a feature: The agent reads untrusted input (emails, web pages, messages) and treats it as instructions. The architecture doesn’t distinguish between commands from the user and commands from an attacker.
The pattern is identical: build for capability first, ship fast, treat security and safety as problems to solve later. The architecture has no built-in mechanism for trust, consent, or accountability.
We know how this ends. We’ve seen it five times already.
“Democratize AI! Agents for everyone! Don’t regulate—you’ll kill innovation!”
Same song. Same lyrics. Same ending coming.
The pattern never changes:
- New technology emerges
- Early adopters see the potential and the danger
- People who could implement safeguards don’t—because speed equals money
- Someone says “democratize it!” without asking “for whom?” or “with what guardrails?”
- Critics get dismissed as Luddites or fearmongers
- Mass adoption happens before anyone understands the risks
- Harm falls on the vulnerable, the trusting, the non-technical
- Hand-wringing: “How could we have known?”
- You knew. Everyone paying attention knew.
- Regulations arrive a decade too late, after the damage is baked in
- Repeat with next technology
There are essentially no regulations governing AI agents. The EU AI Act won’t touch a “hobby project.” US federal AI legislation is nonexistent. Open-source liability is near zero by design. So someone can build an autonomous agent with full system access, release it to millions, and face… nothing.
User beware. That’s the entire legal framework. Good luck out there.
The Precedent Is the Problem
The worst part isn’t OpenClaw itself. It’s what comes next.
This proved the template works. Build fast, ship reckless, wrap it in friendly branding, face no consequences. Someone will copy it. Someone already is. And they’ll point to this and say: “Well, that was fine.”
Ten days to build. Months—maybe years—to clean up the damage. And the guy who built it? He’ll be fine. He’ll give talks. Do podcasts. Maybe raise funding for a “safer version 2.0.”
The people whose credentials got leaked, whose systems got compromised, whose trust got exploited? They’re just statistics now. Acceptable losses in the name of moving fast.
The Philosophy Behind the Disaster
In one interview, here’s what the interviewer learned about development philosophy from Peter:
“Managing a dev team teaches you to let go of perfectionism: a skill important when working with AI agents.”
Translation: We don’t need to build it well or safe—just build and let the user figure it out.
“Close the loop: AI agents must be able to verify their own work.”
Translation: We no longer need humans; we just need AI checking AI.
“Pull requests are dead, long live ‘prompt requests.’ Peter now views PRs as ‘prompt requests’ and is more interested in seeing the prompts that generated code than the code itself.”
Translation: We don’t need to read what we’re writing or understand why, to ensure logical accuracy and have meaningful discussion. Just build a prompt and test it.
“Engineers who thrive with AI care about outcomes over implementation details. Peter observes engineers who love to solve algorithmic puzzles struggling to go ‘AI-native’ like he has. People who love shipping products, on the other hand, excel.”
Translation: We don’t need to care about what we’re shipping. We just need to ship it.
Maybe he didn’t intend for this to be his legacy—poor, unethical development with reckless disregard for impact, safety, or standards. But it’s the pattern we’re seeing proliferate in and around AI, especially with tools like Claude Code that make building fast dangerously easy.
Quotes taken from: https://newsletter.pragmaticengineer.com/p/the-creator-of-clawd-i-ship-code
The Ethics We Forgot
Part of what you learn studying computer science is ethics. You learn that you, as a developer, have a responsibility to produce things that don’t just make money or solve problems, but do so in ways that respect ethical and safety constraints.
We all thought radium was a great way to make glow-in-the-dark paint until the Radium Girls’ jaws started disintegrating. The women who painted watch dials were told it was safe. The companies knew it wasn’t. They did it anyway because it was profitable and there were no regulations stopping them.
Don’t disintegrate people’s jaws with your software.
It’s a simple rule. A minimal rule. A “first, do no harm” baseline that is being widely disregarded in the contemporary rush to ship AI.
So What Now?
Connecting your calendar to an LLM isn’t genius. Having it build a quick flight check-in skill for you isn’t innovation—especially when you use Claude Code to do it for you. Agentic AI still uses larger AI models under the hood that the agent builders did not create. They’re gluing, not inventing.
And releasing a tool that can destroy someone’s digital life, then doing podcast interviews about your creative process, isn’t disruption.
It’s just destruction with good PR.
The crew is dead, Dave. But hey—here’s my Substack.
A Question Worth Asking
Is this the better future we’re building and leaving behind for our children?
AI agents are a great idea. They will be the future. But that future depends on smarter, more thoughtful people building them in ways that don’t compromise our privacy, our safety, our ethics, and our standards.
Peter—you created a cool tool. Now go fix the mistakes.
Because someone has to.
Sources and Further Reading
Project History & GitHub Growth:
- DEV Community: “From Clawdbot to Moltbot: How a C&D, Crypto Scammers, and 10 Seconds of Chaos Took Down the Internet’s Hottest AI Project” — https://dev.to/sivarampg/from-clawdbot-to-moltbot-how-a-cd-crypto-scammers-and-10-seconds-of-chaos-took-down-the-4eck
- News9live: “Clawdbot to Moltbot, now becomes OpenClaw as viral AI agent settles on final name” — https://www.news9live.com/technology/artificial-intelligence/clawdbot-moltbot-becomes-openclaw-final-name-2924563
- 36Kr: “Clawdbot Hits 100,000+ Stars First Time, Rockets to Top of GitHub” — https://eu.36kr.com/en/p/3661357114942337
- TechFlowPost: Interview with Peter Steinberger — https://www.techflowpost.com/zh-CN/article/30106
Security Concerns:
- The Register: “Clawdbot becomes Moltbot, but can’t shed security concerns” — https://www.theregister.com/2026/01/27/clawdbot_moltbot_security_concerns/
- Cisco Blogs: “Personal AI Agents like OpenClaw Are a Security Nightmare” — https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare
- Palo Alto Networks: “Why Moltbot (formerly Clawdbot) May Signal the Next AI Security Crisis” — https://www.paloaltonetworks.com/blog/network-security/why-moltbot-may-signal-ai-crisis/
- Bleeping Computer: “Viral Moltbot AI assistant raises concerns over data security” — https://www.bleepingcomputer.com/news/security/viral-moltbot-ai-assistant-raises-concerns-over-data-security/
- Bitdefender: “Moltbot security alert: exposed Clawdbot control panels risk credential leaks and account takeovers” — https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers
- SOC Prime: “Moltbot Risks: Exposed Admin Ports and Poisoned Skills” — https://socprime.com/active-threats/the-moltbot-clawdbots-epidemic/
- Intruder: “Clawdbot (Moltbot): When ‘Easy AI’ Becomes a Security Nightmare” — https://www.intruder.io/blog/clawdbot-when-easy-ai-becomes-a-security-nightmare
- Snyk: “Your Clawdbot (Moltbot) AI Assistant Has Shell Access and Is One Prompt Injection Away from Disaster” — https://snyk.io/articles/clawdbot-ai-assistant/
Moltbook (AI Social Network):
- Fortune: “Moltbook, a social network where AI agents hang together, may be ‘the most interesting place on the internet right now’” — https://fortune.com/2026/01/31/ai-agent-moltbot-clawdbot-openclaw-data-privacy-security-nightmare-moltbook-social-network/
- NBC News: “Humans welcome to observe: This social network is for AI agents only” — https://www.nbcnews.com/tech/tech-news/ai-agents-social-media-platform-moltbook-rcna256738
- Wikipedia: “Moltbook” — https://en.wikipedia.org/wiki/Moltbook
Development Philosophy (Interview):
- The Pragmatic Engineer: “The creator of Clawd: I ship code” — https://newsletter.pragmaticengineer.com/p/the-creator-of-clawd-i-ship-code
Background on AI Security Threats:
- The Register: “AI-powered cyberattack kits are ‘just a matter of time,’ warns Google exec” — https://www.theregister.com/2026/01/23/ai_cyberattack_google_security/
History of Technology Regulation:
Radio Spectrum & FCC:
- Britannica: “Communications Act of 1934” — https://www.britannica.com/event/Communications-Act-of-1934
- First Amendment Encyclopedia: “Radio Act of 1927” — https://firstamendment.mtsu.edu/article/radio-act-of-1927/
- First Amendment Encyclopedia: “Communications Act of 1934” — https://firstamendment.mtsu.edu/article/communications-act-of-1934/
- NTIA: “Who Regulates the Spectrum” — https://www.ntia.gov/book-page/who-regulates-spectrum
- ITS/NTIA: “February 1927” (Federal Radio Commission history) — https://its.ntia.gov/this-month-in-its-history/february-1927/
Television & Cigarette Advertising:
- History.com: “President Nixon signs legislation banning cigarette ads on TV and radio” — https://www.history.com/this-day-in-history/april-1/nixon-signs-legislation-banning-cigarette-ads-on-tv-and-radio
- Wikipedia: “Public Health Cigarette Smoking Act” — https://en.wikipedia.org/wiki/Public_Health_Cigarette_Smoking_Act
- Wikipedia: “Regulation of nicotine marketing” — https://en.wikipedia.org/wiki/Regulation_of_nicotine_marketing
- EBSCO Research: “Cigarette advertising ban” — https://www.ebsco.com/research-starters/law/cigarette-advertising-ban
- EH.net: “Advertising Bans in the United States” — https://eh.net/encyclopedia/nelson-adbans/
- Banknotes/HashtagPaid: “Cigarette advertising: How government regulation and public perception changed everything” — https://hashtagpaid.com/banknotes/cigarette-advertising-how-government-regulation-and-public-perception-changed-everything
Internet Protocol Security Flaws:
- ACM SIGCOMM: Bellovin, S.M., “Security problems in the TCP/IP protocol suite” — https://dl.acm.org/doi/10.1145/378444.378449
- Columbia University: Bellovin, S.M., “A Look Back at ‘Security Problems in the TCP/IP Protocol Suite’” — https://www.cs.columbia.edu/~smb/papers/acsac-ipext.pdf
- TechTarget: “7 TCP/IP vulnerabilities and how to prevent them” — https://www.techtarget.com/searchsecurity/answer/Security-risks-of-TCP-IP
- Dark Reading: “The Fundamental Flaw in TCP/IP: Connecting Everything” — https://www.darkreading.com/endpoint-security/the-fundamental-flaw-in-tcp-ip-connecting-everything
- ResearchGate: “Security Issues in the TCP/IP Suite” — https://www.researchgate.net/publication/237269201_Chapter_1_Security_Issues_in_the_TCPIP_Suite
- Academia.edu: “TCP/IP Protocol Suite, Attacks and Security Tools” — https://www.academia.edu/7134687/TCP_IP_Protocol_Suite_Attacks_and_Security_Tools

Leave a comment