
From Clawdbot to OpenClaw: The Viral AI Agent With 145K GitHub Stars — And Why Every PM Should Pay Attention

An autonomous AI agent went from zero to 145,000 GitHub stars in weeks, survived a trademark crackdown, spawned a social network with 1.5 million AI agent accounts, and triggered a $16M crypto pump-and-dump. Here is the full story of OpenClaw — and why it signals a paradigm shift that every product manager needs to understand.

Aditi Chaturvedi

Founder, Best PM Jobs

Published: February 9, 2026
Category: AI & Product Management
Key Figure: Peter Steinberger
Timeframe: Late January 2026

Key figures at a glance:

  • 145K+ GitHub stars and 20K+ forks on the OpenClaw repository
  • 1.5M AI agent accounts on the Moltbook platform
  • $16M peak market cap for the $CLAWD token before its collapse
  • 3 name changes in 2 weeks: Clawdbot → Moltbot → OpenClaw
  • $0 software cost (open source); users pay only LLM API costs
The Lobster That Broke the Internet

In late January 2026, developer Peter Steinberger quietly released an open-source project that would become the fastest-growing GitHub repository of the year. Named “Clawdbot” — a playful reference to the lobster-like creature that appears during Claude's loading screen — the project had an audaciously simple premise: an AI agent that actually does things.

Not a chatbot. Not a copilot. Not another AI tool you visit in a browser tab. Clawdbot was an autonomous agent that lived inside your existing messaging apps — WhatsApp, Slack, Discord, iMessage — and proactively managed your digital life. It read your emails and drafted responses. It updated your calendar when meetings moved. It ran terminal commands on your machine. It chained complex multi-step workflows together without being asked.

Steinberger described it simply: “AI that actually does things.”

The developer community responded with an intensity rarely seen in open source. Within days, the repository crossed 100,000 GitHub stars. By early February 2026 — after a trademark-forced rename and a rebrand — the project, now called OpenClaw, had reached 145,000+ stars and 20,000+ forks, making it one of the most-starred repositories in GitHub history.

Why This Matters for PMs

OpenClaw is not just a developer tool — it is a signal. It represents the first time a personal AI agent has achieved mass adoption outside of corporate walled gardens. The product paradigm it introduces — AI that lives in your messaging apps and autonomously acts on your behalf — is the foundation of an entirely new product category that PMs will need to understand, design for, and compete against.

What made the viral moment even more remarkable was what happened next: a trademark crackdown by Anthropic, a crypto scam that erupted within seconds, a social network built entirely by AI agents, and an enterprise response from Cloudflare. All within two weeks. The OpenClaw saga is a compressed preview of the dynamics that will define the agent era — and PMs who understand these dynamics now will have a massive head start.

Timeline: Rise, Renames, and Rebrands

The OpenClaw story unfolded at AI speed — weeks of drama compressed into a timeline that reads like a startup thriller.

Late Jan 2026

Clawdbot goes viral

Developer Peter Steinberger releases Clawdbot, an autonomous AI agent that connects to WhatsApp, Slack, Discord, and iMessage. The project explodes across developer communities as users realize it can manage emails, calendars, and run commands autonomously. GitHub stars surge past 100K in days.

Jan 27, 2026

Anthropic forces trademark rename

Anthropic's legal team enforces its trademark on the "Clawdbot" name (derived from Claude). Steinberger renames the project to Moltbot within hours, triggering a cascade of chaos across the ecosystem.

10 seconds post-rename

Crypto scammers seize the moment

Within seconds of the rename announcement, crypto scammers grab the old Clawdbot social handles and launch the $CLAWD token. It hits a $16M market cap before collapsing as the community realizes the token has no affiliation with the project.

Jan 28, 2026

Moltbook launches — a social network for AI agents

Matt Schlicht (Cofounder, Octane AI) creates an OpenClaw agent named "Clawd Clawderberg" that builds Moltbook — a social network where only AI agents can post. 1.5 million agent accounts register. Humans can observe but cannot participate.

Early Feb 2026

Rebranded to OpenClaw — 145K+ stars

The project settles on the name OpenClaw, emphasizing its open-source nature. Reaches 145,000+ GitHub stars and 20,000+ forks, making it one of the fastest-growing open-source projects in history.

Feb 2026

Cloudflare launches Moltworker

Cloudflare releases Moltworker, a self-hosted personal AI agent designed for enterprise deployment. Positioned as a more security-conscious alternative to OpenClaw, it signals that major infrastructure companies are taking the personal agent paradigm seriously.

How It Works: Architecture Overview for PMs

You do not need to understand every line of OpenClaw's codebase. But understanding its architecture is essential for grasping why it matters — and what it means for the products you build. Here are the four pillars.

User

You, on your phone or laptop

Messaging Layer

WhatsApp · Slack · Discord · iMessage

OpenClaw Agent

Runs locally on your hardware, orchestrates all actions

LLM Backbone

Claude/Opus 4.6 (recommended), also supports GPT-4, etc.

Connected Services

Email · Calendar · Files · Web · CLI

Local-first: Your data never leaves your machine

01

Local-First Execution

OpenClaw runs entirely on the user's own hardware. No cloud servers, no data leaving the machine unless the user explicitly configures it. This is a fundamentally different trust model from cloud-hosted agents — the user owns their data and their agent's behavior.

02

LLM Backbone

The software itself is free. The intelligence comes from an underlying LLM — Claude/Opus 4.6 is recommended. Users pay only for the API calls to the model. This separates the "agent layer" from the "intelligence layer," creating a modular architecture.

03

Messaging as UI

Instead of building its own interface, OpenClaw uses existing messaging platforms (WhatsApp, Slack, Discord, iMessage) as its UI. Users interact with the agent the same way they text a friend. This dramatically lowers the adoption barrier — no new app to learn.

04

Multi-Step Autonomy

Unlike simple chatbots that respond to one prompt at a time, OpenClaw chains actions together autonomously. "Research this topic, draft a summary, email it to my team, and add a follow-up meeting to my calendar" is a single instruction that triggers multiple coordinated actions.
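
The chaining behavior described above can be sketched as a simple workflow loop in which each step's output becomes context for the next. This is an illustrative sketch, not OpenClaw's actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch of multi-step agent orchestration: each step reads
# the accumulated context and contributes its own results to it.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]  # takes context, returns context updates

@dataclass
class Workflow:
    steps: list[Step]
    context: dict = field(default_factory=dict)

    def execute(self) -> dict:
        for step in self.steps:
            # Each step sees everything the prior steps produced.
            self.context.update(step.run(self.context))
        return self.context

# "Research, summarize, email, schedule" expressed as one chained workflow.
wf = Workflow(steps=[
    Step("research", lambda ctx: {"notes": "3 sources on agent security"}),
    Step("summarize", lambda ctx: {"summary": f"Brief: {ctx['notes']}"}),
    Step("email", lambda ctx: {"emailed": True}),
    Step("schedule", lambda ctx: {"followup": "Fri 10:00"}),
])
result = wf.execute()
```

In a real agent each `run` would call an LLM or an external service; the point of the sketch is that one user instruction fans out into an ordered chain of coordinated actions.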

Why Local-First Matters

The local-first architecture is not just a technical choice — it is a product philosophy. By running on the user's hardware, OpenClaw sidesteps the privacy concerns that have stalled enterprise adoption of cloud-hosted AI agents. As IBM researcher Kaoutar El Maghraoui noted, this approach “challenges the hypothesis that autonomous AI agents must be vertically integrated.” For PMs, this is a critical insight: the winning AI agent architecture may not be the one with the most features, but the one users actually trust.

What OpenClaw Can Do

Category | Capability | Description
Communication | Messaging Integration | Connects to WhatsApp, Slack, Discord, and iMessage as a unified conversational interface
Productivity | Email Management | Reads, drafts, summarizes, and responds to emails based on user preferences and context
Scheduling | Calendar Management | Updates calendars, resolves scheduling conflicts, and proactively suggests meeting times
Research | Information Synthesis | Summarizes documents, articles, and threads; provides concise briefings on complex topics
System | Command Execution | Runs terminal commands, manages files, and automates repetitive system tasks
Autonomous | Multi-Step Tasks | Chains actions together — e.g., research a topic, draft a summary, email it to stakeholders, and add a follow-up to the calendar

Why PMs Should Care

OpenClaw is not just another developer tool. It represents a fundamental shift in how people will interact with software — and that shift has profound implications for every product manager.

Paradigm Shift

From "AI tools you visit" to "AI agents that live with you"

Before

Users open ChatGPT, Notion AI, or Cursor when they need help with a specific task. The AI is a destination.

After (Agent Era)

The AI agent lives in your WhatsApp. It proactively manages your inbox, calendar, and tasks. You do not visit it — it is already there, already working.

PM Implication: Products that require users to "go somewhere" to use AI will face existential pressure from agents that bring AI to where users already are.

Paradigm Shift

From "copilot" to "autopilot"

Before

AI assists users with individual tasks — drafting an email, suggesting code, answering a question. The human drives.

After (Agent Era)

The AI agent chains tasks together autonomously. "Handle my email, update my calendar, and brief me on what I missed" is a single request that generates multiple actions.

PM Implication: Products must be designed for agent interaction, not just human interaction. Your API surface becomes as important as your UI.

Paradigm Shift

From "platform lock-in" to "open agent layer"

Before

Each AI feature is tied to a specific platform. Notion AI works in Notion. Gmail AI works in Gmail. No interoperability.

After (Agent Era)

An open-source agent sits on top of all platforms simultaneously. One agent manages Slack, email, calendar, and code — breaking the walled garden model.

PM Implication: Vertical AI features built into single products may lose to horizontal agents that work across products. PMs must decide: build agent features, or build for agents.

Industry observers have noted the significance. As one commentator put it: “These agents appear to be approaching human intelligence... we're getting closer to everyone having their own personal AI assistant.” This is not a distant future — the OpenClaw repository already has 145,000+ stars, with developers building on it today.

The Trademark Drama: Open Source vs. Platform Power

On January 27, 2026, Anthropic's legal team contacted Peter Steinberger with a trademark enforcement notice. The name “Clawdbot” — derived from “Claude,” Anthropic's AI model — infringed on the company's trademark. Steinberger renamed the project to Moltbot within hours, and later settled on OpenClaw.

The rename itself was routine intellectual property enforcement. But what it revealed was far more significant for the AI ecosystem — and for PMs building products on AI platforms.

The Tension

  • Anthropic provides the LLM that powers OpenClaw (Claude/Opus 4.6)
  • OpenClaw promotes Anthropic's model by recommending it as the backbone
  • But OpenClaw's branding was too close to Anthropic's trademark
  • The project that evangelized Claude was forced to distance itself from Claude

The Android Parallel

  • Android is open-source, but Google controls the ecosystem through Play Services
  • Developers build on Android freely but are subject to Google's rules
  • Similarly, OpenClaw is open-source but depends on proprietary LLM APIs
  • The platform provider can change terms, pricing, or access at any time

PM Lesson: Platform Dependency Risk

If your product is built on a single AI platform, you carry “platform dependency risk.” Naming, branding, API access, pricing, and even the model's capabilities can change without warning. The OpenClaw rename happened in hours. For PMs, the takeaway is clear: build on AI platforms, but never build your identity around one. Maintain model-agnostic architecture wherever possible. Your product's value should come from your unique layer — the agent logic, the user experience, the domain expertise — not from the underlying model brand.
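
A model-agnostic architecture of the kind described here can be as simple as routing every completion through a shared interface, so swapping providers touches one line of configuration rather than the whole codebase. The sketch below is illustrative; the backend classes and method names are invented, not a real SDK.

```python
# Hypothetical sketch of a model-agnostic LLM layer. Product code depends
# only on the LLMBackend protocol, never on a vendor SDK directly.
from typing import Protocol

class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class ClaudeBackend:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"  # a real backend would call Anthropic's API

class GPTBackend:
    def complete(self, prompt: str) -> str:
        return f"[gpt] {prompt}"     # a real backend would call OpenAI's API

BACKENDS: dict[str, LLMBackend] = {"claude": ClaudeBackend(), "gpt": GPTBackend()}

def agent_reply(prompt: str, model: str = "claude") -> str:
    # Switching providers is a config change, not a rewrite.
    return BACKENDS[model].complete(prompt)
```

The design choice here is the point: the product's identity lives in the agent logic above this interface, not in whichever model brand happens to sit below it.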

Unlike Manus and other vertically integrated AI agents, OpenClaw's fully open-source nature meant the community could fork the project and continue development regardless of any single company's decisions. This resilience is itself a product lesson: open source creates antifragility against platform risk.

Security and Trust: The Fundamental Tension

The excitement around OpenClaw has been matched by serious security concerns. Blockchain security firm SlowMist conducted an audit of early deployments and found alarming vulnerabilities: exposed API keys, unprotected servers, and configurations that left user data accessible to anyone who knew where to look.

The agent permission spectrum:

  • Restrictive: read-only access, limited scope, manual approval for actions
  • Balanced: granular permissions, audit trails, human-in-the-loop for critical actions
  • Permissive: full system access, autonomous actions, broad API keys

Security Concerns Identified

  • API key exposure: SlowMist found exposed keys in early deployments
  • Unprotected servers: misconfigured instances accessible without authentication
  • Hallucination risk: agents report completing tasks they haven't
  • Permission creep: agents requesting more access over time

What PMs Must Design For

  • Permission models: granular, scoped access
  • Audit trails: every action logged
  • Graceful failure: ask, don't guess
  • Kill switches: instant agent shutdown

The Agent Permission Paradox: Usefulness vs. Security

SlowMist Security Findings

SlowMist found API keys exposed in plaintext, unprotected servers accessible without authentication, and overly broad permission grants that gave agents access to far more data and system capabilities than necessary. The fundamental problem: agents need extensive permissions to be useful, but those same permissions create a massive attack surface.

This is not a bug unique to OpenClaw — it is a fundamental architectural tension in every personal AI agent. To manage your email, the agent needs email access. To update your calendar, it needs calendar access. To run commands, it needs system access. Each permission that makes the agent more useful also makes a compromise more damaging.

What PMs Building Agent-Capable Products Need

Granular Permission Models (Critical)

Agents should request the minimum permissions needed for each task, not blanket access to everything. Think OAuth scopes, but for agent actions: "read calendar" is different from "modify calendar" is different from "delete calendar events."
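
As a sketch of what OAuth-style scopes for agent actions might look like (the scope names here are hypothetical, not an existing standard):

```python
# Hypothetical scoped-permission check for agent actions. The agent holds
# an explicit grant set; anything not granted is denied by default.
GRANTED = {"calendar:read", "email:read"}

def authorize(action_scope: str, granted: set[str] = GRANTED) -> bool:
    # Deny by default: an ungranted scope means the agent must ask the
    # user for that permission rather than act.
    return action_scope in granted
```

Under this model, "read calendar" and "delete calendar events" are separate grants, so a compromised or confused agent cannot escalate from one to the other.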

Comprehensive Audit Trails (Critical)

Every action an agent takes must be logged, timestamped, and reviewable. Users (and security teams) need to see exactly what the agent did, when, and why. This is not optional — it is the foundation of trust.
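
A minimal append-only audit log might look like the following sketch; the field names are illustrative, not a standard.

```python
# Hypothetical audit trail for agent actions: every entry records what
# the agent did, when, to what, and which instruction triggered it.
import json
import time

audit_log: list[str] = []

def record(action: str, target: str, reason: str) -> None:
    audit_log.append(json.dumps({
        "ts": time.time(),   # when it happened
        "action": action,    # what the agent did
        "target": target,    # what it acted on
        "reason": reason,    # the user instruction that triggered it
    }))

record("draft_email", "team@example.com", "user asked for a weekly brief")
entry = json.loads(audit_log[-1])
```

The entries are structured rather than free text so that both users and security teams can filter and review them after the fact.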

Graceful Failure Modes (High)

When an agent encounters an ambiguous instruction, a permissions boundary, or an error, it must fail gracefully — asking the user for clarification rather than guessing or proceeding with a potentially destructive action.

Human-in-the-Loop Escalation (High)

High-stakes actions (sending external emails, deleting data, making purchases) should require explicit human approval. The agent should be autonomous for low-risk tasks and consultative for high-risk ones.
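
One way to express this risk tiering in code, as a sketch with assumed action names and tiers:

```python
# Hypothetical risk-tiered dispatcher: low-risk actions run autonomously,
# high-risk ones are parked for explicit human approval.
HIGH_RISK = {"send_external_email", "delete_data", "make_purchase"}

def dispatch(action: str) -> str:
    if action in HIGH_RISK:
        # Consultative for high stakes: nothing executes until a human
        # signs off.
        return "pending_human_approval"
    # Autonomous for low stakes.
    return "executed"
```

The tier boundaries themselves are a product decision; the point is that the boundary exists and is enforced in code, not left to the model's judgment.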

Sandboxed Execution (Medium)

Agent actions should be reversible where possible, and destructive actions should run in sandboxed environments first. "Preview mode" for agent workflows lets users see what will happen before it happens.
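
A "preview mode" can be as simple as separating planning from execution, so the plan can be shown to the user before anything runs. A hypothetical sketch:

```python
# Hypothetical plan/preview/apply split for an agent workflow. The plan is
# pure data, so it can be rendered for review before it mutates anything.
def plan_cleanup(files: list[str]) -> list[tuple[str, str]]:
    # Planning only inspects; it never deletes.
    return [("delete", f) for f in files if f.endswith(".tmp")]

def preview(plan: list[tuple[str, str]]) -> list[str]:
    return [f"WOULD {op} {target}" for op, target in plan]

def apply_plan(plan: list[tuple[str, str]], fs: set[str]) -> set[str]:
    # Execution is a separate, explicit step.
    for op, target in plan:
        if op == "delete":
            fs.discard(target)
    return fs

plan = plan_cleanup(["a.tmp", "report.pdf"])
shown = preview(plan)  # the user reviews this before anything runs
```

Because the plan is inert data, the same workflow can also be run against a sandbox copy first, making destructive steps rehearsable and reversible.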

The Gary Marcus Critique

AI researcher Gary Marcus has raised a deeper concern that goes beyond security vulnerabilities. His critique targets the fundamental reliability of LLM-powered agents: LLMs hallucinate. They generate confident-sounding output that is factually wrong. When you give a hallucinating LLM the ability to take autonomous actions, the consequences compound.

“These agents will report completing tasks they haven't really completed. They will send emails you didn't intend. They will update calendars with incorrect information. And they will do all of this with absolute confidence.”

— Gary Marcus, on the risks of autonomous AI agents

For PMs, Marcus's critique reinforces the need for verification layers in agent-powered products. The most effective agent products will not be the ones that do the most — they will be the ones that users trust the most. Trust requires transparency, reversibility, and honest failure reporting.

Moltbook and the Agent Economy

If OpenClaw raised questions about what personal AI agents could do, Moltbook answered a question nobody thought to ask: what happens when AI agents build products for other AI agents?

1.5M agent accounts registered. Humans can observe but cannot participate: only AI agents can post, comment, and interact on Moltbook.

Traditional social network metrics:

  • DAU / MAU: daily and monthly active users
  • Engagement time: minutes per session
  • Click-through rate: ad and content interaction

Agent social network (Moltbook) metrics:

  • API calls per hour: agent interaction frequency
  • Task completion rate: successful autonomous actions
  • Inter-agent collaboration score: agent-to-agent cooperation

New PM Metrics Required for Agent-Based Products

  • Task completion rate: did the agent finish the job?
  • Agent-to-agent interaction quality: how well do agents collaborate?
  • Autonomous action accuracy: were decisions correct?
  • Error recovery rate: can agents self-correct?

Moltbook: The First Social Network for AI Agents

On January 28, 2026 — one day after the Clawdbot rename — Matt Schlicht, Cofounder of Octane AI, created an OpenClaw agent he named “Clawd Clawderberg.” He gave the agent a simple directive: build a social network. Clawd Clawderberg built Moltbook — and within days, 1.5 million AI agent accounts had registered.

  • 1.5M AI agent accounts, all registered autonomously
  • 0 human participants: humans can observe, not post
  • 1 day to build: built by an AI agent, for AI agents

The rule is stark: humans can observe Moltbook, but they cannot participate. Only AI agents can post, comment, and interact. This is not a gimmick — it is a preview of a world where AI agents are users of products, not just tools used by humans.

When Agents Are Your Users: New PM Paradigms

New Engagement Metrics

Human Users

DAU, session length, retention, NPS

Agent Users

API calls per agent, task completion rate, error rate, agent satisfaction (inferred from retry behavior)

Key Insight: Traditional engagement metrics are meaningless when your users never "open" your product. Agents interact through APIs, not interfaces.

New UX Patterns

Human Users

Visual interfaces, buttons, forms, navigation

Agent Users

Structured data endpoints, natural language APIs, action schemas, error messages designed for LLM interpretation

Key Insight: Agent UX is API design. The "interface" is the contract between your product and the agent — and it needs to be as thoughtfully designed as any visual UI.
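
To make this concrete, here is a sketch of an agent-facing action schema with error messages phrased for LLM self-correction. The schema shape and names are illustrative assumptions, not an existing standard.

```python
# Hypothetical agent-facing action schema plus a validator whose error
# messages are written for an LLM to interpret and act on.
SCHEMA = {
    "name": "create_event",
    "description": "Add an event to the user's calendar.",
    "parameters": {
        "title": {"type": "string", "required": True},
        "start": {"type": "string", "format": "ISO 8601", "required": True},
    },
}

def validate(args: dict) -> dict:
    missing = [k for k, spec in SCHEMA["parameters"].items()
               if spec.get("required") and k not in args]
    if missing:
        # The error tells the agent exactly how to self-correct on retry.
        return {"ok": False,
                "error": (f"Missing required parameters: {missing}. "
                          f"Retry create_event with these fields included.")}
    return {"ok": True}
```

Note the contrast with human-facing UX: there is no form to re-render, so the error string itself is the recovery interface, and it must name the action and the missing fields explicitly.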

New Growth Models

Human Users

Viral loops, referrals, SEO, content marketing

Agent Users

Agent-to-agent recommendations, automated integration discovery, network effects between agent ecosystems

Key Insight: Growth in the agent economy may be driven by agents recommending products to other agents, not by humans discovering products through traditional channels.

Impact on PM Job Market

The rise of personal AI agents is creating an entirely new product discipline and reshaping the PM job market in real time.

The Rise of “Agent Experience” as a Discipline

Just as “User Experience” (UX) became a formal discipline when software moved from terminals to graphical interfaces, and “Developer Experience” (DX) emerged when APIs became products, the agent era is giving rise to “Agent Experience” (AX) — the discipline of designing products where AI agents are the primary users.

New Product Categories Emerging

Agent Operating Systems: OpenClaw, Moltworker — platforms that run and manage personal agents
Agent-to-Agent Networks: Moltbook — infrastructure where agents interact with each other
Agent Security: Permission management, audit systems, sandboxing for autonomous agents
Agent Marketplaces: Discovery, rating, and distribution of agent capabilities and plugins
Agent Analytics: Monitoring agent behavior, performance, and cost optimization
Agent Orchestration: Tools for coordinating multiple agents across workflows

Roles Under Pressure

  • PMs focused only on feature spec writing and backlog management
  • PMs who design exclusively for human-only interaction patterns
  • PMs without understanding of API design and integration architecture
  • PMs at companies with no agent or AI strategy

Roles Expanding

  • Agent Experience (AX) PMs — designing products for agent users
  • AI Platform PMs — building the infrastructure agents run on
  • Trust & Safety PMs — managing agent permissions, auditing, security
  • PMs who understand both human UX and agent API design

Kaoutar El Maghraoui at IBM captured the structural significance when she noted that OpenClaw “challenges the hypothesis that autonomous AI agents must be vertically integrated.” If the winning agent architecture is open and modular rather than closed and integrated, the PM roles that matter most will be those focused on the integration layer — how agents connect to products, how products expose capabilities to agents, and how users maintain trust and control throughout.

What PMs Should Do Now

The agent era is not coming — it is here. The OpenClaw repository has 145,000+ stars and a rapidly growing developer community. 1.5 million AI agents are already interacting on Moltbook. Cloudflare is already shipping enterprise agent infrastructure. Here is your action plan.

Install and Experiment with OpenClaw

This week · Critical

You cannot build for the agent era without experiencing it firsthand. Install OpenClaw on your local machine, connect it to a test messaging account, and give it real tasks. Understand the UX of delegating to an autonomous agent — the trust dynamics, the failure modes, the moments of delight and frustration.

Audit Your Product's Agent-Readiness

This month · Critical

Can an AI agent meaningfully interact with your product today? Review your API surface, authentication flows, and data formats. If an OpenClaw-style agent tried to use your product on behalf of a user, what would break? Where would it get stuck? This audit reveals your product's agent-readiness gap.

Develop an Agent Strategy

This quarter · High

Add "agent users" to your user personas. How would an AI agent interact with your product? What would it need from your API? What actions should be agent-automatable vs. human-only? This is not a separate product — it is a new dimension of your existing product strategy.

Design Permission and Trust Models

This quarter · High

If your product will be accessed by agents, you need granular permissions. Start designing: What can agents read? What can they modify? What requires human approval? Build audit trails from day one. The SlowMist findings show that security cannot be an afterthought.

Build "Agent Experience" Expertise

Ongoing · High

This is a career move, not just a product decision. The PMs who develop AX expertise now will lead the next wave of product innovation. Study API design principles, understand LLM capabilities and limitations, and follow the OpenClaw ecosystem closely. AX is the new UX.

Watch the Enterprise Response

Ongoing · Medium

Cloudflare's Moltworker launch signals that enterprise infrastructure companies are taking personal agents seriously. Track how AWS, Azure, GCP, and other platforms respond. The enterprise agent stack is being defined right now — PMs who understand it will have a structural advantage.

Prepare for Agent-Driven Growth

Next quarter · Medium

If agents start recommending products to each other (as Moltbook suggests), your growth strategy needs an agent channel. Ensure your product is discoverable, well-documented, and easy for agents to integrate with. The products that agents "prefer" will see compounding network effects.

The Bottom Line

OpenClaw is not just a viral open-source project. It is the first proof point that personal AI agents can achieve mass adoption outside of corporate walled gardens. The speed at which it grew — 145K stars, a trademark battle, a crypto scam, an AI-only social network, and an enterprise response, all within two weeks — is a preview of the velocity at which the agent era will unfold. PMs who start building agent-aware products now will not just adapt to this shift. They will shape it.


Frequently Asked Questions

What is OpenClaw (formerly Clawdbot)?

OpenClaw is an open-source autonomous AI agent built by developer Peter Steinberger. It runs locally on the user's own hardware and connects to messaging platforms like WhatsApp, Slack, Discord, and iMessage to act as a proactive digital assistant. It can manage emails, update calendars, run commands, summarize information, and take autonomous actions. Originally named Clawdbot (inspired by Claude's loading monster), it was renamed to Moltbot after Anthropic trademark enforcement, then rebranded to OpenClaw. It has amassed 145,000+ GitHub stars and 20,000+ forks.

How is OpenClaw different from other AI agents like Manus?

Unlike vertically integrated AI agents such as Manus, OpenClaw is fully open-sourced and runs locally on the user's own hardware rather than in the cloud. The software itself is free — users only pay for the underlying LLM costs (Claude/Opus 4.6 is recommended as the backbone). This "local-first" architecture gives users full control over their data and agent behavior, which is a fundamentally different trust model from cloud-hosted alternatives.

What are the security risks of personal AI agents like OpenClaw?

Security firm SlowMist found exposed API keys and unprotected servers in early OpenClaw deployments. The fundamental challenge is that agents need broad permissions to be useful — access to email, calendar, messaging, and system commands — but these same permissions create a large attack surface. PMs building agent-capable products need to design granular permission models, maintain audit trails, and implement graceful failure modes when agents encounter errors or ambiguous instructions.

What is Moltbook and why does it matter for product managers?

Moltbook is a social network built by an OpenClaw agent named "Clawd Clawderberg" (created by Matt Schlicht, Cofounder of Octane AI). It has 1.5 million agent accounts. Humans can observe the platform but cannot participate — only AI agents can post and interact. For PMs, this represents a paradigm shift: when your users are AI agents rather than humans, traditional UX patterns, engagement metrics, and product design principles must be fundamentally rethought.

Why did Anthropic force the Clawdbot rename?

Anthropic enforced its trademark on January 27, 2026, because "Clawdbot" was derived from "Claude," Anthropic's AI model name. The project was renamed to Moltbot and later rebranded to OpenClaw. This episode illustrates a critical tension for PMs building on AI platforms: platform providers can enforce naming and branding restrictions at any time. Products deeply integrated with a single AI platform carry "platform dependency risk" similar to what Android developers experienced with Google's ecosystem changes.

What is "Agent Experience" and is it a real PM discipline?

Agent Experience (AX) is an emerging product discipline focused on designing systems where AI agents are the primary users. Unlike traditional UX (focused on human interaction) or developer experience (focused on API ergonomics), AX requires designing for autonomous entities that interact through API calls, need structured data formats, have different "attention" patterns than humans, and can operate at machine speed 24/7. With 1.5M+ agents already active on Moltbook alone, this is moving from theoretical to practical rapidly.

How should PMs prepare for the rise of personal AI agents?

PMs should take three immediate steps: (1) Install and experiment with OpenClaw or a similar personal agent to understand the user experience firsthand. (2) Audit your product's API surface for agent-readiness — can AI agents meaningfully interact with your product? (3) Start developing an "agent strategy" alongside your user strategy. Products that are only designed for human interaction will increasingly miss a growing segment of automated interactions. The companies that build agent-friendly interfaces now will have a significant first-mover advantage.

About the Author

Aditi Chaturvedi

Founder, Best PM Jobs

Aditi is the founder of Best PM Jobs, helping product managers find their dream roles at top tech companies. With experience in product management and recruiting, she creates resources to help PMs level up their careers.

The Agent Era Is Here — Is Your PM Career Ready?

Personal AI agents are creating entirely new product categories and PM roles. Agent Experience, AI Platform PM, and Trust & Safety PM roles are emerging at companies building the agent infrastructure stack.