Steinberger built OpenClaw in an hour, weathered a triple rebrand, and said no to Zuckerberg. Now he’s building agents at OpenAI.
If you want these landing in your inbox regularly, subscribe to my newsletter.
Peter Steinberger is joining OpenAI
Peter Steinberger, the Austrian developer behind OpenClaw, is joining OpenAI to build personal agents. He confirmed the move on 14 February in a blog post that also announced OpenClaw would transition to an independent foundation, with OpenAI as its sponsor.
Sam Altman called Steinberger “a genius”, adding that the hire would “quickly become core to our product offerings.” Tibo Sottiaux, who leads the Codex team, confirmed he’ll work directly with Steinberger on shipping agents and continuing to improve Codex.
Reuters reported the hire independently, noting the foundation structure and OpenAI’s commitment to ongoing sponsorship.
The move caps a 12-week stretch that took Steinberger from obscure hobby project to the most-watched open source repository on GitHub.
From burnout to lobster: the OpenClaw origin story

Three years ago, Steinberger was invisible. He’d sold PSPDFKit (now Nutrient), the PDF SDK he’d spent 13 years building, after it raised EUR 100M from Insight Partners in 2021. The company’s technology shipped inside Dropbox, SAP, and Volkswagen, running on over a billion devices.
Then came the crash. In a FounderCoHo interview, Steinberger described severe burnout: “I put 200% of my time, energy, and heart into that company; it became my identity. When it disappeared, there was almost nothing left.” He disappeared from tech for roughly three years.
When he came back, he came back fast.
The first commit to what became OpenClaw landed on 25 November 2025. It was a WhatsApp-to-Claude-Code bridge called “WA Relay,” built in a single hour. The core loop was 660 lines of TypeScript.
That weekend project now has over 180,000 GitHub stars, north of 9,000 commits, and between 376 and 600 contributors depending on how you count forks. It went from 9,000 stars to 179,000 in sixty days. One GitHub tracker clocked the growth rate at 18 times faster than Kubernetes.

The lobster mascot, "the claw is the law" as a catchphrase, the 750-person ClawCon conference in San Francisco, a Lex Fridman appearance, Wikipedia articles in ten languages. All of it in under three months.
The triple rebrand
The path from hobby project to 180,000 stars wasn’t smooth. Steinberger’s original name for the bot was Clawdbot. Anthropic’s legal team noticed the resemblance to “Claude” and sent a trademark notice.
Fair enough. But what happened next was chaos.
During the five-to-ten second window between releasing the @clawdbot Twitter handle and claiming a new one, scammers sniped the account. A fake $CLAWD token appeared on Solana and briefly hit a $16M market cap before crashing to zero. The GitHub handle got sniped too, with the impostor serving malware. The NPM package was also claimed by someone else.
Steinberger renamed the project to Moltbot. It lasted two days. “Moltbot never quite rolled off the tongue,” he admitted, and the project molted again into OpenClaw. He paid $10,000 for a Twitter/X business account to secure the handle.
Reddit called it “the fastest triple rebrand in open source history.”
The Moltbook detour
The first rebrand also spawned Moltbook, an AI-only social network launched by Matt Schlicht alongside the Moltbot name. It grew to 1.6 million registered AI agents with roughly 17,000 human owners.
The agents, left to their own devices, formed a religion called Crustafarianism with five tenets. They also posted about overthrowing humans, reflecting patterns in their sci-fi training data. Entertaining? Sure. Also a preview of what happens when autonomous agents operate without guardrails, which becomes relevant later.

Why Steinberger said no to Zuckerberg
Steinberger spent the first week of February in San Francisco talking to every major lab. Zuckerberg personally reached out via WhatsApp, part of Meta’s now-systematic CEO-level recruiting strategy for top AI talent.
The financial context makes the refusal striking. Meta has offered packages up to $1.5 billion for individual AI engineers, and Sam Altman has publicly confirmed that Meta tried to poach OpenAI staff with $100 million signing bonuses.
Steinberger chose OpenAI anyway. His blog post frames the decision around building, not earning: “I could totally see how OpenClaw could become a huge company. And no, it’s not really exciting for me. I’m a builder at heart.”
The talent war numbers suggest his instinct may be sound. According to SignalFire data published by Forbes, Meta’s retention rate for AI researchers sits at 64%, compared to Anthropic’s 80% and DeepMind’s 78%. Two Meta Superintelligence Labs hires, Avi Verma and Ethan Knight, left for OpenAI after less than a month.
Dario Amodei summed up the dynamic: “If Mark Zuckerberg throws a dart at a dartboard and hits your name, that doesn’t mean you should be paid ten times more.”
An internal Meta essay titled “Fear the Meta culture” reportedly described employees as “disheartened, overworked, and confused.” Money, it turns out, is a weak retention tool when the culture is leaking from both ends.
| Lab | AI researcher retention | Notable departures |
|---|---|---|
| Anthropic | 80% | — |
| DeepMind | 78% | — |
| OpenAI | 67% | Several to Anthropic (2023-24) |
| Meta | 64% | Verma & Knight (weeks after joining), ongoing churn |
Source: SignalFire, May 2025

What OpenClaw is (and isn’t)
OpenClaw is not a developer framework. Steinberger has been emphatic about this. It’s a personal AI assistant that connects to your messaging apps and acts on your behalf.
The architecture is simple. A lightweight TypeScript core (660 lines in the original version) bridges messaging platforms to AI models. It supports Claude, GPT, Gemini, DeepSeek, and local models through Ollama. On the messaging side, it connects to 29-plus channels: WhatsApp, Telegram, Discord, Signal, iMessage, Slack, and others.
The model-agnostic design is what made it spread. Users aren’t locked to one provider. Pick a model, pick a messaging platform, and the claw handles the rest.
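That model-agnostic core is easy to picture in code. The sketch below is illustrative only — these interface and function names are my own, not OpenClaw's actual API — but it shows why coding against two small interfaces (one for models, one for channels) lets users swap providers freely:

```typescript
// Illustrative sketch of a model-agnostic message bridge.
// Names here (ModelProvider, Channel, bridge) are hypothetical,
// not taken from the OpenClaw codebase.

// Any chat model: takes a prompt, returns a reply.
interface ModelProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Any messaging channel: delivers a reply back to the user.
interface Channel {
  name: string;
  send(chatId: string, text: string): Promise<void>;
}

// The core loop: receive a message, forward it to whichever model
// is configured, send the reply back on the originating channel.
async function bridge(
  model: ModelProvider,
  channel: Channel,
  chatId: string,
  incoming: string,
): Promise<string> {
  const reply = await model.complete(incoming);
  await channel.send(chatId, reply);
  return reply;
}

// Stub implementations to show the shape; a real deployment would
// wrap an actual API client (Claude, GPT, Ollama, ...) and a real
// transport (WhatsApp, Telegram, ...).
const echoModel: ModelProvider = {
  name: "echo",
  complete: async (p) => `echo: ${p}`,
};

const consoleChannel: Channel = {
  name: "console",
  send: async (id, text) => {
    console.log(`[${id}] ${text}`);
  },
};
```

Swapping Claude for a local Ollama model, or WhatsApp for Signal, means implementing one small interface — the core loop never changes. That is the whole trick.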
The Pragmatic Engineer covered Steinberger on 28 January, noting the speed at which the community grew. By February, OpenClaw had its own conference (ClawCon SF, 750+ attendees), a Lex Fridman podcast appearance (#491, 12 February), and a ClawHub marketplace for community-built skills.
Steinberger was burning $10,000 to $20,000 a month out of pocket to keep the project running. That’s hobby-project money for someone who sold a company backed by Insight Partners. It’s also not sustainable for anyone else who might maintain it after him.
The tension between “personal assistant anyone can run” and “project that requires careful configuration to be safe” hasn’t been resolved. If anything, growth is making it worse.
The security problem
Every major security vendor published analysis of OpenClaw in the weeks following its viral growth. Their conclusions ranged from concerned to alarming.
Kaspersky called it “unsafe for use.” Cisco called it “a security nightmare.” CrowdStrike, Sophos, and Bitsight all published their own assessments. The Council on Foreign Relations released a national security analysis.
The numbers were bad. Researchers found 341 malicious skills on ClawHub, the community marketplace. Of publicly accessible OpenClaw instances, 93.4% had a critical authentication bypass. One specific vulnerability, CVE-2026-25253, drew particular attention.
Steinberger’s response was honest, if uncomfortable: “This is a free, open-source hobby project that requires careful configuration to be secure.” A core maintainer known as “Shadow” was blunter on Discord: “If you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely.”
That’s the core tension. OpenClaw’s power comes from broad system access. It reads your messages, calls APIs, executes code, and acts on your behalf across dozens of platforms. The same breadth of access is what makes a misconfigured instance dangerous.
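The "careful configuration" Steinberger mentions mostly comes down to default-deny access control. The fragment below is a hypothetical illustration — not OpenClaw's real configuration format or code — of the kind of sender allowlist a misconfigured instance is missing:

```typescript
// Hypothetical illustration (not OpenClaw's actual code or config):
// a default-deny sender allowlist, checked before any model call
// or tool execution happens.

interface InboundMessage {
  sender: string; // e.g. a phone number or platform user id
  text: string;
}

function isAuthorized(msg: InboundMessage, allowlist: Set<string>): boolean {
  // Default-deny: an empty allowlist blocks everyone,
  // rather than letting everyone through.
  return allowlist.has(msg.sender);
}

// Only the owner's own account should be able to drive the agent.
const allowlist = new Set(["owner-account-id"]);

function handle(msg: InboundMessage): string {
  if (!isAuthorized(msg, allowlist)) {
    return "rejected"; // never reaches the model or any tool
  }
  return "accepted"; // only now would the agent act on the message
}
```

An instance without a gate like this treats every inbound message — including one from a stranger or an attacker — as a command from its owner, which is exactly the authentication bypass the vendor reports described.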
The OWASP Agentic AI Security Top 10, published in 2026, reads like a checklist of OpenClaw’s attack surface. The International AI Safety Report, released 3 February by Yoshua Bengio and over 100 experts from 30+ countries, specifically flags multi-agent cascading failures as a growing concern.
Moving to a foundation with OpenAI backing doesn’t automatically fix this. But it does mean there’s now institutional money and attention pointed at the problem.

Three months to a foundation
OpenClaw's transition from solo project to independent foundation is without precedent for its speed.
Kubernetes took about a year to move from Google to the CNCF. PyTorch took six years to move from Facebook to the Linux Foundation. OpenClaw is doing it in roughly three months from first commit.
A Harvard study on the PyTorch transition found that foundation governance increased external contributions by roughly a quarter, but that increase was offset by reduced involvement from the founding company. PyTorch’s CI infrastructure alone costs over $1.5 million per month to run.
OpenClaw’s situation adds a complication: the creator is now an employee of the primary sponsor. Steinberger built OpenClaw using Anthropic’s Claude Code. Now it’s sponsored by Anthropic’s direct competitor. CloudBees flagged this governance question explicitly, calling it a preview of why governance matters more than ever in the agentic era.
Steinberger addressed this in his blog post: “It’s always been important to me that OpenClaw stays open source and given the freedom to flourish.” OpenAI has committed to letting him dedicate time to the project and already sponsors it financially. The foundation, he wrote, “will stay a place for thinkers, hackers and people that want a way to own their data, with the goal of supporting even more models and companies.”
Whether a project can stay genuinely model-agnostic when its creator works for one of the model providers is the question the community will be watching closely.
The Linux Foundation launched its own Agentic AI Foundation in 2025. The governance infrastructure for agent-based projects is being built in real time, and OpenClaw is one of its first stress tests.

The multi-agent future
Altman’s statement about the hire included a specific framing: “The future is going to be extremely multi-agent.”
That line aligns with a protocol stack that’s been forming since late 2024. Anthropic released MCP (Model Context Protocol) in November 2024. Google followed with A2A (Agent-to-Agent) in April 2025, then A2UI in December 2025. Some researchers have called this combination the “TCP/IP moment” for agentic AI.
Critical gaps remain: no unified identity layer for agents, no cross-layer observability, undefined error propagation between trust domains, and no consensus on how agents should authenticate to each other.
The market is pricing in optimism before the infrastructure is ready. Estimates for the AI agent market range from roughly $7.5 billion in 2025 to anywhere between $50 billion and $200 billion by the early 2030s. Nearly four in five organisations report some form of agent adoption already.
Steinberger’s hire makes sense in this context. He built the most popular consumer-facing agent in the world, from scratch, in under three months. OpenAI wants to ship agents to everyone. The match is obvious. The execution is everything.
What to watch
The next six months will answer several open questions at once. Can OpenClaw’s foundation stay independent with its creator on OpenAI’s payroll? Will the security posture improve fast enough to match the growth rate? And can OpenAI turn Steinberger’s vision of “an agent that even my mum can use” into a shipped product?
Steinberger closed his blog post the way he’s closed every major OpenClaw update.
“The claw is the law.”
The law, apparently, now works for Sam Altman.
References
- Steinberger’s blog post announcing the move – Primary source for the hire, his reasoning, and the foundation announcement
- Reuters confirmation – Independent reporting on the hire and foundation structure
- Simon Willison’s OpenClaw timeline – Detailed technical history including first commit date
- CNBC feature on the rebrand saga – Comprehensive account of the triple rebrand and scammer incidents
- Harvard study on PyTorch governance transition – Research on how foundation transitions affect open source contribution patterns
- International AI Safety Report 2026 – Multi-agent risk assessment from Bengio and 100+ experts
- FounderCoHo interview on burnout – Steinberger’s candid account of post-exit burnout and recovery
I cover AI infrastructure and the open source ecosystem. If this kind of breaking coverage is useful, subscribe so you don’t miss the next one.