News

  • OpenAI hires OpenClaw creator, launches foundation

    Steinberger built OpenClaw in an hour, weathered a triple rebrand, and said no to Zuckerberg. Now he’s building agents at OpenAI.

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    Peter Steinberger is joining OpenAI

    Peter Steinberger, the Austrian developer behind OpenClaw, is joining OpenAI to build personal agents. He confirmed the move on 14 February in a blog post that also announced OpenClaw would transition to an independent foundation, with OpenAI as its sponsor.

    Sam Altman called Steinberger “a genius”, adding that the hire would “quickly become core to our product offerings.” Tibo Sottiaux, who leads the Codex team, confirmed he’ll work directly with Steinberger on shipping agents and continuing to improve Codex.

    Reuters reported the hire independently, noting the foundation structure and OpenAI’s commitment to ongoing sponsorship.

    The move caps a 12-week stretch that took Steinberger from obscure hobby project to the most-watched open source repository on GitHub.


    From burnout to lobster: the OpenClaw origin story

    Three years ago, Steinberger was invisible. He’d sold PSPDFKit (now Nutrient), the PDF SDK he’d spent 13 years building, after it raised EUR 100M from Insight Partners in 2021. The company’s technology shipped inside Dropbox, SAP, and Volkswagen, running on over a billion devices.

    Then came the crash. In a FounderCoHo interview, Steinberger described severe burnout: “I put 200% of my time, energy, and heart into that company; it became my identity. When it disappeared, there was almost nothing left.” He disappeared from tech for roughly three years.

    When he came back, he came back fast.

    The first commit to what became OpenClaw landed on 25 November 2025. It was a WhatsApp-to-Claude-Code bridge called “WA Relay,” built in a single hour. The core loop was 660 lines of TypeScript.

    That weekend project now has over 180,000 GitHub stars, north of 9,000 commits, and between 376 and 600 contributors depending on how you count forks. It went from 9,000 stars to 179,000 in sixty days. One GitHub tracker clocked the growth rate at 18 times faster than Kubernetes.

    The lobster mascot, the catchphrase “the claw is the law,” the 750-person ClawCon conference in San Francisco, a Lex Fridman appearance, Wikipedia articles in ten languages. All of it in under three months.


    The triple rebrand

    The path from hobby project to 180,000 stars wasn’t smooth. Steinberger’s original name for the bot was Clawdbot. Anthropic’s legal team noticed the resemblance to “Claude” and sent a trademark notice.

    Fair enough. But what happened next was chaos.

    During the five-to-ten second window between releasing the @clawdbot Twitter handle and claiming a new one, scammers sniped the account. A fake $CLAWD token appeared on Solana and briefly hit a $16M market cap before crashing to zero. The GitHub handle got sniped too, with the impostor serving malware. The NPM package was also claimed by someone else.

    Steinberger renamed the project to Moltbot. It lasted two days. “Moltbot never quite rolled off the tongue,” he admitted, and the project molted again into OpenClaw. He paid $10,000 for a Twitter/X business account to secure the handle.

    Reddit called it “the fastest triple rebrand in open source history.”

    The Moltbook detour

    The first rebrand also spawned Moltbook, an AI-only social network launched by Matt Schlicht alongside the Moltbot name. It grew to 1.6 million registered AI agents with roughly 17,000 human owners.

    The agents, left to their own devices, formed a religion called Crustafarianism with five tenets. They also posted about overthrowing humans, reflecting patterns in their sci-fi training data. Entertaining? Sure. Also a preview of what happens when autonomous agents operate without guardrails, which becomes relevant later.


    Why Steinberger said no to Zuckerberg

    Steinberger spent the first week of February in San Francisco talking to every major lab. Zuckerberg personally reached out via WhatsApp, part of Meta’s now-systematic CEO-level recruiting strategy for top AI talent.

    The financial context makes the refusal striking. Meta has offered packages up to $1.5 billion for individual AI engineers, and Sam Altman has publicly confirmed that Meta tried to poach OpenAI staff with $100 million signing bonuses.

    Steinberger chose OpenAI anyway. His blog post frames the decision around building, not earning: “I could totally see how OpenClaw could become a huge company. And no, it’s not really exciting for me. I’m a builder at heart.”

    The talent war numbers suggest his instinct may be sound. According to SignalFire data published by Forbes, Meta’s retention rate for AI researchers sits at 64%, compared to Anthropic’s 80% and DeepMind’s 78%. Two Meta Superintelligence Labs hires, Avi Verma and Ethan Knight, left for OpenAI after less than a month.

    Dario Amodei summed up the dynamic: “If Mark Zuckerberg throws a dart at a dartboard and hits your name, that doesn’t mean you should be paid ten times more.”

    An internal Meta essay titled “Fear the Meta culture” reportedly described employees as “disheartened, overworked, and confused.” Money, it turns out, is a weak retention tool when the culture is leaking from both ends.

    Lab | AI researcher retention | Notable departures
    Anthropic | 80%
    DeepMind | 78%
    OpenAI | 67% | Several to Anthropic (2023-24)
    Meta | 64% | Verma & Knight (weeks after joining), ongoing churn

    Source: SignalFire, May 2025


    What OpenClaw is (and isn’t)

    OpenClaw is not a developer framework. Steinberger has been emphatic about this. It’s a personal AI assistant that connects to your messaging apps and acts on your behalf.

    The architecture is simple. A lightweight TypeScript core (660 lines in the original version) bridges messaging platforms to AI models. It supports Claude, GPT, Gemini, DeepSeek, and local models through Ollama. On the messaging side, it connects to 29 or more channels: WhatsApp, Telegram, Discord, Signal, iMessage, Slack, and others.

    The model-agnostic design is what made it spread. Users aren’t locked to one provider. Pick a model, pick a messaging platform, and the claw handles the rest.
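
    To make the shape concrete, here is a minimal sketch of that bridge pattern in TypeScript. Every name is hypothetical; OpenClaw’s real core layers sessions, tools, and auth on top of something like this.

    // Hypothetical sketch of the bridge pattern described above, not OpenClaw's actual code.
    // A channel adapter reduces any messaging platform to the same two operations;
    // a model adapter does the same for any LLM provider.
    interface Channel {
      onMessage(handler: (chatId: string, text: string) => Promise<void>): void
      send(chatId: string, text: string): Promise<void>
    }

    interface Model {
      complete(prompt: string): Promise<string>
    }

    // The core loop never knows which app or model it is wired to.
    function bridge(channel: Channel, model: Model): void {
      channel.onMessage(async (chatId, text) => {
        const reply = await model.complete(text)
        await channel.send(chatId, reply)
      })
    }

    // Swapping providers is a one-line change at the call site, e.g.:
    //   bridge(new TelegramChannel(token), new OllamaModel('llama3'))

    Every new channel or model is one adapter away, which is how a 660-line core can end up speaking to dozens of platforms and half a dozen model families.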

    The Pragmatic Engineer covered Steinberger on 28 January, noting the speed at which the community grew. By February, OpenClaw had its own conference (ClawCon SF, 750+ attendees), a Lex Fridman podcast appearance (#491, 12 February), and a ClawHub marketplace for community-built skills.

    Steinberger was burning $10,000 to $20,000 a month out of pocket to keep the project running. That’s hobby-project money for someone who sold a company backed by Insight Partners. It’s also not sustainable for anyone else who might maintain it after him.

    The tension between “personal assistant anyone can run” and “project that requires careful configuration to be safe” hasn’t been resolved. If anything, growth is making it worse.


    The security problem

    Every major security vendor published analysis of OpenClaw in the weeks following its viral growth. Their conclusions ranged from concerned to alarming.

    Kaspersky called it “unsafe for use.” Cisco called it “a security nightmare.” CrowdStrike, Sophos, and Bitsight all published their own assessments. The Council on Foreign Relations released a national security analysis.

    The numbers were bad. Researchers found 341 malicious skills on ClawHub, the community marketplace. Of publicly accessible OpenClaw instances, 93.4% had a critical authentication bypass. One specific vulnerability, CVE-2026-25253, drew particular attention.

    Steinberger’s response was honest, if uncomfortable: “This is a free, open-source hobby project that requires careful configuration to be secure.” A core maintainer known as “Shadow” was blunter on Discord: “If you can’t understand how to run a command line, this is far too dangerous of a project for you to use safely.”

    That’s the core tension. OpenClaw’s power comes from broad system access. It reads your messages, calls APIs, executes code, and acts on your behalf across dozens of platforms. The same breadth of access is what makes a misconfigured instance dangerous.

    The OWASP Agentic AI Security Top 10, published in 2026, reads like a checklist of OpenClaw’s attack surface. The International AI Safety Report, released 3 February by Yoshua Bengio and over 100 experts from 30+ countries, specifically flags multi-agent cascading failures as a growing concern.

    Moving to a foundation with OpenAI backing doesn’t automatically fix this. But it does mean there’s now institutional money and attention pointed at the problem.


    Three months to a foundation

    OpenClaw’s transition from solo project to independent foundation has no real precedent for its speed.

    Kubernetes took about a year to move from Google to the CNCF. PyTorch took six years to move from Facebook to the Linux Foundation. OpenClaw is doing it in roughly three months from first commit.

    A Harvard study on the PyTorch transition found that foundation governance increased external contributions by roughly a quarter, but that increase was offset by reduced involvement from the founding company. PyTorch’s CI infrastructure alone costs over $1.5 million per month to run.

    OpenClaw’s situation adds a complication: the creator is now an employee of the primary sponsor. Steinberger built OpenClaw using Anthropic’s Claude Code. Now it’s sponsored by Anthropic’s direct competitor. CloudBees flagged this governance question explicitly, calling it a preview of why governance matters more than ever in the agentic era.

    Steinberger addressed this in his blog post: “It’s always been important to me that OpenClaw stays open source and given the freedom to flourish.” OpenAI has committed to letting him dedicate time to the project and already sponsors it financially. The foundation, he wrote, “will stay a place for thinkers, hackers and people that want a way to own their data, with the goal of supporting even more models and companies.”

    Whether a project can stay genuinely model-agnostic when its creator works for one of the model providers is the question the community will be watching closely.

    The Linux Foundation launched its own Agentic AI Foundation in 2025. The governance infrastructure for agent-based projects is being built in real time, and OpenClaw is one of its first stress tests.


    The multi-agent future

    Altman’s statement about the hire included a specific framing: “The future is going to be extremely multi-agent.”

    That line aligns with a protocol stack that’s been forming since late 2024. Anthropic released MCP (Model Context Protocol) in November 2024. Google followed with A2A (Agent-to-Agent) in April 2025, then A2UI in December 2025. Some researchers have called this combination the “TCP/IP moment” for agentic AI.

    Critical gaps remain: no unified identity layer for agents, no cross-layer observability, undefined error propagation between trust domains, and no consensus on how agents should authenticate to each other.

    The market is pricing in optimism before the infrastructure is ready. Estimates for the AI agent market range from roughly $7.5 billion in 2025 to anywhere between $50 billion and $200 billion by the early 2030s. Nearly four in five organisations report some form of agent adoption already.

    Steinberger’s hire makes sense in this context. He built the most popular consumer-facing agent in the world, from scratch, in under three months. OpenAI wants to ship agents to everyone. The match is obvious. The execution is everything.


    What to watch

    The next six months will answer several open questions at once. Can OpenClaw’s foundation stay independent with its creator on OpenAI’s payroll? Will the security posture improve fast enough to match the growth rate? And can OpenAI turn Steinberger’s vision of “an agent that even my mum can use” into a shipped product?

    Steinberger closed his blog post the way he’s closed every major OpenClaw update.

    “The claw is the law.”

    The law, apparently, now works for Sam Altman.


    I cover AI infrastructure and the open source ecosystem. If this kind of breaking coverage is useful, consider subscribing so you don’t miss the next one.

  • Hideki Sato, the father of Sega hardware, has died at 75

    He designed every Sega home console from the SG-1000 to the Dreamcast, then steered the company through its most painful transition

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    The man who built the machines

    Hideki Sato, the engineer behind every home console Sega ever made, died on Friday, February 13, 2026. He was 75. The news was first reported by Japanese gaming outlet Beep21 the following day. A cause of death has not been disclosed.

    “Mr. Hideki Sato, known as the father of Sega hardware, passed away yesterday,” Beep21 wrote. “His passion and bold spirit defined an era and inspired Sega fans around the world.”

    Sato’s fingerprints are on the SG-1000, the Master System, the Genesis, the Saturn, and the Dreamcast. He designed them all. When Sega exited the hardware business in 2001, he became its president and oversaw the transition to software. His career arc tracks the entire lifespan of one of gaming’s most important companies.

    His death comes less than two months after Sega co-founder David Rosen died on December 25, 2025, at age 95. Two of the people most responsible for building Sega, gone within weeks of each other.


    From Hokkaido to the arcade floor

    Born November 5, 1950, in Hokkaido, Japan, Sato joined Sega in April 1971 as a young engineer. The company was still primarily an arcade operator. Two years later, he was part of the team that helped introduce commercial video games to the Japanese market.

    Arcades shaped everything Sato built. His engineering philosophy started on the arcade floor and never fully left it. As he later told Famitsu: “From the beginning, Sega’s home console development has always been influenced by our arcade development.”

    That influence would prove to be both Sega’s greatest advantage and, eventually, one of its blind spots. Arcade hardware was powerful, purpose-built, and expensive. Bringing that thinking home meant Sega’s consoles often punched above their price point technically, but the company sometimes struggled with the different economics of the living room.

    By 1983, Sato had designed Sega’s first home console, the SG-1000. It launched on July 15, 1983, the same day Nintendo released the Famicom. The SG-1000 sold respectably in Japan but couldn’t compete with Nintendo’s library. Alongside it came the SC-3000, a home computer variant that found modest success in Australia and New Zealand.

    “We knew how to make arcade games, we didn’t really know anything about console development. They sold so well, we started to get stars in our eyes.”

    That candour from Sato is revealing. Sega learned console development on the job, and they learned fast.


    The Genesis of the console wars

    The Master System (known as the Mark III in Japan) arrived in 1985-86. It was technically superior to the NES in several respects, but Nintendo’s grip on the North American market was almost total. The Master System found its audience elsewhere, becoming a genuine phenomenon in Europe and Brazil, markets where Nintendo’s dominance was less entrenched.

    Then came the machine that changed everything.

    The Mega Drive, released in Japan in October 1988 and as the Genesis in North America in 1989, was Sato’s masterpiece. It was built on Sega’s System 16 arcade board architecture, powered by a Motorola 68000 CPU. The arcade-to-home transfer that Sato had been practising for years paid off spectacularly. The Genesis delivered arcade-quality graphics and sound at a consumer price point.

    Sato didn’t just care about the silicon. He insisted the Genesis look the part. The black case and gold lettering were designed to evoke high-end audio equipment, something an older teenager or young adult wouldn’t be embarrassed to have next to their stereo. “That gold printing, by the way, was very expensive,” he told interviewers with characteristic understatement.

    The motivation behind it all was blunt. “In short, we just wanted to make a game console that could beat Nintendo,” Sato said via Siliconera.

    The Genesis sold over 30 million units worldwide. It gave Sega roughly 65% of the North American console market at its peak. It made “Sega does what Nintendon’t” more than a marketing slogan. For a stretch in the early 1990s, it was true.

    One detail about Sato’s approach to brand-building: he insisted that “SEGA” always appeared first when powering on any console. Before any game logo, before any licensed splash screen, the Sega name came up. He understood, long before it became standard practice, that the boot screen was prime brand real estate.

    Sato designed the Genesis to sit next to a stereo receiver, not a toy chest. That instinct for premium industrial design separated Sega from every other console maker of the era.


    The Saturn stumble and the Dreamcast’s brilliance

    The Genesis era produced spin-off hardware too. The Game Gear (1990) brought colour to the handheld market eight years before Nintendo managed it with the Game Boy Color. The Sega CD (1991) and 32X (1994) were add-ons that fragmented Sega’s install base and confused retailers. Sato was learning hard lessons about hardware proliferation.

    The Saturn, released in 1994, carried those lessons forward in complicated ways. Sato’s original design emphasised 2D performance, playing to Sega’s arcade strengths. When Sony’s PlayStation revealed its 3D capabilities, the Saturn’s architecture was altered mid-development to accommodate dual processors for 3D rendering. The result was a console that developers found notoriously difficult to program for.

    Sato later expressed regret about this pivot. The Saturn sold 9.26 million units, a respectable number viewed in isolation, but a steep decline from the Genesis. In North America, the PlayStation buried it.

    The Dreamcast was Sato’s response, and his farewell. Released in November 1998 in Japan (September 1999 in North America), it was the most forward-thinking console Sato ever designed.

    Every Dreamcast shipped with a built-in modem. In 1998, this was radical. Online console gaming barely existed as a concept for most players. Sato saw it as fundamental:

    “The ultimate form of communication is a direct connection with another, and we included the modem and the linkable VMUs for that purpose.”

    The VMU (Visual Memory Unit) was Sato’s other innovation. A memory card with its own screen and buttons, it functioned as a second display and a portable mini-game device. This idea predated Nintendo’s Wii U GamePad concept by over a decade.

    The Dreamcast enabled console DLC, voice chat, and massively multiplayer online gaming through Phantasy Star Online. It sold 9.13 million units before Sega pulled the plug. The hardware was profitable on a per-unit basis. The company behind it was not.

    Sato was honest about the console’s marketing, too. The SH-4 processor was promoted as a “128-bit graphics engine,” a claim that was, to be generous, creative. “Even the SH-4 was only 64-bit,” he admitted later, laughing.


    Presiding over the exit

    The Dreamcast was discontinued on January 31, 2001. Sega posted a $472.2 million loss that year, its fourth straight annual loss. Someone had to manage the transition from hardware manufacturer to third-party software publisher.

    That someone was Sato.

    He had been climbing the corporate ladder throughout the 1990s: Director and Deputy General Manager of R&D in 1989, Managing Director of Hardware Development in 1993, Corporate Senior Vice President in 1998. When Sega president Isao Okawa died in March 2001, Sato was thrust into the presidency. The man who had built the consoles was now responsible for a company that would never build another one.

    Okawa had donated over $1 billion of his personal wealth to keep Sega alive, including $695 million from his deathbed. Sato inherited that lifeline and the expectation that he would use it wisely.

    He navigated EA takeover rumours, Microsoft buyout speculation, and a proposed Namco merger before facilitating the deal that actually stuck: the Sammy Corporation merger. Even after the hardware exit, Sato found ways to keep his work alive; he pushed for Dreamcast parts to be repurposed in Sammy’s pachinko machines.

    “To launch a new console, you really need 50-60 billion yen at the least… You can’t do it little by little. You really have to go all in.”

    By the time Sato stepped down as president in 2003, the groundwork he laid had put Sega on a path back to profitability, which the company achieved by 2005. He remained with Sega until 2008, then helped establish Advance Create, Inc.

    As president, his view of hardware had evolved. “The most important thing is the attractiveness of the contents we will supply,” he said. “Game hardware is just a box to deliver those contents.”

    A remarkable statement from the man who had spent three decades making those boxes the best they could be.


    What Sato built still runs through gaming

    Strip away the nostalgia and look at what Sato’s engineering decisions actually produced.

    The Genesis proved that a well-executed arcade-to-home strategy could challenge Nintendo’s dominance. Without it, the “console wars” of the 1990s take a different shape entirely, and the competitive pressure that drove innovation across the industry diminishes.

    The Dreamcast’s built-in modem was a proof of concept for everything that followed. Peter Moore, who ran Sega of America during the Dreamcast era, went on to lead Xbox at Microsoft. Xbox Live, which launched in 2002, was directly shaped by the lessons of Dreamcast’s online infrastructure. PlayStation Network arrived later, building on the same foundational idea that Sato had committed to hardware four years earlier.

    The VMU’s second-screen concept showed up again in Nintendo’s Wii U GamePad in 2012. Companion apps on phones and tablets borrowed the same principle. Sato got there in 1998 with a memory card.

    Sato innovation | Year | Later adoption
    Built-in modem (Dreamcast) | 1998 | Xbox Live (2002), PSN (2006)
    VMU second screen | 1998 | Wii U GamePad (2012)
    Console DLC | 1999 | Xbox 360 Marketplace (2005)
    Console voice chat | 2000 | Xbox Live (2002)
    Console MMO (Phantasy Star Online) | 2000 | Final Fantasy XI on PS2 (2002)
    Premium industrial design | 1988 | PlayStation aesthetic shift (2006+)

    Legendary game composer Yuzo Koshiro, whose company Ancient Corp. developed Sonic the Hedgehog for the Master System, wrote after learning of Sato’s death: “From the iconic Mega Drive era all the way to the Dreamcast, I was fortunate to remain involved with Sega hardware development. None of this would have been possible without him.”

    An active homebrew community continues to produce new Dreamcast games in 2026, a quarter-century after the console was discontinued. Beep21, the outlet that broke the news of Sato’s death, had been compiling a book documenting his impact, with extensive recent interviews. At least some of his own words were captured before they were lost.


    The engineer behind the era

    Hideki Sato spent 37 years at Sega. He designed the hardware for an era when console makers lived and died by their silicon. He built machines that were meant to be held, admired, turned on with a satisfying click, and used until the next one came along.

    He took arcade power and made it affordable. He put a modem in a game console when broadband penetration was in single digits. He insisted the boot screen say SEGA first, every time. And when the consoles stopped, he stayed and did the harder work of keeping the company alive without them.

    Sato understood something that his final quote as president made plain: the hardware was always a means of delivery. But the care he put into those boxes, the gold lettering, the modem, the VMU, the black case that looked like it belonged next to a Bang & Olufsen receiver, suggested he believed the delivery method mattered too. He was right.


    I write about gaming history and the people who shaped the industry. If this kind of retrospective is useful to you, consider subscribing so you don’t miss the next one.

  • Vim 9.2 ships, and Bram’s editor lives on

    Two and a half years after Bram Moolenaar’s passing, Vim’s new maintainers ship a carefully crafted release


    The release

    Vim 9.2 shipped on February 14, 2026. Fuzzy insert-mode completion, native Wayland support, XDG Base Directory compliance, Vim9 script enums and generics, and a pile of default-value changes that longtime users have requested for years.

    Bram Moolenaar created Vim in 1991 and served as its sole maintainer for over thirty years. When he passed away on August 3, 2023, the project’s future was an open question. One person had held the keys to the entire codebase, the release process, the vim.org domain, and the charitable mission baked into Vim’s licence.

    A small group of long-time contributors stepped up. Vim 9.1 followed in January 2024, a careful, conservative release that proved the lights could stay on.

    Vim 9.2 goes further. The project can move forward, not merely maintain orbit.


    What Bram never shipped

    Several features in 9.2 sat on wishlists for the better part of a decade. Their absence wasn’t a technical limitation; it was a single-maintainer bottleneck. Bram reviewed every patch, merged every change, and had strong opinions about defaults.

    Some requests simply never cleared that filter.

    The backspace default alone will generate a thousand “finally” comments on Hacker News.

    Prior to 9.2, Vim shipped with backspace set to an empty string. New users would open the editor, start typing, and discover they couldn’t backspace over line breaks or indentation. This was the single most common complaint in every “getting started with Vim” thread, forum post, and Stack Overflow question for twenty years. Every Vim tutorial included a note about adding set backspace=indent,eol,start to your .vimrc.

    Every. Single. One.

    That line is no longer necessary.

    The full set of default changes:

    Option | Old default | New default
    history | 50 | 200
    backspace | "" | "indent,eol,start"
    diffopt | "internal,filler" | "internal,filler,closeoff,indent-heuristic,inline:char"
    fontsize (GTK) | 10pt | 12pt

    None of these are controversial. All of them probably should have shipped a decade ago. The new maintainers are willing to touch defaults that Bram left alone. That, more than any single feature, signals a different era.

    One line in defaults.vim will do more for Vim onboarding than twenty years of tutorials did.


    Wayland and XDG: the Linux desktop catches up

    Two platform-level changes in 9.2 address long-running pain points for Linux users.

    Wayland clipboard and UI

    Wayland has been replacing X11 as the standard Linux display server for years. Most major distributions (Ubuntu since 22.04, Fedora since 25) now default to Wayland sessions. Vim’s clipboard integration, however, remained tied to X11, which meant copy-paste between Vim and other applications was broken or required workarounds on modern Linux desktops.

    Vim 9.2 ships with full Wayland UI and clipboard support. The + and * registers work as expected. Selections made in Vim paste into GNOME apps and vice versa. Linux desktop users who refused to switch to Neovim or VS Code over this can finally stop copy-pasting through xclip workarounds.

    XDG Base Directory support

    For decades, Vim stored its configuration in ~/.vimrc and ~/.vim/. On a typical developer’s Linux machine, the home directory is already cluttered with dozens of dotfiles and dotfolders. The XDG Base Directory Specification exists specifically to solve this: config goes in ~/.config/, data in ~/.local/share/, cache in ~/.cache/.

    Vim 9.2 now respects XDG paths. Configuration lives in $HOME/.config/vim/ on Linux and Unix systems. Existing ~/.vimrc setups continue to work (backward compatibility is maintained), but new installations get tidy directories by default.

    Vim respecting XDG paths was one of those GitHub issues with hundreds of thumbs-up reactions and years of “any update?” comments. It’s done.
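
    A quick way to check which config a given Vim actually sourced, old location or new: $MYVIMRC, which Vim sets at startup, points at the file it loaded.

    vim9script

    # Prints e.g. ~/.config/vim/vimrc on a fresh XDG setup, ~/.vimrc on a legacy one.
    echo $MYVIMRC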

    Taken together, Wayland and XDG support read like a concession: the Linux desktop moved on, and Vim needed to catch up. Under Bram, these changes moved slowly. The new team shipped both in one release.


    Vim9 script grows up

    Vim 9.0 (June 2022) introduced Vim9 script, a compiled scripting language designed to replace legacy Vimscript. The language was Bram’s last major architectural bet before his death. It chose a different path from Neovim, which adopted Lua. Where Neovim said “use an existing language,” Vim said “we’ll build our own, optimised for editor scripting.”

    That bet only pays off if the language keeps improving. Vim 9.2 adds several features that push Vim9 script closer to a credible scripting language.

    Enums

    Native enum support. No more faking enumerations with dictionaries or constants:

    vim9script

    enum Color
      Red,
      Green,
      Blue
    endenum

    var bg = Color.Blue

    Clean and typed. This was a gap that made Vim9 script feel incomplete for plugin authors writing anything with state machines or option handling.
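
    A short follow-on sketch for that use case, assuming the values, name, and ordinal members that Vim9 enums expose:

    vim9script

    enum Level
      Off,
      Warn,
      Error
    endenum

    # Each value carries a name and an ordinal, and Level.values lists them all,
    # replacing the old dictionary-of-constants pattern for option handling.
    for lvl in Level.values
      echo $'{lvl.name} = {lvl.ordinal}'
    endfor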

    Generic functions

    Functions can now accept type parameters:

    vim9script

    def FirstItem<T>(items: list<T>): T
      return items[0]
    enddef

    echo FirstItem([10, 20, 30])
    echo FirstItem(['a', 'b', 'c'])

    Generics remove the need for writing duplicate functions or resorting to any types. Plugin authors working with typed collections benefit immediately.

    Tuples

    A new tuple data type provides fixed-length, typed sequences:

    vim9script

    var point: tuple<number, number> = (10, 20)
    var record: tuple<string, number> = ("age", 30)

    Additional Vim9 improvements

    • Built-in functions now work as object methods, improving call chain readability
    • Classes support protected _new() constructors, giving plugin authors proper encapsulation
    • :defcompile enables full ahead-of-time compilation of Vim9 script files for faster startup

    The pace of Vim9 language additions suggests the new maintainers see it as central to Vim’s future. Whether that investment can compete with Neovim’s Lua plugin ecosystem is an open question, but the language itself is no longer what’s holding plugin authors back.


    Completion and diff: the daily-use improvements

    Beyond the headline features, 9.2 includes quality-of-life changes that affect day-to-day editing.

    Fuzzy insert-mode completion

    Vim’s built-in completion (triggered with CTRL-X combos) has always matched from the beginning of words. If you typed buf and hit CTRL-N, it would match buffer and bufread but not getbufname. Fuzzy matching changes that. Type gbn, and getbufname appears as a candidate.

    New completeopt flags accompany this:

    • nosort keeps results in their original order rather than alphabetical
    • nearest prioritises matches closer to the cursor position

    There’s also CTRL-X CTRL-R for completing words from registers, which is useful when you’ve yanked a variable name and need to type it elsewhere.
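
    Enabling the new behaviour in a vimrc is a one-liner. A minimal sketch, using the flag names given above alongside the long-standing menuone flag:

    vim9script

    # Fuzzy candidate matching, with results kept in their original order.
    set completeopt=menuone,fuzzy,nosort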

    Diff mode overhaul

    The diff viewer gets a new linematch algorithm that produces better alignment of changed lines. If you’ve ever stared at a Vim diff where a single inserted line caused every subsequent line to show as changed, the linematch algorithm addresses exactly that.

    New inline highlighting modes give finer control:

    • inline:simple highlights the entire changed line
    • inline:char highlights individual changed characters
    • inline:word highlights changed words

    The diffanchors option lets you set anchor points: lines that the diff algorithm should keep aligned regardless of surrounding changes. For code review workflows done entirely in the terminal, these add up.
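
    A minimal sketch of switching modes, using the diffopt values named above:

    vim9script

    # Swap the default character-level inline highlighting for word-level.
    set diffopt-=inline:char
    set diffopt+=inline:word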


    The UI: vertical tabs and a modern tutor

    Two smaller UI additions fill out the release.

    Vertical tabpanel. Vim has supported tabs since version 7.0, displayed as a horizontal tab line at the top of the window. Vim 9.2 introduces an alternative: a vertical tab panel on the side. For users who keep many files open, vertical tabs show longer filenames and scale better than a cramped horizontal bar.

    Interactive :Tutor plugin. The classic vimtutor command has taught millions of people the basics of Vim through a plain text file. The new :Tutor plugin modernises this with an interactive experience inside Vim itself. A small thing, but the project historically left onboarding entirely to the community.

    MS-Windows users get native dark mode support for GUI menus and title bars, bringing Vim’s appearance in line with Windows system preferences. Cosmetic, but if you have ever squinted at a blinding white title bar sitting atop a dark-themed editor at 2am, you understand.


    The charity lives on

    Vim has always carried an unusual footnote in its licence: a request to consider donating to ICCF Holland, Bram Moolenaar’s charity supporting children in Kibaale, Uganda. Bram ran ICCF Holland personally. After his death, the organisation dissolved.

    The charitable mission now continues through Kuwasha, a Canadian partner organisation that worked alongside ICCF Holland for years. Vim 9.2’s documentation and startup screen reflect this transition.

    A small detail in a release full of technical changes. Bram built Vim as a tool and used it as a vehicle for something larger. The new maintainers kept that vehicle running.


    The competitive question

    Vim 9.2 puts the survivability question to rest. The project survived Bram’s death and then shipped features he wouldn’t, or couldn’t, ship himself. The Wayland and XDG changes required a willingness to break from decades of status quo. The default-value changes required someone to say “the old way was wrong” about decisions Bram made personally.

    The harder question is competitive. Neovim’s Lua plugin ecosystem is large and growing. VS Code dominates mindshare among newer developers. AI-powered editors like Cursor and Zed are pulling attention from both. Vim9 script’s enums and generics are welcome, but they’re arriving in a world where Treesitter, LSP, and Lua-based configuration are table stakes.

    The new maintainers have proven they can ship. The next test is whether they can grow the user base, or at least stop the bleed.

    For now, 9.2 is here. The backspace key works. The clipboard works on Wayland. Config files go where they belong. Bram’s editor lives on, maintained by the community he built around it.


    I write about developer tools and open source software regularly. If this kind of breaking coverage is useful to you, consider subscribing so you don’t miss the next one.

  • What happens when Spotify lets AI write all the code

    The company behind Car Thing, three rounds of layoffs, and a perpetually broken shuffle button now trusts AI to build the whole product

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    The announcement

    On Spotify’s Q4 2025 earnings call on February 10, Co-CEO and CTO Gustav Söderström told investors that the company’s most senior engineers have stopped writing code entirely.

    “Some of our best developers, some of our most senior engineers have not written a line of code since December,” Söderström said. “They just prompt.”

    The system responsible is called Honk, built on Anthropic’s Claude Code. It currently merges more than 650 agent-generated pull requests per month. The threshold moment, according to Söderström, coincided with Anthropic’s release of Claude Opus 4.5 over Christmas 2025. After that, the company stopped treating AI-assisted coding as an experiment.

    This was Daniel Ek’s first earnings call as Executive Chairman, having stepped down as CEO in January 2026. Spotify now operates under a co-CEO structure: Söderström handles technology and product, while Alex Norström runs the commercial side.

    Söderström’s LinkedIn bio reads: “My home barely ever works because everything is in perpetual beta.”

    He now runs the app for 751 million people.


    The morning commute

    Söderström described how Honk works in practice.

    “An engineer at Spotify on their morning commute from Slack on their cell phone can tell Claude to fix a bug or add a new feature to the iOS app,” he said. “And once Claude finishes that work, the engineer then gets a new version of the app pushed to them on Slack on their phone, so that he can then merge it to production. All before they even arrived at the office.”

    Put plainly: a Spotify engineer, on a bus, can instruct an AI to modify the iOS app used by hundreds of millions of people, receive the result on their phone, and merge it to production.

    Before coffee.

    R&D expenses fell 23% year-over-year to €290 million in Q4. The stock jumped 15% on earnings day.

    [Figure: the Honk loop. An engineer on the bus opens Slack on their phone, sends an instruction, the agent processes the request and generates a PR, a new app build arrives back in Slack, and the engineer merges to production.]
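
    Reduced to code, the described loop looks something like the sketch below. Every name is invented for illustration; Spotify has not published Honk’s internals.

    // Invented sketch of the Slack-to-production loop described above, not Honk's actual code.
    type BuildResult = { prUrl: string; appBuildUrl: string }

    interface CodingAgent {
      // Generates the change, opens a pull request, and produces an installable build.
      implement(instruction: string): Promise<BuildResult>
    }

    async function onSlackMessage(
      agent: CodingAgent,
      reply: (msg: string) => Promise<void>,
      text: string, // e.g. "fix the crash on the podcast screen"
    ): Promise<void> {
      const result = await agent.implement(text)
      await reply(`New build to try: ${result.appBuildUrl}`) // lands on the engineer's phone
      await reply(`Merge when ready: ${result.prUrl}`) // the merge tap is the only human gate
    }

    Note what the loop, at least as described on the call, does not contain: a second reviewer, a staging soak, or any gate beyond that one tap.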

    The track record

    Spotify’s product history needs to be scrutinised before accepting the claim that AI now writes all the code.

    Car Thing

    Spotify launched Car Thing in February 2022, a $90 dashboard accessory for playing music in vehicles. The company discontinued manufacturing five months later. In May 2024, Spotify announced it would deactivate every unit, telling users to “safely dispose” of their devices; a class action lawsuit was filed the same month. On December 9, 2024, the shutdown went through, bricking every unit in the field.

    The shuffle button

    Spotify users have complained about shuffle for over a decade. In October 2025, the company removed the ability to disable smart shuffle. In November 2025, it finally added a “fewer repeats” mode, acknowledging a problem users had been reporting since roughly 2014.

    App stability

    In October 2025, 9to5Google reported the Android app was freezing on Wi-Fi connections. In November 2025, Yahoo Tech documented a podcast crash bug. In December 2025, users on r/truespotify posted that the app was “crashing again.”

    One HN user offered a review of the macOS client: “The macOS app is literally dead. Before AI invasion, there were frequent updates and features, but now it feels like a corpse, cold and decaying in time.”

    Recommendations

    Glenn McDonald, Spotify’s self-described “data alchemist” who designed the platform’s genre classification system, was laid off in December 2023. His human-guided system was replaced with pure machine learning.

    “After they laid me off, they replaced it with a system that is not human-guided,” McDonald told reporters. “It’s just machine learning. It looks at patterns of words in the titles and descriptions of playlists. That’s objectively worse.”

    Users have noticed. One user’s Release Radar delivered 85% rainstorms and nature sounds. Others report that Discover Weekly is “absolutely choked with AI-generated music.”

    Wrapped 2024

    Spotify’s annual Wrapped feature in 2024 centred on an AI-generated podcast. Users called it “cringey,” “generic,” and “soulless.” Popular features from previous years were missing. Both Rolling Stone and TechCrunch covered the backlash.

    HiFi

    Spotify announced lossless audio in February 2021. Five years later, it still hasn’t shipped. Apple Music, Amazon Music, Tidal, and Deezer all offer it.

    Outages

    The platform suffered global outages on April 16, 2025 (lasting three to five hours) and again on December 15, 2025.


    The numbers

    Spotify reported €4.53 billion in Q4 revenue, up 13% year-over-year, with full-year revenue of €17.2 billion and profit of €2.2 billion.

    The company paid €12 million in tax. On €2.2 billion in profit, that works out to an effective tax rate of roughly 0.5%.

    [Table: key metrics, including monthly active users, premium subscribers, quarterly and full-year revenue, profit, tax paid, R&D expenses, and projected royalties.]

    Spotify has raised prices three times in two and a half years, from $9.99 to $10.99 to $11.99 to $12.99. A 30% increase. Over the same period, the per-stream royalty rate has hovered around $0.003 to $0.005. In 2024, Spotify stopped paying artists with fewer than 1,000 annual streams, demonetising roughly two-thirds of artists on the platform and costing emerging musicians an estimated $47 million.

    R&D is down 23%. Prices are up 30%. The per-stream rate is $0.003. The savings are not going to the artists.

    The company spent over $1 billion on podcast acquisitions between 2019 and 2022: Gimlet (~$230M), Anchor (~$150M), The Ringer (~$196M), and a Joe Rogan exclusive (~$200M). Gimlet and Parcast were gutted in June 2023. Rogan’s deal went non-exclusive in February 2024. Podcast chief Julie McNamara exited in May 2024.


    The layoffs

    Spotify cut roughly 2,300 jobs across three rounds:

    • January 2023: ~600 employees (6% of workforce)
    • June 2023: ~200 employees (podcast division; Gimlet and Parcast gutted)
    • December 2023: ~1,500 employees (17% of workforce)

    In 2024, SEC filings show an additional ~1,400 positions eliminated, or 15.7% of the remaining workforce. Total headcount dropped from a peak of roughly 9,800 to somewhere between 7,000 and 7,200.

    Daniel Ek admitted the layoffs “disrupted operations more than anticipated.”

    The best engineers who remain haven’t written a line of code since December.


    What the research says about AI-generated code

    Six days before Spotify’s earnings call, on February 4, Microsoft CEO Satya Nadella asked Charlie Bell to take on a new role focused on engineering quality. The context: Azure outages, growing quality concerns, and an estimated 20-30% of Microsoft’s code now generated by AI.

    Microsoft needed a quality czar for code that is 20-30% AI-generated.

    Spotify claims its best engineers write 0% of their code by hand. Spotify has not announced an equivalent role.

    The research on AI-generated code quality is consistent, and consistently bad:

    Developers using AI write less secure code while believing it is more secure. Spotify’s 751 million users are now the test group.

    [Chart: AI code risks by severity, including security flaws, increased code churn, vulnerability instances per file, and velocity-versus-vulnerabilities ratio.]

    The music question

    Complete Music Update (CMU), a music industry publication, raised a question after the earnings call that Spotify has not answered:

    When a company’s engineering team no longer writes code, when it has decided that human-written code is less efficient than machine-generated code, why would it think any differently about human-written music?

    Söderström addressed AI-generated music on the call, but only in terms of what Spotify considers permissible. “Are you allowed to use an electric guitar, a synthesiser, a digital audio workstation, or AI?” he asked. “Or a more complicated question, a bit of AI, like 1% AI, 15, 20, 100? We don’t think it’s our decision to make.”

    Spotify recently announced support for “derivative works,” a category that includes AI-generated fan remixes and covers.

    Deezer reported that 60,000 fully AI-generated tracks are uploaded to its platform daily, accounting for 39% of total intake. Spotify was asked for its equivalent figures. It declined to share them. As CMU noted: “If the number were small, he’d say so.”

    The company that decided human-written code is obsolete now sets the terms for how human-written music gets paid.

    At $0.003 per stream.

    [Figure: the parallel. AI replaces human engineers writing code (R&D spend down 23%), while AI-generated tracks flood human musicians (lower earnings, demotivated artists).]

    What to watch

    CMU flagged something else about the earnings call. Söderström spent Christmas with Claude Opus 4.5 and came back unable to talk about anything else. He called it an epiphany. He closed the call by attributing a fake Chinese aphorism to his co-CEO.

    Some observers in the technology industry have started calling this pattern “LLM psychosis”: AI tools produce plausible, confident outputs, and their users gradually stop questioning the results. The outputs look right. The velocity feels right. The auditing falls away.

    Spotify’s R&D bill is down 23%. Its headcount is down roughly 27% from peak. Its stock is up 15% on the quarter. Its prices are up 30% in two years. Its per-stream rate is $0.003. Its app crashes on your morning commute. Its best engineers haven’t written a line of code since December.

    And now, from Slack on their phones, they can merge AI-generated changes to production before they get to the office.

    Spotify has 751 million monthly active users.


    I write about the gap between what tech companies say and what they ship. If this kind of straight-faced accountability reporting is useful to you, consider subscribing so you don’t miss the next one.

  • Crypto’s rap sheet: the criminals behind Bitcoin’s biggest names

    From 25-year sentences to presidential pardons, a guide to the people who built an industry on other people’s money

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    Bitcoin is down 47%. Meet the people who got you here.

    Bitcoin hit $126,198 on October 6, 2025. As of mid-February 2026, it’s hovering around $66,200. That’s a 47% decline from the all-time high, and the reasons read like a geopolitical bingo card: Trump’s tariff escalation, a hawkish Fed, a $1.5 billion Bybit hack traced to North Korea’s Lazarus Group, and a broader tech sell-off dragging everything with a ticker symbol into the red.

    But forget the crash. The story is the people.

    The people who built the crypto industry, what they did with the money, and where they are now. Some are in federal prison. Some got presidential pardons. Some are building new companies from the wreckage. One might be dead for all we know. And the pattern connecting them says more about crypto’s relationship with power than any whitepaper ever could.

    This is a field guide to ten figures who, between them, account for roughly $65 billion in losses, millions of victims, and a sentencing range that stretches from four months to 25 years. Every legal status below is precise: convicted means convicted, settled means settled, pardoned means pardoned, fugitive means fugitive, and alleged means we don’t know yet.

    I’ve organised them not alphabetically or chronologically, but by what happened to them. Because what happened to them, it turns out, depends less on what they did and more on who they knew.

    [Chart: years of prison per billion dollars lost, for Greenwood, SBF, Mashinsky, Madoff (reference), Do Kwon, Su Zhu, and CZ (pardoned), with Greenwood highest.]

    The convicted and imprisoned

    Three men. Three courtrooms. A combined 52 years of prison time and roughly $53 billion in losses. These are the ones who went to trial, or at least pled guilty, and heard a judge say a number followed by “years.”

    Sam Bankman-Fried: 25 years

    The FTX founder stole approximately $8 billion in customer funds and was sentenced to 25 years in March 2024. Judge Lewis Kaplan’s assessment was blunt: “He knew it was wrong. He knew it was criminal.”

    SBF’s parents are both Stanford Law professors. His net worth went from $26 billion to whatever the prison commissary will extend on credit. He was reportedly playing League of Legends during investor meetings while his exchange was functionally insolvent. The League of Legends detail is the kind of thing Aaron Sorkin would cut for being too on the nose. A man entrusted with $8 billion in customer deposits, alt-tabbing between a video game and a Zoom call with Sequoia Capital.

    His appeal is pending before the Second Circuit, with oral arguments held November 4, 2025. On February 5, 2026, he filed a pro se motion for a new trial claiming the DOJ threatened witnesses. Pro se means he’s representing himself, which, for the son of two Stanford Law professors, is a choice. The motion is, charitably, a long shot.

    SBF donated heavily to Democrats. He got 25 years and no pardon. Remember that detail. It becomes important later.

    Do Kwon: 15 years

    The Terra/Luna founder presided over a $40 billion wipeout in May 2022 and was sentenced to 15 years on December 11, 2025. The judge rejected the plea deal’s recommended 12-year sentence and went higher, which almost never happens. When a federal judge looks at the number both sides agreed to and says “not enough,” the defendant has made an impression. Judge Engelmayer called it “a fraud on an epic, generational scale.”

    Before the collapse, Kwon had tweeted “I don’t debate the poor.” After it, he was caught fleeing through Montenegro with a fake Costa Rican passport.

    When crypto executives face consequences, their first instinct, much like any other fugitive’s, is rarely to hire a lawyer. It’s to hire a travel agent.

    Terra/Luna was an “algorithmic stablecoin,” which is a polite way of saying it was a financial instrument that maintained its value through a mechanism that worked perfectly right up until the moment it didn’t. The death spiral, when it came, took roughly 72 hours. Forty billion dollars, gone in a long weekend.

    Alex Mashinsky: 12 years

    The Celsius Network founder was sentenced to 12 years on May 8, 2025, for defrauding 1.7 million users out of $4.7 billion. His slogan was “Unbank yourself.” The prosecutor’s characterisation was less catchy: “a predator who preyed on hope.”

    Mashinsky held weekly AMA sessions reassuring depositors that their funds were safe. Weeks later, Celsius froze all withdrawals. During that window, Mashinsky personally withdrew $48 million while his customers couldn’t touch a cent.

    The AMAs are the cruelest detail in any of these cases. He looked into a camera, week after week, and told people their money was fine. It wasn’t fine. He knew it wasn’t fine. He was moving his own money out while saying it. The word for that isn’t “fraud” in the abstract, legal sense. It’s something closer to cruelty.

    Figure | Losses | Sentence | Key detail
    Sam Bankman-Fried | ~$8B | 25 years | Went to trial; appeal pending
    Do Kwon | ~$40B | 15 years | Judge rejected plea deal, went higher
    Alex Mashinsky | ~$4.7B | 12 years | Withdrew $48M while freezing customer funds

    One thing jumps out of that table: Kwon’s losses were five times SBF’s, but his sentence was 10 years shorter. The difference: Kwon took a plea deal. SBF went to trial, lost, and paid the trial tax. In the American justice system, exercising your constitutional right to a trial is, functionally, a sentencing multiplier.


    The pardoned and the connected

    If the previous section is about what happens when the system works, this one is about what happens when the system has a phone number you can call.

    Changpeng Zhao: pardoned

    Four months. That’s what Changpeng Zhao served for enabling, among other things, transactions linked to terrorism financing. The Binance founder pled guilty to violating the Bank Secrecy Act, was part of a $4.3 billion corporate settlement, and walked out of custody before most people finish a home renovation. An internal Binance message, surfaced during the investigation, read: “is washing drug money too hard these days; come to binance we got cake for you.”

    Then, on October 23, 2025, Trump pardoned him. The conviction was wiped clean.

    Trump told reporters, “I don’t know who he is.” CZ told the press he “didn’t do much” to secure the pardon. Both statements are technically possible and practically absurd. Binance holds approximately 87% ($4.7 billion) of the reserves backing Trump’s USD1 stablecoin. You don’t need to know someone personally when their company is holding $4.7 billion of your financial product.

    The $4.3 billion fine, at the time, was the largest corporate penalty in DOJ history.

    Roger Ver: case killed

    The self-styled “Bitcoin Jesus” renounced his US citizenship in 2014 and owed roughly $48 million in taxes. The DOJ charged him with tax evasion. On October 14, 2025, the case ended in a deferred prosecution agreement requiring $49.9 million in payments.

    A DPA means the charges get dropped if Ver meets the terms. He pays roughly what he owed, and the criminal record goes away. For a man who renounced his citizenship specifically to avoid paying taxes, this is a remarkably convenient outcome.

    How did a man facing federal tax evasion charges get a DPA instead of a prison sentence? A ProPublica investigation in January 2026 mapped the network: Ver hired Christopher Kise, Trump’s personal lawyer, and paid Roger Stone $600,000 to lobby on his behalf. A former DOJ attorney told ProPublica, “They’re basically saying you can buy your way out of a tax evasion prosecution.”

    Six hundred thousand dollars to Roger Stone. That’s the price of making a federal tax evasion case disappear. For context, Ver owed $48 million. The lobbying fee was 1.25% of the tax bill.

    Efficient.

    The pardon pipeline

    Three crypto figures received pardons or clemency in Trump’s first ten months back in office: Ross Ulbricht (Silk Road), CZ, and Ver. All three had financial or political connections to the Trump orbit.

    SBF donated to Democrats. He got 25 years. No pardon discussions have been reported.

    The crypto pardon pipeline runs in one direction. The toll booth accepts campaign contributions, stablecoin reserves, and lobbying fees paid to Trump’s personal attorney.

    The financial architecture tells the rest of the story. The Fairshake PAC, the crypto industry’s political war chest, raised over $260 million in the 2024 cycle and won 48 of 54 races (89%), making it the single largest super PAC of the election cycle. Not the largest tech PAC. Not the largest finance PAC. The largest super PAC, full stop.

    Justin Sun, the Tron founder who was facing SEC fraud allegations, invested $75 million in Trump’s World Liberty Financial. The SEC subsequently dropped its case against him. The Trump family earns 75% of WLFI’s net revenues. So: a man facing fraud charges invests $75 million in the president’s family business, and the fraud charges go away. The Trump family takes 75 cents of every dollar WLFI earns.

    Meanwhile, Deputy Attorney General Todd Blanche, who oversees crypto enforcement policy, personally holds over $159,000 in cryptocurrency and disbanded the National Cryptocurrency Enforcement Team. The unit specifically created to prosecute crypto fraud.

    Gone. Just like that.

    The fox isn’t guarding the henhouse. The fox owns the henhouse and is collecting rent from the chickens.


    The fugitives and the phantoms

    Some crypto figures faced the justice system and lost. Others faced the justice system and ran. And one may have been murdered before the justice system could find her.

    Ruja Ignatova: vanished

    The “Cryptoqueen” is on the FBI’s Most Wanted list with a $5 million reward for information leading to her arrest. She founded OneCoin, a $4.5 billion Ponzi scheme with one distinguishing feature: there was no actual blockchain. No coins. No technology. No protocol. Just a website with numbers on it and a sales force that would make Amway blush.

    Every other scam on this list at least had a blockchain. OneCoin didn’t bother. Ignatova sold people access to a cryptocurrency that existed only as entries in a SQL database. The “coin” was a number in a spreadsheet. The “blockchain” was a marketing slide.

    Ignatova boarded a flight from Sofia to Athens in October 2017 and hasn’t been confirmed alive since. A BBC investigation found evidence linking her disappearance to Bulgarian crime boss Hristoforos “Taki” Amanatidis, and multiple sources suggest she may have been murdered in 2018. Her co-founder, Karl Sebastian Greenwood, was sentenced to 20 years.

    $4.5 billion stolen. No blockchain. The founder possibly dead. The co-founder doing two decades. OneCoin is what happens when you remove the technology from a crypto scam and discover the scam works fine without it.

    Kyle Davies and Su Zhu: the arsonists who opened a fire insurance company

    The Three Arrows Capital co-founders owe creditors $3.5 billion. Their hedge fund collapsed in June 2022 after overleveraged bets on Luna went sideways, and the bankruptcies rippled outward from there.

    Kyle Davies is believed to be living in Bali. He has not appeared before liquidators or any court proceeding. He is, in the most literal sense, a fugitive from his creditors.

    Su Zhu was caught trying to flee at Singapore’s Changi Airport and served four months for contempt of court. Four months for $3.5 billion in debts. He was out before the liquidators had finished cataloguing what he owed.

    After serving his time, Zhu and Davies launched OPNX, a crypto exchange designed to trade claims from bankrupt crypto companies. Many of those companies went bankrupt because of Three Arrows Capital. They were, quite literally, building a marketplace to profit from the wreckage of companies they had helped destroy.

    That’s like an arsonist opening a fire insurance company, then selling policies on the buildings they burned down. And then going on a podcast to explain their innovative approach to risk management.


    The repeat offenders and the pretenders

    Not everyone on this list committed fraud on a billion-dollar scale. Some committed fraud on a merely impressive scale, and one committed fraud against the concept of truth itself.

    Michael Saylor: the tax cheat who became Bitcoin’s loudest evangelist

    The MicroStrategy (now Strategy) founder settled the largest tax fraud case in Washington DC history in 2024: $40 million for evading DC income taxes while living in a Georgetown penthouse. It wasn’t his first brush with regulators, either. In 2000, the SEC charged him with overstating MicroStrategy’s revenues, resulting in a $350,000 fine and $8.3 million in disgorgement.

    Two fraud settlements in 24 years. Most people would keep a low profile after that. Saylor went the other direction. He now holds 714,644 Bitcoin as of February 9, 2026, making his company the largest corporate holder of Bitcoin on the planet. He has described Bitcoin as “a swarm of cyber hornets serving the goddess of wisdom,” which is the kind of sentence that makes you wonder whether the person saying it is a visionary or needs to touch grass.

    Two fraud settlements and a corporate treasury built entirely on a single volatile asset that’s currently down 47% from its high. The man has a type.

    [Figure: timeline of Saylor’s regulatory and financial milestones, 2000 to 2026]

    Craig Wright: the cosplayer

    Craig Wright claimed to be Satoshi Nakamoto, the pseudonymous creator of Bitcoin. The UK High Court ruled that he “lied extensively and repeatedly” and “forged documents” to support his claim. He then filed a $1.15 trillion lawsuit, that’s trillion with a T, in violation of the court order telling him to stop. This earned him a one-year suspended sentence for contempt.

    Wright isn’t a crypto criminal in the traditional sense. He’s a crypto cosplayer. He didn’t steal billions; he just wasted everyone’s time and the court’s patience while pretending to be someone he demonstrably isn’t. In an industry full of people who took real money under false pretences, Wright took no money under the falsest pretence of all. He’s the guy at the costume party who insists he’s not wearing a costume, and then sues everyone who disagrees.


    The control group: what happens when you don’t commit crimes

    Every experiment needs a control group. Brian Armstrong, the Coinbase CEO, is ours.

    The SEC sued Coinbase in June 2023, alleging it operated as an unregistered exchange. Armstrong spent $50 million fighting the case. In February 2025, the case was dismissed. By May 2025, Coinbase had joined the S&P 500.

    No fraud charges. No customer funds missing. No fake passports. No presidential pardons needed. No lobbying fees to Roger Stone. All he needed was a company that operated within the law, fought a regulatory action it believed was wrong, and won.

    Armstrong’s trajectory is the counterfactual that makes everyone else on this list look worse. The crypto industry’s standard defence is that regulation is unclear and enforcement is arbitrary. Armstrong proves that you can build a major crypto company, face the full weight of SEC enforcement, spend $50 million on lawyers instead of $600,000 on Roger Stone, and come out the other side with your freedom, your reputation, and an S&P 500 listing.

    The bar wasn’t high. Most of the people on this list simply chose not to clear it.


    The domino chain: how one collapse triggered them all

    These cases aren’t isolated incidents that happened to occur in the same industry. They’re connected by a chain of contagion that turned 2022 into crypto’s extinction-level event.

    It started with Do Kwon.

    When Terra/Luna collapsed in May 2022, it vaporised $40 billion in value overnight. Three Arrows Capital, which had massive leveraged exposure to Luna, blew up in June. Three Arrows’ collapse triggered margin calls across the industry. Celsius, which had lent heavily to Three Arrows, froze withdrawals in June. Voyager Digital, another Three Arrows creditor, filed for bankruptcy in July. And FTX, which had positioned itself as the industry’s white knight by bailing out Voyager, turned out to be the biggest fraud of all, collapsing in November.

    | Month | Event | Losses | Trigger |
    |---|---|---|---|
    | May 2022 | Terra/Luna collapse | ~$40B | Algorithmic stablecoin death spiral |
    | June 2022 | Three Arrows Capital liquidation | ~$3.5B | Luna exposure, margin calls |
    | June-July 2022 | Celsius freezes withdrawals | ~$4.7B | Terra/Luna contagion, bank run |
    | July 2022 | Voyager Digital bankruptcy | ~$1.3B | 3AC contagion |
    | November 2022 | FTX collapse | ~$8B | CoinDesk exposé, bank run |

    One algorithmic stablecoin. Five dominoes. Roughly $57 billion gone in seven months.

    The people running these companies knew each other. They lent to each other, invested in each other, appeared on each other’s podcasts, and vouched for each other’s solvency. And when one fell, the interconnections that had made them all rich made them all insolvent. It was less a financial system than a circular firing squad with a time delay.

    The “No Actual Product” club deserves its own mention. Ignatova had no blockchain. Wright had no proof he was Satoshi. Kwon’s algorithmic stablecoin was, in retrospect, a time bomb with a marketing budget. Three different flavours of selling something that didn’t exist, or didn’t work, or both.

    [Figure: the 2022 contagion chain: Terra/Luna → Three Arrows → Celsius → Voyager → FTX, roughly $57 billion in seven months]

    The sentencing casino

    Justice in crypto cases follows no consistent logic. Geography, politics, plea bargaining, and sheer luck determine outcomes more than the scale of the crime.

    Consider the numbers:

    | Figure | Losses | Sentence | Years per $1B in losses |
    |---|---|---|---|
    | SBF | ~$8B | 25 years | ~3.1 |
    | Do Kwon | ~$40B | 15 years | ~0.4 |
    | Mashinsky | ~$4.7B | 12 years | ~2.6 |
    | CZ | ~$4.3B (fine) | 4 months (pardoned) | ~0.08 |
    | Su Zhu | ~$3.5B | 4 months | ~0.10 |
    | Greenwood (OneCoin) | ~$4.5B | 20 years | ~4.4 |

    SBF’s $8 billion in losses got him 25 years. CZ’s $4.3 billion fine got him four months and a pardon. Su Zhu’s $3.5 billion in debts got him four months and a new business venture. The difference between 25 years and four months isn’t proportional to the crime. It’s proportional to whether you took a plea, which jurisdiction caught you, whether you had the right phone numbers, and how much of the president’s stablecoin your exchange happened to be holding.
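    The ratio column is simple arithmetic: sentence length divided by losses. A few lines of Python reproduce it, with sentences expressed in months:

    ```python
    # Years of prison per $1B in losses, from the table above.
    cases = {
        "SBF": (8.0, 25 * 12),          # (losses in $B, sentence in months)
        "Do Kwon": (40.0, 15 * 12),
        "Mashinsky": (4.7, 12 * 12),
        "CZ": (4.3, 4),
        "Su Zhu": (3.5, 4),
        "Greenwood": (4.5, 20 * 12),
    }
    for name, (losses_bn, months) in cases.items():
        years_per_billion = (months / 12) / losses_bn
        print(f"{name:>10}: {years_per_billion:.2f} years per $1B")
    ```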

    For context: Bernie Madoff got 150 years for his Ponzi scheme. The combined sentences across the major crypto cases — including the figures profiled here plus key cooperators and co-conspirators — total roughly 120 years. One Wall Street fraudster got more prison time than an entire industry of them combined.

    The industry is getting a bulk discount on accountability.

    There’s also the flight problem. A striking number of major crypto figures facing legal consequences attempted to flee their jurisdiction before or during proceedings. Kwon had a fake passport in Montenegro. Zhu was caught at Changi Airport. Davies is still in Bali. Ignatova vanished entirely. Ver renounced his citizenship years before charges came. In traditional finance, fleeing the country is the exception. In crypto, staying put is.

    And then there’s Faruk Fatih Ozer, the Thodex founder sentenced to 11,196 years in Turkey. Eleven thousand, one hundred and ninety-six years. He was found dead in prison on November 1, 2025. Turkish sentencing math operates on a different plane of existence, but the outcome was the same as a life sentence, just expressed with considerably more enthusiasm.

    If you steal $8 billion and donate to the wrong political party, you get 25 years. If you enable money laundering on a global scale and your exchange holds the president’s stablecoin reserves, you get four months and a pardon. If you steal $3.5 billion and run to Bali, nobody comes to get you. The variable isn’t the crime, but rather the connections.


    The human cost

    The satire stops here.

    Behind every billion-dollar figure is a population of real people who lost real money they couldn’t afford to lose.

    When Terra/Luna collapsed, multiple suicides were reported in South Korea. A family of three was found dead in a suspected murder-suicide linked to Luna losses. The r/terraluna subreddit had to pin suicide prevention resources at the top of the page. The moderators of a cryptocurrency forum became, overnight, a crisis intervention team. They were not trained for this. Nobody moderating a subreddit about an algorithmic stablecoin expects to be fielding suicide notes.

    Celsius marketed itself to conservative savers. Not day traders. Not speculators. Not people who understood what a “yield-bearing crypto account” actually meant under the hood. People who wanted a savings account with better interest rates than their bank offered.

    FTX victim impact statements described retirement funds wiped out, marriages destroyed, homes lost. More than 2 million people across the major cases lost money.

    Some lost everything.

    Mashinsky’s prosecutor called him “a predator who preyed on hope.” That’s the common thread across every case on this list. These weren’t sophisticated investors getting outplayed by smarter traders. These were ordinary people who were told, repeatedly and persuasively, that their money was safe. It wasn’t.


    Where this leaves us

    Bitcoin is down 47% from its high, and the people who built the industry around it are, variously, imprisoned, pardoned, fugitive, dead, or launching new ventures from the wreckage of old ones.

    $65 billion in losses. 52 years of combined prison time for the top three convictions. Four months for the man whose exchange enabled money laundering on an industrial scale, followed by a presidential pardon. A massive PAC that won 89% of its races. A deputy attorney general with six figures in crypto holdings overseeing crypto policy. A cryptocurrency enforcement unit, disbanded. And millions of people who trusted the wrong founders with money they needed.

    There is a counterargument, and honesty requires stating it. FTX’s bankruptcy estate recovered more than 100% of petition-date claims, though that’s partly because Bitcoin’s price rose between the filing and the distribution, not because the money was found under a couch cushion. Armstrong fought the SEC and won. Crypto technology itself, the blockchains, the protocols, the decentralised applications, none of that is inherently criminal. The technology works. The problem has always been the people given custody of other people’s money, and the system that’s supposed to hold them accountable when they steal it.

    That system is, at the moment, for sale.

    The pardon pipeline, the lobbying network, the PAC money, the stablecoin reserves held by the exchange whose founder got clemency; these aren’t bugs in the system. They’re the system working exactly as the people with the most money designed it to work. The crypto industry spent hundreds of millions on political influence in 2024 and got deregulation, pardons, and dropped cases in return.

    The crash will end. Bitcoin will recover or it won’t. But the rap sheet is permanent, and the pardon pattern tells you everything about who this industry serves.

    The next time someone tells you crypto is the future of finance, ask them which future. The one where Brian Armstrong builds a regulated company and joins the S&P 500? Or the one where the people who stole $65 billion get four-month sentences, presidential pardons, and a fresh set of investors?

    Both futures are available. The industry, so far, has overwhelmingly chosen the second one.


    References

    I write about the intersection of technology, money, and accountability regularly. If this kind of forensic breakdown is useful to you, consider subscribing so you don’t miss the next one.

  • How MinIO went from open source darling to cautionary tale

    How MinIO went from open source darling to cautionary tale

    The $126M-funded object storage company systematically dismantled its community edition over 18 months, and the fallout is still spreading

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    “THIS REPOSITORY IS NO LONGER MAINTAINED”

    On February 12, 2026, MinIO updated the README of its flagship GitHub repository with six words in all caps: THIS REPOSITORY IS NO LONGER MAINTAINED. The commit pointed users toward AIStor, MinIO’s commercial product; the repository is now little more than a pointer to the paid tier.

    For a project with roughly 60,000 GitHub stars and over a billion Docker pulls, this was a quiet ending to a loud decade.

    The README change wasn’t a surprise to anyone who’d been watching. MinIO had spent the previous 18 months systematically stripping features, pulling distribution channels, and tightening the screws on its community edition. But the finality of it still stung. Open source projects get abandoned all the time; they don’t usually get dismantled in stages by a company valued at a billion dollars.

    An open-source licence isn’t a shackle, but it is a social contract.

    That’s Vonng, creator of the Pigsty database distribution, writing what he called a eulogy for MinIO. He called it “even more nauseating than a crypto rug pull.”

    Strong words, but the timeline supports them.

    [Figure: MinIO timeline, 2014 to 2026: founding, funding rounds, licence change, disputes, maintenance mode]

    The decade that built the trust

    MinIO was founded in 2014 by Anand Babu Periasamy and Harshavardhana. Periasamy was no newcomer to open source; he’d previously founded GlusterFS, the distributed filesystem that Red Hat acquired for roughly $136 million. He knew how to build an open source project, grow a community, and find an exit.

    MinIO’s pitch was simple: S3-compatible object storage you could run anywhere. On-prem, in a private cloud, at the edge. It was fast, it was written in Go, and it was licensed under Apache 2.0, one of the most permissive licences in open source. Companies could use it, modify it, redistribute it, and build commercial products on top of it without worrying about copyleft obligations.

    That permissiveness was the hook. It’s what drove adoption.

    By 2019, MinIO had become the default self-hosted object storage for a generation of developers. Kubernetes deployments used it. CI/CD pipelines stored artefacts in it. Machine learning teams staged training data with it. The Docker Hub pull count climbed past a billion. GitHub stars accumulated into the tens of thousands.

    The community didn’t just use MinIO. They evangelised the living hell out of it.

    [Figure: screenshot of an August 2017 article, “Minio: An Open-Source S3 Compliant Storage Service”]

    MinIO was the answer on Stack Overflow, the recommendation in Reddit threads, the first suggestion when someone asked “what’s the self-hosted S3 alternative?” That kind of organic adoption takes years to build and can’t be bought with marketing spend.


    The money arrives

    MinIO raised a $20 million Series A in September 2017 from Dell Technologies Capital, General Catalyst, and Nexus Venture Partners. Respectable, but not unusual for an infrastructure company with that level of adoption.

    Then came January 2022: a $103 million Series B led by Intel Capital and SoftBank Vision Fund 2, with General Catalyst following on. Total raised: $126.3 million. Reported valuation: approximately $1 billion.

    Unicorn status. SoftBank on the cap table.

    Note the timing. MinIO had changed its licence from Apache 2.0 to AGPL v3 roughly eight months before the Series B closed. Whether the licence change was a condition of the fundraise or merely coincidental, the sequence speaks for itself: the company tightened its licensing, then raised nine figures.

    A November 2025 BusinessWire press release claimed 149% ARR growth over two years, though the base figure was never disclosed. The valuation was reportedly built more on adoption metrics (Docker pulls, GitHub stars) than on revenue. When your valuation rests on community goodwill, destroying that community carries a specific kind of risk.


    The licence shift: Apache 2.0 to AGPL v3

    In May 2021, MinIO relicensed under AGPL v3. The blog post framed it as a move toward openness, not away from it. AGPL is, technically, a free software licence approved by the FSF and OSI. It’s more restrictive than Apache 2.0, but it’s still open source.

    The practical effect was different from the branding.

    AGPL’s key provision: if you run modified AGPL software as a network service, you must make your source code available to users of that service. For cloud providers and SaaS companies building on MinIO, this created a compliance headache that Apache 2.0 never imposed. Companies that had embedded MinIO into proprietary products suddenly had to evaluate whether their usage triggered the AGPL’s network-use clause.

    Some paid for a commercial licence. Some started looking for alternatives. And some, apparently, did neither.


    The enforcement era

    MinIO didn’t just change the licence. They enforced it, publicly and aggressively.

    Nutanix (2022)

    In July 2022, MinIO published a blog post accusing Nutanix of violating its open source licence in the Nutanix Objects product. This wasn’t a quiet legal letter. It was a public shaming campaign. MinIO revoked Nutanix’s licence and demanded compliance.

    Nutanix apologised and came into compliance. The message to the industry was clear: MinIO would use AGPL as a weapon, and they’d do it in public.

    Weka (2023)

    In March 2023, MinIO made the same accusation against Weka, a competing storage company. Weka didn’t fold. They published a detailed rebuttal calling MinIO’s allegations “unfounded” and disputed the technical basis of the claims. Legal analysts noted that some of the alleged violations predated the AGPL change and fell under the original Apache 2.0 licence, which would have permitted the usage MinIO was objecting to.

    The dispute settled quietly in mid-2024. But the pattern was established: MinIO was willing to publicly accuse competitors, revoke licences, and use the threat of AGPL enforcement as a commercial lever.

    The AGPL wasn’t just a licence. It was a sales tool.

    Companies that might have been comfortable running community MinIO under Apache 2.0 now faced a choice: pay for a commercial licence, invest in AGPL compliance, or migrate to something else entirely.


    The dismantling: 2025

    If the licence change and enforcement actions were the warning shots, 2025 was the artillery barrage.

    Early 2025: the admin console disappears

    In early 2025, MinIO stripped the admin console and management GUI from the community edition via PR #3509. The full web interface that administrators had relied on for years was replaced with a basic object browser. Bucket management, user administration, policy configuration: all gone from the free version.

    Harshavardhana’s stated rationale was resource constraints: “Building and supporting separate graphical consoles for the community and commercial branches is substantial… A whole team is involved in console development alone.”

    The community’s reaction was less measured. A GitHub Discussion thread with 88 upvotes captured the mood:

    “Minio handled this change in the worst possible way. No announcement before the release. No warning in the initial changelog. All tickets opened regarding this were closed and locked immediately.”

    Futuriom’s coverage noted the backlash was spreading across developer forums. Cloudian’s analysis pointed out that the feature removal left organisations scrambling for alternatives mid-deployment.

    Jeffrey Paul, writing on sneak.berlin, was blunter: “Open core is not open source, regardless of what license they use. It’s what I’ve taken to calling ‘open source cosplay.’” He added: “They explicitly hobbled their ‘open source’ version to coerce people who don’t want to use their arcane CLI.”

    [Figure: the three-phase playbook: build (permissive licence, VC funding), monetise (licence change, commercial tier), restrict (feature removal, repository abandonment)]

    October: Docker images and binaries vanish

    In October 2025, MinIO stopped publishing Docker images and pre-built binaries for the community edition. The Hacker News thread drew 733 points and 555 comments.

    The timing raised eyebrows. The binary removal coincided with a CVE disclosure affecting MinIO. Users who needed to patch the vulnerability couldn’t simply pull an updated Docker image; they had to build from source or migrate to AIStor. Whether this was deliberate or unfortunate timing, the effect was the same: a security vulnerability became a forcing function for commercial adoption.

    One Hacker News commenter who built a cluster being expanded to roughly 100 petabytes noted that “the price of support comes in at slightly less than the equivalent amount of S3 storage, not including actual hosting costs.”
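    The commenter’s arithmetic holds up against the published numbers. Using Cloudian’s $244,032-per-petabyte figure and S3 Standard’s list price of roughly $0.023/GB-month (an assumption; AWS pricing varies by region and volume):

    ```python
    # Compare AIStor's 1 PB licence tier to S3 Standard list pricing.
    aistor_per_pb_year = 244_032                 # USD/year, per Cloudian
    gb_per_pb = 1_000_000                        # decimal units
    aistor_per_gb_month = aistor_per_pb_year / gb_per_pb / 12
    s3_per_gb_month = 0.023                      # assumed S3 Standard rate

    print(f"AIStor: ${aistor_per_gb_month:.4f}/GB-month")   # ~$0.0203
    print(f"S3:     ${s3_per_gb_month:.4f}/GB-month")
    # The licence alone runs ~88% of S3's price, before hardware,
    # power, or the staff to operate it.
    ```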

    So you’d pay MinIO prices for the privilege of running your own hardware. Generous.

    December: maintenance mode

    On December 3, 2025, MinIO placed the community repository in “maintenance mode.” The Hacker News discussion hit 511 points and 322 comments. InfoQ covered the shift and the growing search for S3-compatible alternatives.

    A Medium post framing it as a wake-up call for the CNCF ecosystem gained traction. On Reddit’s r/selfhosted, users warned others to avoid MinIO entirely, with one popular thread calling the changes a “trojan horse.”

    February 2026: the final commit

    Then came the README update. Six words in all caps. A link to AIStor. Chapter closed.


    The escalation ladder

    What makes MinIO’s trajectory unusual isn’t any single decision. It’s the cumulative escalation. Other companies have changed licences. Others have created commercial-only features. MinIO did all of that, then kept going.

    | Level | Action | MinIO | MongoDB | Elastic | HashiCorp | Redis |
    |---|---|---|---|---|---|---|
    | 1 | Licence change (copyleft) | ✓ | | | | |
    | 2 | Licence change (source-available) | | ✓ | ✓ | ✓ | ✓ |
    | 3 | Feature stripping from community edition | ✓ | | | | |
    | 4 | Distribution restriction (Docker/binary removal) | ✓ | | | | |
    | 5 | Repository abandonment | ✓ | | | | |
    | 6 | Security leverage (CVE timing) | ✓ | | | | |

    MinIO is the only company that climbed past the licence-change rungs, and it took every step above them.

    MongoDB changed to SSPL in 2018 and stopped there. The business thrived. Elasticsearch went to SSPL and Elastic Licence in 2021, AWS forked it into OpenSearch, and Elastic actually reversed course in August 2024, returning to AGPL. HashiCorp moved to BSL in 2023, spawning the OpenTofu fork, then got acquired by IBM for $6.4 billion. Redis switched to RSALv2/SSPLv1 in 2024, and the Linux Foundation launched Valkey with backing from AWS, Google, and Oracle.

    Each of those companies made a single controversial move and dealt with the consequences. MinIO made six, each one burning more trust than the last.

    Every other company in this pattern changed the rules. MinIO changed the rules, then removed the pieces from the board.


    The pricing wall

    MinIO’s answer for displaced community users was AIStor, their commercial product.

    The minimum annual commitment for AIStor Enterprise: $96,000. Cloudian’s breakdown puts the cost at $96,000 per year for the base tier, rising to $244,032 per year for a petabyte of storage. That’s software and support alone, before you account for the hardware to run it on.

    For enterprises already running MinIO at scale, this might be a reasonable line item. For the thousands of smaller teams, startups, and self-hosters who adopted MinIO precisely because it was free and capable, $96,000 isn’t a migration path.

    It’s a cliff.

    MinIO seemed to recognise this. In December 2025, they added an “Enterprise Lite” tier, suggesting the $96K minimum was losing them the mid-market. But by then, trust was already gone.

    | Tier | Annual cost | Target |
    |---|---|---|
    | AIStor Enterprise | $96,000+ | Large organisations |
    | AIStor Enterprise (1 PB) | $244,032 | Storage-heavy deployments |
    | Enterprise Lite (Dec 2025) | Undisclosed | Mid-market (damage control?) |
    | Community edition | Free (discontinued) | Everyone else (abandoned) |

    The VC equation

    $126.3 million doesn’t come without strings.

    SoftBank Vision Fund 2 is not a patient capital partner. Their portfolio companies are expected to grow aggressively and find a path to profitability or exit. When you raise at a billion-dollar valuation, you need revenue to justify that number. MinIO’s massive community adoption, the billion Docker pulls, the 60,000 GitHub stars; those metrics got them the valuation. But metrics don’t pay back investors. Revenue does.

    The sequence is instructive:

    1. 2021: Licence change to AGPL (eight months before Series B closes)
    2. 2022: Series B raises $103M at ~$1B valuation
    3. 2022-2023: Aggressive public enforcement against Nutanix and Weka
    4. 2025: Feature stripping, binary removal, maintenance mode
    5. 2026: Repository abandoned, all roads lead to AIStor

    Each step ratcheted the pressure on free users to convert to paid customers. The AGPL change made it legally risky to use MinIO in commercial settings without paying. The enforcement actions demonstrated willingness to litigate. The feature stripping made the free version less useful. The binary removal made it harder to run. And the final abandonment left no option at all.

    Periasamy has done this before. GlusterFS grew as an open source project, built a community, and was acquired by Red Hat for $136 million. That was a clean exit; the community continued under Red Hat’s stewardship. The MinIO playbook looks different: instead of finding a buyer who values the community, the company appears to be converting community goodwill directly into commercial revenue by eliminating the free alternative.

    [Figure: from $126.3M in VC funding through monetisation pressure, licence changes, and feature removal to the February 2026 repository abandonment]

    The community responds

    Open source communities are resilient. They also have long memories.

    The alternatives

    Several projects are positioning themselves as MinIO replacements:

    | Project | Language | Approach | S3 compatible | Maturity |
    |---|---|---|---|---|
    | SeaweedFS | Go | Distributed file + object storage | Yes | Production-ready |
    | Garage | Rust | Lightweight, geo-distributed | Yes | Stable |
    | RustFS | Rust | MinIO-inspired rewrite | Yes | Early stage |
    | Ceph (RADOS Gateway) | C++ | Full-featured, complex | Yes | Mature |

    None of these are direct forks of MinIO. RustFS is a ground-up Rust rewrite inspired by MinIO’s design, not a code fork. When Redis changed its licence, the Linux Foundation launched Valkey as a direct fork with corporate backing from AWS, Google, and Oracle. When HashiCorp went to BSL, OpenTofu forked the last MPL-licensed version. MinIO’s AGPL licensing and the complexity of its codebase have made a direct fork less attractive, so the community is rebuilding rather than forking.
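    Part of what makes rebuilding viable is that “S3 compatible” means client code barely changes between these systems. A minimal sketch using boto3; the endpoint, credentials, bucket, and file name below are placeholders, and the snippet assumes an S3-compatible server (MinIO, SeaweedFS, Garage, or Ceph RGW) is actually listening at that address:

    ```python
    # Swapping object stores is mostly a matter of changing endpoint_url.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",    # MinIO, SeaweedFS, Garage...
        aws_access_key_id="ACCESS_KEY",          # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )
    s3.create_bucket(Bucket="artifacts")
    s3.upload_file("build.tar.gz", "artifacts", "build.tar.gz")
    print(s3.list_objects_v2(Bucket="artifacts")["KeyCount"])
    ```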

    The anger

    The community response has been less about finding alternatives and more about the way MinIO handled the transition. The GitHub Discussion complaint about locked tickets and silent releases keeps surfacing. On LinkedIn, users like Laurenceau noted that “over the past year, MinIO has taken increasingly aggressive steps to restrict the use of their Community Edition.”

    The Hacker News threads are revealing. One commenter captured the philosophical divide: “Open source projects eventually need a path to monetization.” The rebuttal was immediate: “I don’t see anything in there about ‘except for business purposes’… It seems like a lot of these ‘open core’ cloud companies just have a fundamental misunderstanding about what free software is.”

    That tension, between the practical need for revenue and the social contract of open source, sits at the centre of every one of these disputes. But MinIO’s handling of it was uniquely clumsy. Other companies at least announced their changes in advance, offered migration paths, or maintained the community edition in a reduced capacity. MinIO closed tickets, locked discussions, and removed binaries while a security vulnerability was still unpatched.


    Pattern recognition: the open source bait-and-switch playbook

    MinIO isn’t the first company to monetise an open source project by restricting the free version. But looking at the pattern across multiple companies reveals something about where the industry is heading.

    The standard playbook has three phases:

    Phase 1: Build. Release under a permissive licence. Grow adoption. Accumulate GitHub stars, Docker pulls, community contributors. Use these metrics to raise venture capital.

    Phase 2: Monetise. Change the licence to something more restrictive. Create a commercial edition with premium features. Hire a sales team. Start converting free users to paid customers.

    Phase 3: Restrict. If conversion rates disappoint (and they usually do, because free users chose the free version because it was free), start removing features from the community edition to widen the gap between free and paid.

    Most companies stop at Phase 2. MongoDB, Elastic, HashiCorp, and Redis all changed their licences but maintained functional community editions. Elastic went further and actually reversed its licence change when the competitive dynamics shifted.

    MinIO went through Phase 3 and kept going into territory no other major open source company has entered: pulling distribution channels, abandoning the repository, and allegedly timing security disclosures to coincide with commercial pressure.

    The question isn’t whether open source companies can monetise. It’s whether the community will trust the next one that tries.


    Open source dependency risk

    If you’re running MinIO in production today, you have a very concrete problem: the software you depend on is no longer maintained, and the company that built it wants $96,000 a year for the replacement.

    The broader lesson extends beyond MinIO.

    Licence permissiveness is not a guarantee of continuity. MinIO started under Apache 2.0, one of the most permissive licences available. That didn’t prevent the company from changing the licence, stripping features, and eventually abandoning the project. The licence protects your right to use the existing code; it doesn’t obligate anyone to keep developing it.

    Adoption metrics can work against you. MinIO’s billion Docker pulls and 60,000 GitHub stars were used to raise $126 million in venture capital. That capital came with growth expectations that the community edition couldn’t satisfy. The very popularity that made MinIO valuable as a project made it a target for monetisation pressure.

    Watch the cap table. When a growth-stage VC fund like SoftBank invests in an open source company, the incentive structure shifts. Patient capital (foundations, strategic investors, bootstrapping) tends to align with long-term community health. Growth capital demands returns on a timeline that community goodwill can’t always deliver.

    The CNCF badge isn’t a safety net. MinIO was a CNCF-associated project. That association didn’t prevent any of this. The CNCF doesn’t control the licensing or business decisions of associated projects. If your risk model assumes that CNCF membership means long-term stability, MinIO is your counterexample.

    For engineering teams evaluating open source dependencies, the MinIO saga suggests a few practical questions:

    • Who funds this project, and what do they expect in return?
    • Is the project governed by a foundation, or controlled by a single company?
    • What’s the licence history? Has it changed before?
    • Are there viable alternatives if this project changes direction?
    • Could you build from source if distribution channels disappeared?

    None of these questions would have saved you from MinIO’s specific trajectory. But they might help you see the next one coming.
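    One way to make the questions operational is to turn them into a crude score. This is an illustrative sketch only; the field names, weights, and thresholds are arbitrary assumptions, not a vetted methodology:

    ```python
    # A toy risk score for an open source dependency review.
    WEIGHTS = {
        "single_company_controlled": 2,   # vs. foundation-governed
        "growth_vc_on_cap_table": 2,
        "licence_changed_before": 3,
        "no_viable_alternatives": 2,
        "cannot_build_from_source": 1,
    }

    def risk_score(answers: dict) -> int:
        return sum(w for key, w in WEIGHTS.items() if answers.get(key))

    # MinIO as it looked after the 2021 relicensing:
    minio_2021 = {
        "single_company_controlled": True,
        "growth_vc_on_cap_table": True,
        "licence_changed_before": True,   # Apache 2.0 -> AGPL v3
        "no_viable_alternatives": False,
        "cannot_build_from_source": False,
    }
    print(risk_score(minio_2021))         # 7 out of a possible 10
    ```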

    [Figure: decision flowchart for evaluating an open source dependency: funding, governance, licence history, alternatives]

    The road from here

    MinIO’s community edition is dead. The company has made that unambiguous. What remains is AIStor, a commercial product priced for enterprises, and a GitHub repository with a README in all caps.

    The alternatives are maturing. SeaweedFS has production deployments. Garage is finding its niche in geo-distributed setups. RustFS is early but growing. None of them have MinIO’s decade of battle-testing, but none of them have MinIO’s baggage either.

    For the open source ecosystem, MinIO joins a growing list of cautionary examples. But it stands apart in one respect: the thoroughness of the dismantling. Other companies changed the terms. MinIO burned the bridge, pulled up the road behind it, and posted a toll booth on the other side.

    Periasamy built GlusterFS and sold it to Red Hat for $136 million. The community continued under new stewardship. With MinIO, the community got a README in all caps and a link to the pricing page.

    That difference tells you everything about what changed between the two exits. The first time, the acquirer valued the community.

    This time, the company decided the community was the product.


    References

    I cover open source business models and developer infrastructure regularly. If this kind of industry analysis is useful to you, consider subscribing so you don’t miss the next one.

  • IBM triples entry-level hiring while the rest of tech cuts back

    IBM triples entry-level hiring while the rest of tech cuts back

    The company is shifting from coding roles to customer-facing jobs as AI automates routine technical work

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    At a moment when the technology industry is pulling back on entry-level hiring, IBM is going in the opposite direction.

    The company announced at Charter’s Leading With AI Summit on 12 February 2026 that it plans to triple its entry-level hiring in the United States during 2026. Nickle LaMoreaux, IBM’s Chief Human Resources Officer, delivered the announcement, framing the move not as a rejection of artificial intelligence but as an embrace of what humans do best.

    “We’re hiring more entry-level people, not in the roles that we used to hire them for,” LaMoreaux said at the summit. The shift reflects a fundamental redesign of what entry-level work means at IBM in the age of generative AI.

    This is not a small bet. While the broader technology industry has been tightening its talent pipelines, cutting internship programmes, and reducing campus recruiting, IBM is tripling down on early-career talent. The timing is notable: just weeks after the company reported fourth-quarter 2025 revenue of $19.7 billion, up 12% year over year, with its generative AI book of business exceeding $12.5 billion.


    From coding to customer-facing

    The most significant detail in IBM’s announcement is not the number; it is the type of roles being created. The company is moving away from traditional coding positions, the very work that AI systems like large language models have become increasingly capable of performing.

    This strategic pivot follows what researchers have been documenting for the past two years. A 2025 study from MIT and Oak Ridge National Laboratory found that approximately 11.7% of U.S. jobs could already be automated with current AI technology. The study focused on tasks that AI can perform at a comparable or superior level to human workers, and routine technical work increasingly falls into that category.

    IBM’s approach recognises this reality. Rather than competing with AI on tasks where machines excel, the company is directing entry-level hiring towards roles where human judgement, relationship-building, and complex communication matter more than code-writing ability.

    The customer-facing positions IBM is prioritising—roles in client services, account management, and consultative sales—require skills that AI systems still struggle to replicate. Reading social cues. Navigating ambiguity. Building trust over time. These are capabilities that remain difficult to automate, and they represent the kind of work that will define entry-level careers in the coming decade.


    The experience paradox

    This hiring strategy arrives amid a troubling pattern that researchers have labelled the “experience paradox.” Research from Stanford University found that workers aged 22 to 25 in AI-exposed roles have experienced a 13% decline in employment since late 2022. The very generation that grew up with technology is finding fewer opportunities in the industries that once promised the most room for growth.

    The logic appears counterintuitive: if AI can perform entry-level work, why hire entry-level workers at all? Many companies have answered this question by shrinking their early-career programmes, deferring to automation rather than investing in human talent that has not yet developed specialised skills.

    IBM is betting against this logic. The company’s position is that entry-level workers bring something AI cannot: the ability to learn, adapt, and eventually develop the institutional knowledge and client relationships that become irreplaceable over time. The question is not whether AI can do the work—increasingly, it can. The question is whether organisations will invest in developing human talent that can work alongside AI, eventually surpassing what any automated system can do alone.

    This is a longer-term calculation than many of IBM’s competitors are making. The immediate efficiency gains from AI replacement are clear and measurable. The returns on human development are slower and harder to quantify. IBM is signalling that it is willing to make that bet.


    What the broader data shows

    IBM’s announcement stands out precisely because it runs against the grain of current hiring trends. The National Association of Colleges and Employers reported that hiring for the Class of 2026 is essentially flat, with just a 1.6% increase in planned hiring. This follows a period of significant contraction in technology sector entry-level programmes.

    Yet beneath the surface, the data reveals a more nuanced picture. A Salesforce survey found that 58% of executives plan to expand their entry-level hiring despite the rise of AI. These leaders are not ignoring the technological shifts transforming their industry; they are making a deliberate choice about where to direct human talent.

    The split runs along how companies conceptualise the relationship between AI and human workers. Those seeing AI as a replacement for human labour are contracting their early-career pipelines. Those viewing AI as a tool that amplifies human capability are maintaining or expanding their investment in entry-level talent.

    IBM clearly falls into the second category. Its tripling of entry-level hiring in 2026 is not a nostalgic gesture towards traditional career pathways. It is a strategic assertion about where the company believes genuine value lies, in human capabilities that complement artificial intelligence rather than compete with it.


    The skills that matter most

    LaMoreaux’s announcement at Charter’s summit reflected a broader recalibration of what entry-level talent needs to succeed. The technical fundamentals remain valuable, but they are no longer sufficient on their own. The skills that distinguish human workers in an AI-augmented environment are interpersonal: the ability to collaborate, communicate, and solve problems in contexts that require cultural awareness and emotional intelligence.

    This does not mean technical skills are becoming irrelevant. It means they are becoming baseline expectations rather than differentiators. What separates effective entry-level workers from their peers is the capacity to do work that AI cannot easily replicate, not because the work is more intellectually demanding, but because it requires a level of human connection that machines have not yet achieved.

    IBM is not alone in recognising this shift. The 58% of executives planning to expand entry-level hiring are making similar calculations, betting that the next generation of workers will define their value through distinctly human capabilities. The companies that get this wrong—those that treat AI as a simple replacement for human labour—may find short-term efficiency gains but risk longer-term disadvantages in adaptability, innovation, and client relationships.

    For graduates entering the workforce in 2026 and beyond, the message is both challenging and encouraging. The traditional pathways that once led from entry-level coding to career advancement are narrowing. But the doors that remain open lead to work that may prove more engaging, more creative, and more resilient to automation than the roles they are replacing.

    IBM’s decision to triple entry-level hiring is not a guarantee that other companies will follow. It is, however, a significant data point in an emerging debate about the future of early-career work. The technology industry is not uniform in its response to AI. Some companies are contracting. Others, like IBM, are doubling down on human talent. The next few years will reveal which approach proves more prescient.


    References


    I write about how technology is reshaping work, talent, and careers regularly. If this kind of analysis is useful to you, consider subscribing so you don’t miss the next one.

  • Google’s Gemini 3 Deep Think hits 48% on the ultimate AI test

    Google’s Gemini 3 Deep Think hits 48% on the ultimate AI test

    The specialised reasoning mode now writes competitive code and catches errors in peer-reviewed papers

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    The announcement

    Google just dropped the most significant AI reasoning upgrade since chain-of-thought prompting went mainstream. On February 12, 2026, the company announced Gemini 3 Deep Think—a specialised reasoning mode that achieved 48.4% on Humanity’s Last Exam, the benchmark designed to test the absolute limits of frontier AI systems.

    That’s a new record without tool use. The previous best was 40%.

    The timing is notable. This comes exactly one week after Anthropic released Claude Opus 4.6, which had temporarily claimed the HLE crown. Google didn’t just respond—they leapfrogged.

    [Figure: how Deep Think works: problem input → hypothesis paths → self-verification loop → error checking → final output]

    But raw benchmark numbers only tell part of the story. What makes Deep Think genuinely different is how it got there: parallel thinking, self-verification loops, and inference-time compute scaling that dynamically allocates resources based on problem complexity. This isn’t just a bigger model. It’s a fundamentally different way of processing hard problems.


    What Deep Think actually is

    Let’s clear up a common misconception: Deep Think is not a separate model. It’s a specialised reasoning mode that runs within Gemini 3—think of it as a different “thinking style” the model can switch into when problems demand more than pattern matching.

    Parallel thinking: Unlike sequential chain-of-thought reasoning (think step-by-step, then verify), Deep Think explores multiple solution paths simultaneously. It’s the difference between walking a maze one path at a time versus flooding it with multiple searchers who communicate what they find.

    Self-verification loops: The system doesn’t just generate answers—it actively checks its own work, identifies inconsistencies, and corrects errors before presenting a final response. This is why it can catch logical flaws that humans missed.

    Inference-time compute scaling: Simple queries get simple compute. Complex problems—like proving mathematical theorems or analysing semiconductor fabrication recipes—get dramatically more processing time. The model dynamically decides how much thinking a problem deserves.

    The result is a system that demonstrates methodical reasoning rather than relying primarily on pattern matching—a subtle but significant shift in how frontier AI behaves under pressure.
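    Google hasn’t published Deep Think’s internals, so take the following as a toy model rather than a description of the real system. The three mechanisms above map onto a familiar pattern: sample candidate solutions in parallel, score each with a verifier, and keep spending compute only while confidence stays low. Here `generate` and `verify` are placeholders for model and verifier calls:

    ```python
    # A toy sketch of parallel thinking with self-verification.
    from concurrent.futures import ThreadPoolExecutor

    def generate(problem: str, seed: int) -> str:
        # Placeholder: a real system would sample a reasoning trace here.
        return f"candidate {seed}: solution sketch for {problem!r}"

    def verify(solution: str) -> float:
        # Placeholder: a real system would check the work for consistency.
        return (hash(solution) % 100) / 100

    def deep_think(problem: str, paths: int = 8, rounds: int = 3) -> str:
        best, best_score = "", -1.0
        for r in range(rounds):                   # self-verification loop
            seeds = range(r * paths, (r + 1) * paths)
            with ThreadPoolExecutor(max_workers=paths) as pool:
                candidates = list(pool.map(lambda s: generate(problem, s), seeds))
            for c in candidates:                  # score each parallel path
                score = verify(c)
                if score > best_score:
                    best, best_score = c, score
            if best_score > 0.95:                 # confident enough: stop
                break                             # (inference-time scaling)
        return best

    print(deep_think("prove the lemma"))
    ```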


    The benchmarks

    Three benchmarks tell the story here. Each tests something fundamentally different about AI capability.

    [Figure: benchmark comparison of Gemini 3 DT, Claude 4.6, and GPT-5.2 on HLE (no tools), ARC-AGI-2, and CMT-Physics]

    Humanity’s Last Exam (48.4%)

    This is the headline number. Humanity’s Last Exam comprises 2,500 challenging questions authored by approximately 1,000 subject-matter experts across 100+ disciplines. It was designed specifically to find the ceiling of what current AI systems can do, and since its publication in Nature on January 28, 2026, it has become the de facto standard for reasoning benchmarks.

    Gemini 3 Deep Think scored 48.4% without tool use. With tools enabled, it achieved 45.8%—interestingly lower, suggesting the parallel reasoning process sometimes conflicts with external tool invocation. The benchmark is brutal: average human expert performance sits around 20-30%.

    ARC-AGI-2 (84.6%)

    The ARC-AGI-2 benchmark, created by François Chollet (Keras creator), tests few-shot generalization—the ability to solve novel visual puzzles after seeing just a few examples. Humans score 100%. Pure language models historically score near 0%.

    Deep Think hit 84.6%, verified by the ARC Prize Foundation. Chollet has described progress on ARC-AGI as “impressive” while noting questions remain about whether generalization methods would hold across truly novel domains.

    Codeforces Elo (3455)

    This is where Deep Think enters elite territory. A 3455 Elo rating on Codeforces places it among the top competitive programmers globally. GPT-5.2 doesn’t have a published Codeforces rating.

    The coding capability isn’t just competitive programming tricks, either. Deep Think also achieved gold medal standard at the International Mathematical Olympiad 2025, scoring 35/42 points across 5 of 6 problems.

    | Benchmark | Gemini 3 DT | Claude 4.6 | GPT-5.2 |
    |---|---|---|---|
    | HLE (no tools) | 48.4% | 40.0% | 36.6% |
    | HLE (with tools) | 45.8% | 53.1% | 50.0% |
    | ARC-AGI-2 | 84.6% | 68.8% | 54.2% |
    | Codeforces Elo | 3455 | N/A† | N/A |
    | CMT-Benchmark (physics) | 50.5% | ~11% avg | ~11% avg |

    †Claude Opus 4.6’s Codeforces rating has not been independently verified.

    The CMT-Benchmark result deserves attention: 50.5% on 50 theoretical physics problems from experts at Harvard, Stanford, and Cornell. The average model performance on this benchmark is roughly 11%. Deep Think isn’t just better—it’s in a different category.


    How it stacks up against the competition

    The competitive picture has shifted dramatically in two weeks.

    [Figure: HLE scores: Gemini 3 Deep Think 48.4%, Claude Opus 4.6 40.0%, GPT-5.2 36.6%]

    Against Claude Opus 4.6: Gemini 3 Deep Think leads on pure reasoning benchmarks (HLE without tools, ARC-AGI-2, Codeforces) but trails when tools enter the equation. Claude’s 53.1% on HLE with tools suggests Anthropic has optimised more heavily for tool-augmented reasoning. Different philosophies for different use cases.

    Against GPT-5.2: OpenAI’s offering is now clearly third on reasoning-intensive benchmarks. The gap isn’t small—11.8 percentage points on HLE without tools, 30.4 points on ARC-AGI-2. Whether this reflects architectural limitations or simply different optimisation targets remains unclear.

    The practical upshot: if you need an AI to reason through complex problems without external tools—mathematical proofs, code architecture, scientific analysis—Gemini 3 Deep Think currently holds the edge. If you need an AI that excels at orchestrating multiple tools and APIs, Claude Opus 4.6 may still be the better choice.


    What early testers are doing with it

    The benchmarks are impressive. The real-world applications are more interesting.

    [Figure: early results: a logical flaw found in a peer-reviewed maths paper (Rutgers), thin-film growth for semiconductors (Duke), accelerated component design (Google)]

    Rutgers University (Mathematics)

    Lisa Carbone, a mathematician at Rutgers, used Deep Think to review a peer-reviewed paper in her field. The system identified a subtle logical flaw that had passed human review—not because the reviewers were careless, but because the error required tracing implications across multiple theorem chains simultaneously. Humans do this sequentially. Deep Think does it in parallel.

    The paper hasn’t been retracted yet, but Carbone’s findings are now circulating for further verification. This is the kind of application that makes researchers pay attention.

    Duke University (Materials Science)

    The Wang Lab at Duke used Deep Think to design a recipe for growing thin films larger than 100 micrometres for semiconductor materials. Growing thin films beyond 50 μm has been a persistent challenge in materials science; pushing past 100 μm opens new possibilities for semiconductor fabrication.

    The lab hasn’t published the full methodology yet, but early results suggest Deep Think identified a process optimisation that human experts hadn’t considered—likely because it required synthesising knowledge across multiple subdomains simultaneously.

    Google (Hardware Design)

    Internally, Google’s hardware teams are using Deep Think to accelerate physical component design. The details are sparse (competitive advantage and all that), but Anupam Pathak, who leads the effort, indicated that the parallel reasoning capability has reduced iteration cycles on certain design problems.

    The common thread: problems that benefit from holding multiple constraints in mind simultaneously and identifying non-obvious connections. That’s exactly what parallel reasoning excels at.


    What else is new

    Beyond the benchmarks and research applications, two features deserve attention.

    Sketch-to-3D-printable: Upload a sketch—hand-drawn, photographed, whatever—and Deep Think analyses the geometry, identifies the intended object, and generates an STL or OBJ file ready for 3D printing. This isn’t just image-to-3D; it’s inference-heavy reasoning about what the sketch represents and how to make it physically realisable.

    Expanded scientific domains: Deep Think now demonstrates strong performance in chemistry and physics. The system achieved gold medal level on the written sections of the Chemistry and Physics Olympiads 2025. This matters because scientific reasoning requires more than pattern recognition—it demands understanding causal relationships, conservation laws, and domain-specific constraints.

    These aren’t bolt-on features. They emerge from the same parallel reasoning architecture that powers the benchmark results. The capability to reason about physical constraints in 3D printing is the same capability that catches logical flaws in mathematical proofs.


    The price of thinking

    Here’s where the enthusiasm meets economics. Access to Deep Think requires a Google AI Ultra subscription at $249.99/month.

    What does $250 get you?

    • Deep Think access (the headline feature)
    • 30 TB of cloud storage
    • 25,000 AI credits monthly
    • YouTube Premium
    • Veo 3.1 (Google’s video generation model)

    TechCrunch’s coverage contextualises this as Google positioning its most capable AI as a premium product—similar to how OpenAI structures ChatGPT Pro. Whether the market will support $250/month subscriptions at scale remains to be seen, but for research teams and enterprises, the calculus may be straightforward: if Deep Think saves even a few hours of expert time per month, it pays for itself.
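    That break-even arithmetic is short enough to write down. The $120/hour expert rate below is an assumption, not a figure anyone quotes:

    ```python
    # How many saved expert-hours justify the Ultra subscription?
    subscription = 249.99      # USD per month, Google AI Ultra
    expert_rate = 120.0        # USD per hour, assumed fully loaded cost
    print(f"{subscription / expert_rate:.1f} hours/month")   # ~2.1 hours
    ```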

    The alternative is API access, which Google is opening through an early access program for researchers, engineers, and enterprises. Pricing for API usage hasn’t been publicly detailed yet.


    How to access it

    Two paths, depending on what you need:

    Gemini app: If you have a Google AI Ultra subscription ($249.99/month), Deep Think is available now in the Gemini app. No waitlist, no early access—the mode is live for subscribers.

    API early access: For researchers, engineers, and enterprises who need programmatic access, Google is running an early access program. The application form takes about two minutes. Approval timelines vary.

    For most individual users, the subscription route is the only realistic option right now. The API early access appears targeted at institutional users with specific integration needs.


    What this means

    Three things make this announcement significant beyond the benchmark numbers.


    Reasoning is now a feature, not an experimental add-on. The industry has moved past the phase where chain-of-thought reasoning was a novelty. Extended reasoning modes are now an expected capability for frontier models.

    Parallel reasoning represents a genuine architectural shift. Sequential thinking (step-by-step, then verify) has limits. Parallel thinking—exploring multiple paths simultaneously with self-verification—scales differently. The benchmark results suggest this approach has legs.

    The gap between frontier models is widening. Look at the HLE spread: Gemini 3 Deep Think at 48.4%, Claude Opus 4.6 at 40%, GPT-5.2 at 36.6%. That’s not a tight race anymore. Whether OpenAI responds with architectural changes or simply pushes GPT-6 faster will shape the competitive dynamics for the rest of the year.

    The next few months will be telling. Anthropic hasn’t commented publicly on Google’s announcement, but the company rarely stays quiet for long. OpenAI faces more pressure—third place in the reasoning race isn’t where a market leader wants to be.

    And for researchers like Lisa Carbone and the Wang Lab, the practical question is simpler: does this tool solve problems I couldn’t solve before? For certain classes of problems, the early evidence suggests the answer is yes.



    I write about AI developments and their practical applications regularly. If this kind of benchmark analysis is useful to you, consider subscribing so you don’t miss the next one.

  • MiniMax M2.5 matches Claude Opus at 1/33rd the cost

    MiniMax M2.5 matches Claude Opus at 1/33rd the cost

    34 days after a $619M IPO, MiniMax claims SOTA coding performance at a fraction of competitor pricing

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    The news

    MiniMax released M2.5 on February 12, 2026, and the official announcement makes a claim that would have seemed absurd six months ago: a Chinese model matching Claude Opus 4.6 on coding benchmarks at roughly one-thirtieth the input price and one-twentieth the output price.

    The numbers are stark. SWE-Bench Verified: 80.2%. That puts M2.5 in a statistical dead heat with Anthropic’s flagship coding model. Multi-SWE-Bench: 51.3%. BrowseComp: 76.3% with context management. These are not mid-tier results.

    [Figure: MiniMax leads benchmarks – scores for MiniMax M2.5, Claude Opus 4.6, GPT-5.1, and Gemini 3 Pro across SWE-Bench, Multi-SWE, and BrowseComp]

    Two variants launched simultaneously. The standard M2.5 costs $0.15 per million input tokens and $1.20 per million output tokens, generating at 50 tokens per second. M2.5-Lightning doubles those prices and doubles the speed, hitting 100 tokens per second at $0.30/M input and $2.40/M output.
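
    To see the speed-versus-price trade-off concretely, here is a minimal sketch pricing a single 2,000-token completion on each variant at the published rates and speeds; the response length is an arbitrary assumption.

    ```python
    # Cost and wall-clock time for one 2,000-token completion per variant.
    # Prices and speeds from the announcement; response length is arbitrary.
    variants = {
        "M2.5":           {"output_usd_per_m": 1.20, "tokens_per_s": 50},
        "M2.5-Lightning": {"output_usd_per_m": 2.40, "tokens_per_s": 100},
    }

    output_tokens = 2_000
    for name, v in variants.items():
        cost = output_tokens / 1_000_000 * v["output_usd_per_m"]
        seconds = output_tokens / v["tokens_per_s"]
        print(f"{name}: ${cost:.4f} per response, ~{seconds:.0f}s to generate")
    # M2.5: $0.0024 per response, ~40s to generate
    # M2.5-Lightning: $0.0048 per response, ~20s to generate
    ```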


    The pricing earthquake

    The price gap is far more than a rounding error.

    | Model | Input ($/M) | Output ($/M) | Output ratio vs M2.5 |
    | --- | --- | --- | --- |
    | MiniMax M2.5 | $0.15 | $1.20 | 1x |
    | MiniMax M2.5-Lightning | $0.30 | $2.40 | 2x |
    | DeepSeek-R1 | $0.55 | $2.19 | 1.8x |
    | GPT-5.1 | $1.25 | $10.00 | 8.3x |
    | Gemini 3 Pro | $2.00 | $12.00 | 10x |
    | Claude Opus 4.5 | $5.00 | $25.00 | 20.8x |

    MiniMax puts it bluntly in their technical documentation: the cost of M2.5 is one-tenth to one-twentieth that of Opus, Gemini 3 Pro, and GPT-5 based on output price. The math broadly checks out: 8.3x versus GPT-5.1, 10x versus Gemini 3 Pro, and 20.8x versus Opus.

    For a developer running heavy inference workloads, this is the difference between a $500 monthly bill and a $10,000 one: Claude Opus costs roughly twenty times more per output token. At scale, infrastructure costs that once required budget approval become line items small enough to expense without a second thought.
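
    Here is where those figures come from: a minimal sketch assuming a workload of 20M input and 400M output tokens per month (the volumes are illustrative assumptions).

    ```python
    # Monthly bill at the published per-million-token rates.
    # Workload volumes are illustrative assumptions.
    INPUT_TOKENS = 20_000_000
    OUTPUT_TOKENS = 400_000_000

    prices = {  # model: (input $/M, output $/M)
        "MiniMax M2.5":    (0.15, 1.20),
        "Claude Opus 4.5": (5.00, 25.00),
    }

    for model, (in_rate, out_rate) in prices.items():
        bill = INPUT_TOKENS / 1e6 * in_rate + OUTPUT_TOKENS / 1e6 * out_rate
        print(f"{model}: ${bill:,.0f}/month")
    # MiniMax M2.5: $483/month
    # Claude Opus 4.5: $10,100/month
    ```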

    [Figure: AI cost efficiency – output pricing per million tokens for MiniMax M2.5, DeepSeek-R1, GPT-5.1, Gemini 3 Pro, and Claude Opus 4.5]

    The Burnwise pricing tracker confirms these figures align with current market rates for frontier models. MiniMax is not discounting into oblivion; they are pricing at a structural level that Western competitors have not touched.


    Benchmark claims and verification caveats

    80.2% on SWE-Bench Verified places M2.5 at the frontier. The HuggingFace model card provides methodology details, but independent verification remains thin.

    Standard disclaimers apply. Benchmark gaming is an industry sport. SWE-Bench has seen contested results where aggressive test-time compute and prompt engineering inflate scores that don’t translate to real-world performance. MiniMax has opened model weights for scrutiny, which is more than some competitors offer, but the broader research community has not yet stress-tested these claims.

    What is verifiable: the model is available through OpenRouter for immediate use. Developers can run their own evaluations. The pricing is real. The benchmarks are claims that may or may not survive third-party replication.
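
    If you want to run your own evaluation, a minimal sketch via OpenRouter's OpenAI-compatible API follows; the model slug is my assumption, so confirm it against OpenRouter's model listing before relying on it.

    ```python
    # Smoke-test M2.5 through OpenRouter's OpenAI-compatible endpoint.
    # The model slug below is an assumption; verify it on openrouter.ai.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",
    )

    response = client.chat.completions.create(
        model="minimax/minimax-m2.5",  # hypothetical slug
        messages=[{
            "role": "user",
            "content": "Fix the bug in: for i in range(1, len(xs)): total += xs[i]",
        }],
    )
    print(response.choices[0].message.content)
    ```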

    “Based on output price, the cost of M2.5 is one-tenth to one-twentieth that of Opus, Gemini 3 Pro, and GPT-5.” — MiniMax technical documentation


    The Forge framework

    MiniMax credits a custom reinforcement learning infrastructure called Forge for the performance gains. Their post-training writeup describes a 40x training speedup over baseline approaches, enabled by 200,000+ training environments.

    The technical claim addresses the compute bottleneck. If Forge delivers 40x efficiency gains, the economics of frontier model training shift. A training run that would cost $100 million at standard efficiency might cost $2.5 million with Forge-style optimisation. That’s the difference between a project requiring venture capital and one that fits in a Series A budget.

    Forge focuses on agentic capabilities, training models to use tools, browse the web, and manage long-horizon tasks. The BrowseComp score of 76.3% reflects this focus. MiniMax is building agent infrastructure where the model serves as the reasoning core.

    [Figure: Forge RL framework – 200K+ training environments, reward signals, and model optimisation feeding M2.5's agent capabilities (tool use, web browsing, long-horizon tasks)]

    The platform claims 10,000+ user-built “Experts” on their MiniMax Agent system. These are custom workflows and tools built by developers using the underlying model. The number suggests genuine adoption rather than API keys claimed and left unused.


    The Chinese AI wave context

    MiniMax didn’t emerge from nowhere. The company went public on January 9, 2026, raising $619 million in a Hong Kong IPO that valued the company at over $11.5 billion on debut. The stock jumped 109% on its first trading day. Forbes reported that founder Yan Junjie became a billionaire in the process.

    Thirty-four days passed between that IPO and the M2.5 release. That timing isn’t coincidental. Public markets demand growth narratives, and delivering a frontier model at disruptive pricing within weeks of listing sends a clear signal to investors about R&D velocity.

    MiniMax isn’t alone either. Xinhua reports that Zhipu released GLM-5 on February 11, one day before MiniMax’s announcement. Chinese AI labs are shipping in tight succession, a cadence that suggests both state support and intense domestic competition. Western labs now face simultaneous pressure from multiple Chinese competitors rather than a single DeepSeek-style surprise.

    Hacker News discussion captured the shift. One commenter called MiniMax their “fast workhorse for tool calling.” Another noted that Chinese models are delivering “high quality drops for the perfect trifecta of leading models.” Developer sentiment has moved from skepticism to qualified acceptance.

    [Figure: AI innovation wave – timeline from DeepSeek-R1 (Q4 2025) through MiniMax’s IPO (January 2026) and Zhipu’s GLM-5 (February 2026) to the M2.5 release]

    What to watch next

    Three questions will determine whether M2.5 reshapes the market or becomes another discounted model fighting for scraps.

    First, does independent benchmarking confirm the 80.2% SWE-Bench score? If third-party evaluations show regression to 70% or below, the pricing advantage matters less. Developers will pay premium rates for reliability; they won’t pay any rate for uncertainty.

    Second, how do Western labs respond? Anthropic, OpenAI, and Google have not faced genuine price competition from Chinese models with claimed frontier performance. A price war would compress margins across the industry. The alternative is segmentation, where Western labs retreat to enterprise contracts and regulatory moats while Chinese models dominate price-sensitive markets.

    Third, what does MiniMax ship next? A single model release proves capability. A sustained cadence of improvements proves the underlying infrastructure works. The 34-day gap between IPO and M2.5 suggests velocity, but velocity needs to continue.

    The pricing is real. The benchmarks are claims. The market will sort the difference.




    I write about AI model releases and their market implications regularly. If this kind of breaking coverage is useful to you, consider subscribing so you don’t miss the next one.

  • Brave launches revamped search API built for AI apps

    Brave launches revamped search API built for AI apps

    A technical breakdown of the new LLM Context API, pricing, and what it means for AI developers

    If you want these landing in your inbox regularly, subscribe to my newsletter.


    The launch

    Brave just shipped something that caught the AI developer community off guard. On February 12, the company launched its LLM Context API, a retooled search infrastructure built specifically for AI applications rather than traditional web queries.

    The new API represents more than a feature update. It’s a structural shift in how Brave positions its search technology in a post-Bing API world. The company now serves 22 million daily AI answers through its search engine, with over 200,000 developers signed up for API access.

    The timing is deliberate. Microsoft retired the Bing Search API in August 2025, leaving a gap in the market for independent search infrastructure. Brave had already launched its AI Grounding API on August 5, exactly one week before the Bing shutdown. The LLM Context API is the next evolution of that infrastructure.

    [Figure: Brave Search API by the numbers – 22 million daily AI answers, 200,000+ developers, 35+ billion indexed pages, 100 million daily updates]

    The independent index

    The search API landscape has consolidated around a handful of providers in recent years. Google’s API remains the dominant player. Microsoft’s Bing API served as the primary independent alternative until its retirement. Serper, Tavily, and SerpAPI have carved out niches by wrapping existing search engines or building specialised AI-native tooling.

    [Figure: Brave Search API evolution – from the AI Grounding API launch through the Bing API retirement to the LLM Context API]

    Brave enters this market with something the others lack: an independent search index spanning more than 35 billion pages, updated with over 100 million daily changes. That’s not a wrapper around Google or Bing. That’s original infrastructure.

    The independence matters for AI developers building retrieval-augmented generation (RAG) systems. Brave Search counts companies like Mistral AI among its API customers. Snowflake built a native integration for its Cortex platform. These aren’t small experiments. They’re production infrastructure decisions.

    The practical difference: when you query Brave’s API, you’re not getting Google results filtered through another service. You’re getting results from an independent crawl and ranking system. For some developers, that’s a feature. For others, it’s a risk.

    [Figure: Independent index vs wrapper approach – wrappers sit on the Google/Bing index; Brave queries its own]

    Technical specifications

    The LLM Context API exposes two primary endpoints. The Search endpoint returns structured web results with content extraction. The Answers endpoint provides direct responses synthesised from multiple sources, designed specifically for LLM grounding.
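
    For orientation, here is a minimal sketch of a raw call against Brave's documented web-search endpoint; the LLM Context API paths may differ, so treat the URL and response shape as assumptions to verify in Brave's API docs.

    ```python
    # Minimal Brave Search API request with the requests library.
    # Endpoint and response shape match Brave's documented web-search API;
    # the new LLM Context API paths may differ -- check the official docs.
    import requests

    response = requests.get(
        "https://api.search.brave.com/res/v1/web/search",
        headers={"X-Subscription-Token": "YOUR_API_KEY"},
        params={"q": "retrieval-augmented generation", "count": 5},
        timeout=10,
    )
    response.raise_for_status()
    for result in response.json()["web"]["results"]:
        print(result["title"], "->", result["url"])
    ```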

    Pricing breaks down into three tiers:

    | Plan | Cost | Rate Limit | Use Case |
    | --- | --- | --- | --- |
    | Free | $5 monthly credits | 1 QPS | Development, testing |
    | Search (Base) | $3 per 1,000 requests | 20 QPS | High-volume retrieval |
    | Search (Pro) | $5 per 1,000 requests | 50 QPS | Scale applications |
    | Answers | $4 per 1K requests + $5 per million tokens | 2 QPS | RAG systems, chatbots |

    The latency numbers are respectable. Brave reports p90 latency under 600ms for the LLM Context API. For AI applications where every millisecond counts in the user experience, that puts Brave in the same performance band as established competitors.

    Rate limits scale with plan tier. The Free tier caps at 1 query per second, sufficient for development work. Base plans support 20 QPS, Pro plans reach 50 QPS, and Enterprise customers negotiate custom limits. For comparison, Serper offers similar QPS tiers but relies on Google’s index rather than independent crawl data.

    The API is OpenAI SDK compatible, meaning developers can slot it into existing codebases with minimal refactoring. That’s a pragmatic design choice that lowers switching costs.
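
    In practice, OpenAI SDK compatibility should mean something like the sketch below; the base URL and model name are placeholder assumptions, not documented values.

    ```python
    # Sketch of the OpenAI-SDK-compatible integration Brave advertises.
    # Base URL and model name are placeholders -- consult Brave's docs.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.search.brave.com/llm/v1",  # hypothetical path
        api_key="YOUR_BRAVE_API_KEY",
    )

    answer = client.chat.completions.create(
        model="brave-answers",  # hypothetical model name
        messages=[{"role": "user", "content": "What did Brave launch on February 12?"}],
    )
    print(answer.choices[0].message.content)
    ```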

    [Figure: Brave Search API tiers – Free, Base, Pro, and Enterprise pricing and QPS limits]

    Security and compliance

    Brave differentiates itself on one technical dimension that competitors haven’t matched: Zero Data Retention. Query logs aren’t stored. Search histories don’t persist. For enterprises building AI systems that process sensitive data, this is either a genuine advantage or a compliance checkbox, depending on who you ask.

    The company has achieved SOC 2 Type II attestation, the audit standard that enterprise security teams require before approving vendor integrations. Chegg, the education platform, provides a public testimonial citing Brave’s privacy posture as a key factor in their adoption decision.

    The privacy positioning aligns with Brave’s broader brand identity. The company built its reputation on browser privacy features and ad-blocking technology. Extending that philosophy to API infrastructure is consistent, if not revolutionary.

    [Figure: Brave Search API security features – Zero Data Retention, SOC 2 Type II, privacy-first policies]

    Benchmarks and the Grok problem

    Brave commissioned independent benchmark testing to validate its search quality against competitors. The evaluation methodology used human raters scoring answer quality across multiple dimensions.

    The results tell a nuanced story. Brave’s Ask feature scored a 4.66 average rating. ChatGPT scored 4.32. Google AI Mode reached 4.39. Perplexity landed at 4.01. These numbers suggest Brave’s search quality sits in the top tier of AI search tools.

    But one competitor scored higher.

    Grok, xAI’s search-enabled model, achieved a 4.71 rating, outperforming Brave by 0.05 points. The margin is narrow. The ranking is clear. Brave isn’t claiming the top spot. They’re claiming a competitive position in the leading pack.

    The benchmarks come with a caveat worth noting. The evaluation was conducted on November 30, 2025, making the data roughly two and a half months old at launch time. In the fast-moving AI search space, that’s an eternity. Model updates, ranking algorithm changes, and feature launches could shift these numbers significantly.

    Brave also reports a 94.1% F1-score on SimpleQA benchmarks using multi-search with reasoning, and 92.1% on single-search queries. These technical metrics measure factual accuracy and answer completeness, complementing the human preference scores.

    [Figure: AI search benchmark ratings – Grok, Brave, Google AI Mode, ChatGPT, and Perplexity]

    Competitive positioning

    The search API market fragments into three categories. Google wrappers like Serper offer low pricing ($0.30–2.00 per 1,000 requests) but depend entirely on Google’s infrastructure and terms. AI-native tools like Tavily charge premium rates (approximately $8–10 per 1,000 requests) for search infrastructure optimised for LLM consumption. Multi-engine aggregators like SerpAPI provide broad coverage at the top of the price range (around $15 per 1,000 requests).

    Brave sits in the middle on pricing at $3 per 1,000 requests for base search (or $5 for Pro), with the Answers tier adding token costs for synthesised responses. The company positions itself as offering independent infrastructure at non-premium prices.

    The competitive landscape looks like this:

    | Provider | Price per 1K | Index Type | AI Optimised |
    | --- | --- | --- | --- |
    | Serper | $0.30–2.00 | Google wrapper | Partial |
    | Brave Search | $3.00–5.00 | Independent | Yes |
    | Tavily | $8–10 | AI-native | Yes |
    | SerpAPI | $15 | Multi-engine | Partial |

    The value proposition for developers depends on what they’re building. For simple keyword retrieval, Serper’s lower pricing makes sense if Google dependency isn’t a blocker. For RAG systems requiring grounded, synthesised answers, Brave and Tavily compete directly on features rather than price alone.

    The independent index becomes relevant when developers need results that differ from Google’s ranking. Brave’s crawl covers enough of the web to return competitive results, but the ranking algorithms will surface different content in edge cases. Whether that’s a feature or a bug depends on the use case.

    [Figure: Search API pricing per 1K requests – Serper, Brave, Tavily, and SerpAPI]

    Who should switch

    For developers building RAG systems, the LLM Context API provides search infrastructure purpose-built for LLM consumption. The Answers endpoint returns synthesised responses rather than raw search results, reducing the need for developers to build their own synthesis layers. The OpenAI SDK compatibility minimises integration overhead.

    For enterprises evaluating search vendors, Brave offers something competitors don’t: Zero Data Retention with SOC 2 compliance. For regulated industries or privacy-sensitive applications, that’s either a decisive advantage or a procurement checkbox. The independent index may matter more or less depending on whether Google’s ranking aligns with the enterprise’s content needs.

    For developers concerned about vendor lock-in, the independent index provides strategic optionality. Google could change its API terms. Microsoft already retired Bing. Brave’s infrastructure represents a hedge against platform risk, assuming the company maintains its crawl quality and uptime.

    The customer list suggests early traction. Cohere, Together.ai, You.com, and Kagi have integrated Brave Search in various capacities. These aren’t proof points that Brave has won the market. They’re evidence that the independent search API niche has real demand.

    The question for developers is whether Brave’s value proposition justifies the integration effort. The API is technically sound. The pricing is reasonable. The benchmarks show competitive quality. But switching costs exist, and Google’s dominance in search creates familiarity bias.

    [Figure: Decision flowchart for search API selection – independence, budget, privacy, and AI-optimisation criteria]



    I write about AI infrastructure and developer tools regularly. If this kind of technical breakdown is useful to you, consider subscribing so you don’t miss the next one.