I've been deeply involved in everything AI for a while now. I've built tools for Claude Code, I use these tools daily and know them inside out, and I've been tracking who's actually shipping versus who's just putting out press releases. I've reached a conclusion that I think is going to be controversial: Anthropic have already won. Unless OpenAI pull a rabbit out of a hat, I don't think they can recover.
OpenAI still has more users and ChatGPT dominates consumer mindshare, but when you look at the metrics that actually matter - enterprise adoption, path to profitability, product quality, developer loyalty - Anthropic is running away with it. The gap isn't closing, it's widening, and I don't see what changes that trajectory.
The Numbers That Actually Matter
Anthropic now commands 32% of the enterprise LLM API market while OpenAI has dropped to 25%. That's a complete inversion from early 2024, when OpenAI was the default enterprise choice. In 18 months Anthropic went from a niche alternative to the dominant enterprise AI provider, and the revenue trajectory tells the same story. They hit $1 billion ARR by late 2024, then reached $7 billion by October 2025. That's 600% growth in under a year. OpenAI roughly tripled in the same period, which is impressive by normal standards, but they're being outpaced by a company with a fraction of their headcount and brand recognition.
80% of Anthropic's revenue comes from business customers while OpenAI's enterprise split sits around 40%. Consumer revenue might look good on a slide deck, but it doesn't build a sustainable business when you're burning billions on compute.
Anthropic just signed a term sheet for $10 billion at a $350 billion valuation, and Sequoia Capital is joining the round. This is the same Sequoia that's already invested in both OpenAI and xAI. When a VC breaks the unwritten rule of not backing direct competitors, it tells you something about which direction things are heading.
Profitability and the Enterprise Moat
Anthropic is projected to reach profitability by 2028. OpenAI's internal documents show $74 billion in cumulative operating losses through that same year, and Sam Altman has admitted the $200/month ChatGPT Pro subscription is currently unprofitable because users hammer it so hard. Their flagship consumer product loses money on power users, which means every heavy user is a liability on the balance sheet.
When 80% of your revenue comes from businesses paying for API access and enterprise contracts, your unit economics look completely different than when you're subsidising consumers. For context on just how different the financial pictures are: OpenAI is projecting $74 billion in cumulative losses, xAI is burning $1 billion per month on infrastructure, and Anthropic has enterprise contracts with healthy margins and a clear profitability timeline. In a capital-intensive industry where everyone is projecting massive losses for years, the company that reaches profitability first wins by surviving long enough to see the others stumble.
I've written before about why AI isn't making developers as productive as we think, and the same logic applies to AI companies themselves. Raw user numbers don't mean much if each user costs you money, and enterprise contracts with healthy margins beat consumer subscriptions every time.
Amazon has invested $8 billion total in Anthropic, Google over $3 billion, and Microsoft $5 billion in November 2025. Claude is now the only frontier model available across AWS, Azure, and Google Cloud, which for enterprise buyers is massive because they're not locked into a single cloud provider and can run Claude wherever their existing infrastructure lives.
Anthropic also created MCP as the standard for how AI connects to enterprise data systems, and when OpenAI, Google, and Microsoft all adopt your protocol, that's not competition - that's validation. Anthropic defined how AI talks to enterprise software, and now everyone else is building to their spec.
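To make "building to their spec" concrete: MCP servers are declared in a small JSON config that tools like Claude Code read, mapping a server name to the command that launches it. Here's a minimal sketch - the server name, package, and connection string are placeholders, and `@modelcontextprotocol/server-postgres` is one of the reference servers Anthropic publishes:

```json
{
  "mcpServers": {
    "company-db": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/example_db"
      ]
    }
  }
}
```

Once a server like this is registered, any MCP-aware client - Claude Code, or the OpenAI and Google tools that adopted the protocol - can discover its tools and query that data source. That's what it means for everyone to be building to Anthropic's spec.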
Claude Code Has No Competition
I've gone from having opinions about AI coding tools to having one very strong opinion: Claude Code is so far ahead of everything else that the comparison barely makes sense anymore.
I use Claude Code exclusively now, not as a preference among options but as the only option that actually works for how I develop. I'm always in the terminal, I never use an IDE, and if I need to look at code I use vim. Claude Code fits this workflow perfectly because it was designed for terminal-native development from the ground up. I've found the productivity gains are genuine and significant, and I've written about managing multiple Claude Code sessions and the workflows that emerge when you treat AI as a development partner rather than a fancy autocomplete. When Anthropic says that 90% of Claude Code's own code was written by Claude Code, I believe them because I've experienced similar patterns across every project I've worked on recently.
My typical day now involves spinning up Claude Code sessions in the terminal, describing what I need, and reviewing what comes back. I hardly look at the code directly, and when I do it's in vim. The entire development loop happens in conversation. Senior engineers at various companies report it "recreating a year's worth of work in an hour", and that matches my experience - not every time, but when the tool clicks with a particular problem domain the speed is genuinely startling.
Claude holds 42-54% of the code generation market while OpenAI sits at 21%. GitHub Copilot still claims more total users, but much of Copilot's recent growth came from adding Claude models to its offerings. When your competitor's flagship product improves by adopting your technology, you've already won the technical argument. On benchmarks Claude Sonnet 4.5 leads SWE-bench Verified at 77.2%, with GPT-5 at 74.9% and Grok 3 at 70.8%. The gap isn't massive but it's consistent, and it compounds when you factor in the tooling built around those models.
Claude Code went from essentially zero to a billion-dollar product in six months. Nothing fancy - just a tool that understood what developers actually needed: terminal-native, fast, and deeply integrated with how real development work happens. Developers were waiting for something that fit their actual workflow rather than another IDE plugin, and the growth trajectory tells you just how much unmet demand there was.
I'm on the £90/month Claude Max plan and it's more than I need right now. I have access to Claude Code, Cowork, extended context, everything. Compare that to Cursor which I don't use at all - it's too expensive for what it offers and it's an IDE wrapper which doesn't fit how I work. Claude Code in the terminal with vim when I need to inspect something is worth more to me than any IDE integration.
Cowork and the Integrated Ecosystem
As if Claude Code wasn't enough, Cowork arrived and expanded what I thought was possible with these tools. It gives Claude direct access to local folders on your computer to complete multi-step tasks, and this isn't a chatbot you paste context into - it's an agent that can navigate your actual file system, understand project structure, and execute complex workflows across multiple files and data sources.
I used it recently for EabhaSeq, my AI-enhanced prenatal testing project. I needed to find both funding opportunities and initial researcher users, the kind of task that would normally take hours of manual searching and cross-referencing and synthesis. Cowork handled it by actually working through relevant sources, understanding what I was building, and pulling together actionable leads. That's just scratching the surface of what it can do.
It replaces entire categories of tools like Strawberry Browser and other agent frameworks that required extensive setup and prompt engineering. I've used Strawberry Browser and similar tools before, and the setup friction alone is enough to make you question whether the task is worth automating. Cowork just works because it's built directly into the Claude ecosystem and shares context with everything else. That context sharing is the difference between a bolted-on tool and something that genuinely understands what you're doing.
Anthropic aren't just building models, they're building integrated workflows that compound on each other: Claude Code for development, Cowork for research and automation, the API for production systems, MCP for enterprise integration. Each piece makes the others more valuable. When I'm in Claude Code building a feature and need to research something, Cowork handles it using the same context. When I need to ship that feature to production, the API is right there. That's an ecosystem, and ecosystems are incredibly hard to compete against once they reach critical mass.
OpenAI's Unforced Errors
OpenAI's 2024 was rough. CTO Mira Murati left after 6.5 years, Chief Scientist Ilya Sutskever departed, co-founder John Schulman went straight to Anthropic, and Chief Research Officer Bob McGrew left too. Only three of the original thirteen founding members remain.
These weren't random departures. Safety-focused researchers saw the direction things were heading - the pivot toward pure commercial growth, the nonprofit-to-for-profit drama, the controversial investor terms asking backers not to fund competitors. It all pointed to a company losing its original mission. Many of these departing safety researchers ended up at Anthropic, which is not a coincidence. The company founded by ex-OpenAI researchers concerned about safety has become the destination for more ex-OpenAI researchers concerned about safety.
Leadership instability matters more than people realise, especially for enterprise sales. Having worked in regulated industries, I can tell you that enterprise buyers care deeply about vendor stability. They're signing multi-year contracts worth millions and they need to trust that their AI vendor will be around, stable, and consistent. Anthropic's steady leadership with Dario and Daniela Amodei still at the helm and low executive turnover is a genuine competitive advantage. It might not show up in benchmark comparisons, but it absolutely shows up in enterprise sales cycles.
Those investor terms asking backers not to fund competitors deserve a second look. When you're the clear market leader you don't need to play defence like that, and it's the kind of move that suggests internal awareness that the competitive position is weaker than the public narrative.
The xAI Wildcard
I can't write about this race without addressing xAI, because I think they're the most interesting player that most analysis gets wrong. I'm genuinely bullish on them and here's why.
The xAI consumer product is quite far behind Claude right now. I use Grok sporadically on the app and the product is missing features that Claude has had for ages - no projects on mobile, memory is inconsistent, and Claude's tone and personality are noticeably better. There's something about how Claude engages that feels more thoughtful, though I can't precisely articulate why.
When I've tested Grok on substantive tasks though, the model quality is at least as good as Claude. I've run it through detailed challenges like building a PC with specific budget constraints, prenatal research deep dives, and code review on complex systems. Grok holds its own and sometimes surprises me. The reasoning depth is there and the knowledge is there, but what's missing is everything around the model - the product experience, the workflow integration, the personality that makes you want to keep using it.
OpenAI's gap with Anthropic is showing up in both model quality and product quality. xAI's gap is almost entirely product, and the underlying model is competitive. That means the gap is closable with execution, and if there's one thing xAI has demonstrated it's execution speed.
xAI built Colossus, a 200,000+ GPU supercomputer, in 122 days. The typical data centre construction timeline is 24 months, so they compressed that by a factor of six. The X merger gives them access to real-time data from 600 million monthly users, the government contracts are rolling in with $200 million from DoD and GSA access to federal agencies, and the fundraising ($42 billion in 30 months at a $230 billion valuation) shows serious investor confidence.
Yes they're burning $1 billion per month and yes the consumer product needs significant work, but I've seen what happens when a team with that kind of execution speed gets serious about product polish. The infrastructure moat they're building is real, and once the product catches up they'll be formidable.
My prediction is that xAI becomes the primary competitor to Anthropic within two years, not OpenAI. OpenAI are too distracted, too unstable, and too focused on consumer plays that don't build sustainable businesses. xAI has the model quality, the infrastructure, and the hunger. They just need to ship product improvements at the same pace they build data centres.
What Could Prove Me Wrong
OpenAI has 800 million weekly active users versus Claude's 30 million monthly, and consumer dominance creates optionality they could leverage in unexpected ways. GPT-5 is competitive on benchmarks and Microsoft's backing means they can survive the burn longer than almost anyone. Google is the elephant in the room too - Gemini is improving rapidly, and if Google gets serious they have distribution advantages through Search, Android, and Workspace that dwarf everyone else's.
None of that changes the fundamental dynamic though. Enterprise AI is where the sustainable money lives and Anthropic is winning that race decisively. As I've written about before, the developer community is voting with their workflows and they're choosing Claude.
The AI market is fragmenting. Anthropic has won enterprise and coding, OpenAI retains consumer and brand recognition, and xAI is building infrastructure at a pace nobody can match. The question is which segment matters most for long-term survival, and I think that answer is clear.
Anthropic's 80% enterprise revenue mix, path to 2028 profitability, stable leadership, dominant position in code generation, and integrated product ecosystem put them in a position that's going to be very difficult to challenge. OpenAI's consumer fortress and Microsoft backing give them staying power, but staying power isn't the same as winning. xAI's infrastructure and execution speed make them the dark horse to watch, but they need to close the product gap before they can seriously compete for the customers that actually pay the bills.
Based on everything I'm seeing and experiencing daily, I wouldn't bet against Claude. And right now, I'm not betting on anyone else.
