Claude vs ChatGPT

Brilliant Noise 6 May 2026

Beyond Claude & ChatGPT: what OpenAI and Anthropic’s choices mean for your business

We refresh this page regularly to keep pace with fast-moving AI platforms and policies.

For most of the past three years, choosing between Claude and ChatGPT has been a feature comparison. Which one writes better? Which has the longer context window? Which integrates more cleanly with our stack? Those are still useful questions. They’re no longer sufficient ones.

OpenAI and Anthropic are now political and economic actors at scale. The choices each is making – about who they work with, what they refuse, where they invest, which governments they court – are reshaping what their products can do, who can use them, and how. Tool choice is now also strategic alignment.

That doesn’t mean one company is right and the other is wrong. Both have made coherent, defensible choices. But understanding those choices – what they reveal about where each company is going – matters more now than the latest benchmark.

Two paths through 2025-26

Both companies have raised vast sums of capital and both are now infrastructure for parts of the global economy. But their recent decisions tell different stories.

OpenAI has bound itself tightly to the Trump administration’s industrial strategy. In January 2025, Sam Altman appeared at the White House to launch Stargate – a $500 billion AI infrastructure venture in partnership with SoftBank, Oracle and MGX. Stargate has since grown to roughly seven gigawatts of planned US capacity, with international sites in the UAE, Norway and Argentina. In February 2026, OpenAI signed a contract with the Department of Defense to deploy its models in classified environments. It has proposed a “Classified Stargate” purpose-built for government workloads, called for tax breaks on AI infrastructure, and pushed for a strategic reserve of raw materials needed for chip manufacturing. Its consumer business continues to scale; valuation reached approximately $852 billion in early 2026.

Anthropic has taken a markedly different path. CEO Dario Amodei publicly criticised Stargate as “chaotic” and opposed the rescission of the Biden administration’s AI Executive Order. The company refused to agree to broader “all lawful purposes” contracting language with the Department of Defense – a position that prompted a “supply chain risk” designation, with the DC Circuit declining to lift it in April 2026. Anthropic formed AnthroPAC, an employee-funded political action committee, and continues to position itself publicly as the more safety-focused frontier lab. Its February 2026 Series G round closed at $30 billion, valuing the company at $380 billion. Strategic focus skews heavily toward enterprise and developer use cases rather than consumer.

Some things are common to both. Both are part of the Frontier Model Forum’s efforts to detect Chinese distillation campaigns – attempts by Chinese AI labs to train competing models on the outputs of leading US ones. Both have substantial defence-adjacent revenue lines. Both are losing money: HSBC estimates OpenAI’s 2026 losses alone at around $14 billion, with cumulative losses expected to reach $44 billion by 2028. Anthropic’s profitability picture is less publicly reported but similarly distant. Neither company is a cash cow.

Where corporate strategy meets product strategy

The strategic differences aren’t abstract. They shape what each company prioritises in its products – and that affects what your team can do with them.

OpenAI’s path is shaped by consumer scale and government engagement. Its product strategy reflects this: a wide consumer ecosystem, multimodal capabilities (image, video, voice), broad plugin architecture, and increasingly purpose-built government deployments. The bets that pay off here are reach and ubiquity. The trade-off is that the product line is sprawling, with roadmap decisions sometimes optimised for breadth more than depth.

Anthropic’s path is shaped by enterprise focus and a publicly stated commitment to safety research. Its product strategy reflects this: long context windows, strong reasoning and writing performance, fewer consumer features, and infrastructure aimed at developers and enterprise integrations (Claude Code, the Model Context Protocol). The bets that pay off here are depth and reliability. The trade-off is reach – Anthropic’s consumer presence is a fraction of ChatGPT’s, and its commercial visibility outside enterprise procurement teams remains limited.

Both companies are converging on agentic capabilities, but from different starting points. OpenAI is wrapping agency around its broad ecosystem; Anthropic is exposing it through developer primitives. Neither approach is inherently better. They will produce different tools, with different strengths, on different timelines. We’ll go deeper on this in our forthcoming piece on AI agents.

The geopolitical frame

Two larger forces sit behind the strategic divergence. Both are worth naming.

The first is US-China AI competition. The Trump administration has framed AI development as an industrial and national security priority, with explicit reference to China as the rival. Stargate sits inside that frame, as does the Department of Defense’s accelerated procurement of frontier AI from multiple vendors – Google’s Gemini, OpenAI’s models, xAI’s Grok, and intermittently Anthropic’s Claude. The Frontier Model Forum’s collaboration on detecting Chinese distillation campaigns indicates that even commercial competitors are now coordinating on a shared adversary.

The second is the question of how AI labs should engage with the state. OpenAI’s answer has been close, deep collaboration, on the basis that influence sits inside the room rather than outside it. Anthropic’s answer has been more arm’s-length, with publicly stated red lines and a willingness to accept commercial cost to maintain them. Each approach has principled people behind it. Each has consequences that flow through to product, hiring, regulation and reputation.

The two companies’ political action committees are now part of the wider tech-industry policy machine alongside Google, Microsoft, Amazon and Meta. The era of treating AI labs as politically neutral utility providers is over.

What this means for your AI strategy

For marketing, comms and strategy leaders, here are four shifts in how to think about AI vendor choice.

Treat vendor choice as a board-level question. A few years ago, AI tooling decisions sat with IT or the innovation team. Today they touch reputation, regulation, supply chain risk, intellectual property and competitive strategy. The question of which models you use, and how you justify that choice, belongs alongside other strategic supplier decisions.

Anticipate roadmap divergence. OpenAI and Anthropic are increasingly building for different primary audiences. Expect features that matter to consumers and government workloads to land first on OpenAI; expect features that matter to enterprise integrations and developer tooling to land first on Anthropic. Plan against that pattern rather than waiting to react to it.

Build vendor-flexible capability. The single most important hedge against any of this is internal AI capability that doesn’t depend on a single vendor. Teams that have learned how to assess prompts, outputs and use cases across multiple models are far more resilient to roadmap changes, pricing shifts and the political weather around any one company.
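In practice, vendor flexibility often starts with a thin abstraction: a way to run the same prompt and quality check against every model in your stack. Below is a minimal sketch of that idea. The model functions here are hypothetical stand-ins (real adapters would call each vendor’s SDK); the names `run_eval`, `EvalResult` and the stub models are illustrative, not any vendor’s API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# An adapter takes a prompt and returns model text.
# Real adapters would wrap each vendor's SDK; these are stubs.
ModelFn = Callable[[str], str]

@dataclass
class EvalResult:
    model: str
    output: str
    passed: bool

def run_eval(prompt: str,
             models: Dict[str, ModelFn],
             check: Callable[[str], bool]) -> List[EvalResult]:
    """Run one prompt against every registered model and score each output
    with the same quality check, so comparisons stay vendor-neutral."""
    results = []
    for name, fn in models.items():
        output = fn(prompt)
        results.append(EvalResult(model=name, output=output, passed=check(output)))
    return results

# Hypothetical stand-in adapters for illustration only.
models: Dict[str, ModelFn] = {
    "claude":  lambda p: f"[claude] {p.upper()}",
    "chatgpt": lambda p: f"[chatgpt] {p.upper()}",
}

results = run_eval("summarise q2 results", models,
                   check=lambda out: "Q2" in out)
for r in results:
    print(r.model, r.passed)
```

The point of the pattern is the seam, not the stubs: because prompts and checks live on your side of the interface, swapping or adding a vendor is a one-line change to the registry rather than a rewrite of your workflows.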

Be deliberate about your reasons. Most organisations will rationally use multiple models from multiple vendors. That’s fine. What matters is being able to articulate why each is in your stack. “We use Claude for X because of Y, and ChatGPT for A because of B” is a defensible position. “We use whatever someone in marketing signed up for last year” is not.

What position to take?

It’s tempting to read all this and conclude that one company is being irresponsible and the other is being precious. But both are making serious choices in genuinely uncertain conditions.

OpenAI’s bet is that the safest path is to be deeply involved in shaping how AI gets deployed at scale, including by governments, including in defence. Anthropic’s bet is that the safest path is to maintain explicit limits and accept the commercial consequences. Both are coherent positions. Both involve trade-offs. Neither is obviously correct.

What’s clear is that neither company is a neutral utility. The decisions made in San Francisco boardrooms, in Washington meetings and in policy briefings are shaping the tools that show up on your desktop. The job of leadership now is to look past the chatbot interface and see the strategic context behind it.

Used well, both Claude and ChatGPT are extraordinary tools. Choosing between them – or, more often, choosing to use both – is now one of the more consequential supplier decisions a leader will make this year.

If you spot a change in either company’s strategy that affects this guidance, tell us. We keep this page updated so it stays practical and current.

Last updated: May 2026