
Anthropic Secures All of xAI's Colossus Compute

AIntelligenceHub · 6 min read

Rivals turned partners: Anthropic just secured all compute at xAI's Colossus 1 supercomputer, adding 220,000 Nvidia GPUs to Claude's infrastructure while Claude Code rate limits double immediately.

In February 2026, Elon Musk wrote on X that Anthropic "hates Western civilization." Last week, he told his followers he'd spent time with senior Anthropic team members and was "impressed." Two days later, Anthropic signed a deal to use all of the compute capacity at SpaceX's Colossus 1 data center, making it one of the more unexpected partnerships in the AI industry's recent history.

The official announcement from Anthropic details a deal giving the company access to more than 300 megawatts of new computing capacity and over 220,000 Nvidia GPUs, available within the month. Colossus 1, the massive Memphis, Tennessee data center originally built by xAI to run Grok, is now part of Claude's infrastructure.

SpaceX and xAI merged earlier this year, combining Elon Musk's rocket company and his AI startup. That means Anthropic is now renting the entire supercomputer that its direct rival built. The deal is unusual. It's also exactly what both companies needed.

Why Anthropic Needed Colossus Compute Right Now

The financial context makes the urgency clear. Anthropic's annualized revenue run rate crossed $30 billion in April 2026, up from $9 billion at the end of 2025. CEO Dario Amodei disclosed this week that revenue had grown 80-fold year over year as of Q1, far exceeding the company's internal projection of 10x growth. He called the pace "too hard to handle."

Claude Code is driving nearly all of that acceleration. The AI coding assistant Anthropic launched in mid-2025 caught fire with enterprise development teams faster than anyone predicted. Those teams aren't just using it for code completion: they're running autonomous agent workflows, routing large-scale refactoring jobs, and building internal tooling pipelines that consume Claude models continuously throughout the workday. That kind of usage burns through compute allocations in hours.

The scale is significant. Enterprise teams running heavy agent workloads can consume the equivalent of hundreds of dollars per developer per day in API costs. When those workloads run autonomously through multi-step pipelines, a single session can push against rate limits without any human actively managing token spend. For a company suddenly serving thousands of enterprise accounts with these patterns, the gap between available compute and actual demand becomes a product problem quickly.
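To make "hundreds of dollars per developer per day" concrete, here is a back-of-the-envelope sketch. None of these figures come from the announcement: the per-token prices and the throughput numbers are illustrative assumptions, not published rates.

```python
# Back-of-the-envelope API cost for one developer's agent workload.
# All prices and throughput figures are hypothetical, for illustration only.
INPUT_PRICE_PER_MTOK = 15.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 75.00   # assumed $ per million output tokens

def daily_cost(input_tok_per_min, output_tok_per_min, active_hours):
    """Dollar cost of a pipeline streaming tokens continuously all day."""
    minutes = active_hours * 60
    input_cost = input_tok_per_min * minutes / 1e6 * INPUT_PRICE_PER_MTOK
    output_cost = output_tok_per_min * minutes / 1e6 * OUTPUT_PRICE_PER_MTOK
    return input_cost + output_cost

# e.g. an agent pipeline averaging 40k input / 4k output tokens per minute
# across an 8-hour workday:
print(f"${daily_cost(40_000, 4_000, 8):,.2f}")
```

Even at modest sustained throughput, an always-on pipeline lands in the hundreds of dollars per developer per day, which is the consumption pattern the article describes.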

The five-hour rate limits on Claude Code were a concrete symptom. Enterprise teams building continuous integration pipelines, overnight agent runs, and long-form refactoring workflows would hit the cap and have to restructure their work sessions. A coding assistant that interrupts itself because it ran out of capacity isn't competing well against alternatives that don't.

Anthropic was already locked into long-range capacity commitments: nearly 1 gigawatt from Amazon Web Services arriving by end of 2026, 5 gigawatts from Amazon total, another 5 gigawatts from Google and Broadcom targeted for 2027, a $30 billion Azure deal with Microsoft and Nvidia, and $50 billion in U.S. data center investment with Fluidstack. Those agreements are enormous. They're also future-dated.

Colossus 1 is different. It's built, operational, and coming online for Anthropic within the month. For a company growing 80-fold year over year, having 220,000 GPUs available now versus two years from now is a meaningful operational difference.

What Colossus Means for Claude Users and Rate Limits

Anthropic attached specific product changes to the compute announcement, unusual for an infrastructure deal. Most capacity acquisitions take months to affect user experience. These changes are listed as effective immediately.

Claude Code's five-hour rate limits are doubling for Pro, Max, Team, and Enterprise subscribers. The peak-hours capacity reduction previously applied to Pro and Max accounts is being eliminated entirely; those plans now get consistent capacity regardless of time of day. Claude Opus rate limits are also rising sharply for API customers, with some tiers reportedly seeing 1,500 percent increases in maximum input tokens per minute and 900 percent increases in output tokens per minute.
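One easy misreading of those figures: a "1,500 percent increase" means the new limit is 16 times the old one, not 15. A quick sketch with a hypothetical baseline (the announcement doesn't publish the actual tier limits, so the numbers below are placeholders):

```python
def apply_increase(limit, pct_increase):
    """New limit after a percentage *increase* (a 1,500% increase = 16x)."""
    return limit * (1 + pct_increase / 100)

# Hypothetical baseline of 200k input / 40k output tokens per minute:
print(apply_increase(200_000, 1_500))  # 16x the input baseline
print(apply_increase(40_000, 900))     # 10x the output baseline
```

Under those assumed baselines, the quoted increases would take a tier from 200k to 3.2M input tokens per minute and from 40k to 400k output tokens per minute.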

For enterprise teams that had built agentic workflows around the existing five-hour limit, doubling that cap changes what's architecturally possible in a single session. Teams running multi-agent pipelines through Claude Opus were working around rate limits as an infrastructure constraint. Removing peak-hours throttling means the capacity planning problem gets simpler: consistent behavior regardless of when in the day the workloads run.

The Colossus capacity isn't fully online yet; Anthropic says it arrives within the month. The immediate improvements likely draw on capacity already secured elsewhere. The Colossus lift adds to that over the coming weeks.

As for the Elon Musk factor: the public history between Musk and Anthropic made this deal harder to predict than a straightforward capacity transaction. Musk called Anthropic a company that "hates Western civilization" in February. His own AI venture, xAI, competes directly with Anthropic through Grok. Both companies compete for the same enterprise accounts, the same developer ecosystems, and the same GPU supply.

Then the public position shifted. Musk posted that he had spent significant time with Anthropic's senior team and was "impressed." The deal followed two days later. Neither company has explained the precise sequence. The practical explanation is transactional: SpaceX has infrastructure it needs to monetize, Anthropic has demand it needs to fill. What matters is the structural implication: Anthropic is treating compute as a pure commodity, independent of competitive dynamics at the model layer. If the frontier AI market matures this way, expect more unusual partnerships between companies that compete on products but cooperate on infrastructure.

The Infrastructure Race and IPO Context Behind the Deal

The Colossus deal joins a compute portfolio that now spans nearly every major provider. Anthropic's $200 billion Google Cloud commitment shows how long this multi-provider strategy has been building. Add Amazon, Microsoft, Fluidstack, and now SpaceX/xAI, and Anthropic has structured relationships with almost every significant AI compute source in existence.

That's a deliberate hedge against concentrated supply risk. If any single provider hits capacity constraints, others absorb the load. If a new compute source becomes available with favorable economics, Anthropic has shown it will pursue the deal regardless of competitive history. OpenAI's infrastructure runs primarily through its exclusive Microsoft relationship. Anthropic's is distributed. One approach offers simplicity; the other offers resilience at the cost of coordination complexity.

OpenAI recently hit 10 gigawatts of committed compute, ahead of its 2029 target. Both companies are in an infrastructure race that runs parallel to the model quality race. The company that can serve the most requests, at the most consistent latency, at prices enterprise customers will renew at, wins market share regardless of benchmark leaderboards.

The announcement also included a detail most coverage skipped: Anthropic has "expressed interest" in working with SpaceX to develop orbiting AI data centers. This is speculative. Cooling in orbit differs fundamentally from cooling on the ground, and power delivery, maintenance logistics, and round-trip latency all require solutions that don't fully exist yet. But its inclusion in a press release about a Memphis data center deal communicates something about how both parties are thinking. If orbital compute becomes viable in the 2030s, Anthropic is in the conversation early.

The IPO timing matters too. Anthropic is expected to go public later in 2026, with Goldman Sachs, JPMorgan, and Morgan Stanley reportedly in early discussions. At implied valuations in the hundreds of billions, investors need to understand not just that the company is growing but that it can sustain growth against supply constraints.

The sequence of recent announcements reads like deliberate narrative construction. Revenue crossing $30 billion annualized. 80-fold year-over-year growth in Q1. Infrastructure agreements with every major compute provider. The Colossus deal lands at the top of that stack, demonstrating Anthropic's ability to move fast, work around complicated relationship histories, and secure compute from sources competitors wouldn't have predicted.

Amodei's 80x growth disclosure and the SpaceX announcement arrived in the same week. The timing wasn't accidental. Together they frame a company with unprecedented demand, the infrastructure strategy to match it, and a team executing both simultaneously.

For teams navigating what these infrastructure shifts mean for their own AI stack, AI Infrastructure in 2026: Chips, Cloud, and Capacity Choices tracks how the provider landscape is evolving and what the major capacity decisions look like from a buyer's perspective.

The practical question for the next 30 days: do the rate limit improvements hold and expand as Colossus capacity comes online? Anthropic has committed to that outcome. If it delivers, the developer experience on Claude Code changes in ways that matter for enterprise retention, and the competitive distance between Anthropic and its rivals in the agentic coding market gets harder to close.
