The AI company Anthropic is doubling down on AI infrastructure, securing multi-gigawatt compute capacity in a major expansion of its partnerships with Google and Broadcom.

The deal centers on Tensor Processing Units (TPUs) – specialized chips developed by Google for machine learning and AI workloads. Among the first processors designed specifically for this domain, TPUs are built around large-scale matrix multiplication and can process massive datasets in parallel, which significantly accelerates the training of large language models.
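To make that concrete, the core operation TPUs are built to accelerate is a dense matrix multiplication – the workhorse of transformer training. Below is a minimal CPU-side sketch in NumPy (the shapes and names are illustrative, not Anthropic's or Google's code); on a TPU, the same operation is dispatched to dedicated matrix-multiply hardware at far larger scale.

```python
import numpy as np

# Toy stand-in for one layer's weight multiply in a neural network:
# activations (batch x hidden) times a weight matrix (hidden x hidden).
batch, hidden = 8, 512
activations = np.random.rand(batch, hidden).astype(np.float32)
weights = np.random.rand(hidden, hidden).astype(np.float32)

# A single dense matmul – the class of operation TPU matrix units
# execute in parallel across huge batches during LLM training.
output = activations @ weights
print(output.shape)  # (8, 512)
```

Training a frontier model repeats operations like this trillions of times, which is why hardware specialized for exactly this pattern matters.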

According to Anthropic, the new capacity is expected to come online starting in 2027.

Anthropic CFO Krishna Rao commented:

“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development… We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”

Why do these deals matter?

The partnership is aimed at supporting Anthropic’s frontier Claude models, demand for which has surged in 2026. The company’s annualized revenue has surpassed $30 billion – more than triple its approximately $9 billion revenue in 2025. Anthropic also noted that at the time of its Series G funding announcement in February, over 500 enterprise customers were spending more than $1 million annually. That figure has since doubled in less than two months, exceeding 1,000.

Anthropic had previously announced an increase in TPU capacity in October as part of its collaboration with Google Cloud. The new agreement further deepens its partnerships with both Broadcom and Google.

The company also highlighted that Claude is trained and deployed across a range of AI hardware, including AWS Trainium, Google TPUs, and Nvidia GPUs. Leveraging multiple platforms allows Anthropic to match workloads to the most suitable chips while improving performance and reliability for customers. At the same time, Amazon remains its primary cloud provider and training partner, with continued collaboration under Project Rainier.

Most of the new compute capacity will be located in the United States, in line with Anthropic’s commitment to invest $50 billion in strengthening American computing infrastructure. This partnership significantly expands that initiative.

Claude remains the only AI model available across all three of the world’s largest cloud platforms – Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).

The shift away from Nvidia GPU dominance, and Broadcom’s role

It is important to note that the largest AI developers, including Anthropic and OpenAI, remain heavily reliant on Nvidia GPUs. However, these companies are actively exploring alternatives.

In response to this shift, Broadcom has announced two major agreements. The first is with Google to manufacture next-generation AI chips through 2031. The second partnership involves providing Anthropic with access to approximately 3.5 gigawatts of compute capacity powered by Google’s AI processors. According to BigGo Finance, this collaboration could generate up to $42 billion in AI-related revenue for Broadcom by 2027.

These developments highlight a broader industry effort to reduce dependence on Nvidia’s hardware ecosystem, and Broadcom in particular is positioning itself as a key player in the evolving AI infrastructure landscape. Through its partnership with Google, the company is strengthening its role as a supplier of AI hardware and expanding the TPU ecosystem. At the same time, its collaboration with Anthropic lets Broadcom tap into high-revenue opportunities by supporting the development of advanced AI models. Reports also indicate that Broadcom is working not only with Anthropic but also with OpenAI, despite the competitive dynamic between the two.

Broadcom also notes that the final scale of compute capacity expansion for Anthropic will depend on the company’s performance and growth trajectory.

The race for AI leadership is increasingly shifting toward specialized and efficient hardware that underpins the broader AI infrastructure stack.