Anthropic Expands Partnership on TPUs with Google Cloud

October 23, 2025 — San Francisco, CA: Anthropic has announced a major expansion of its long-term partnership with Google Cloud, including plans to utilize up to one million Tensor Processing Units (TPUs) to power its next generation of AI research and product development.

The expanded agreement, valued in the tens of billions of dollars, represents one of the largest known AI infrastructure expansions to date. Anthropic expects the move to bring over a gigawatt of compute capacity online by 2026, significantly scaling its ability to train and deploy advanced models such as Claude.

“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Thomas Kurian, CEO of Google Cloud. “We are continuing to innovate and drive further efficiencies and increased capacity of our TPUs, building on our already mature AI accelerator portfolio, including our seventh-generation TPU, Ironwood.”

Fueling Frontier AI Development

Anthropic’s expanded use of Google’s AI infrastructure will dramatically increase its available compute power, enabling faster model training, more rigorous testing, and large-scale alignment research to ensure the responsible evolution of AI systems.

The company, now serving over 300,000 business customers, has seen its number of large enterprise accounts—those generating more than $100,000 in annual run-rate revenue—grow nearly sevenfold in the past year.

“Anthropic and Google have a longstanding partnership, and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Krishna Rao, CFO of Anthropic. “Our customers—from Fortune 500 companies to AI-native startups—depend on Claude for their most important work. This expanded capacity ensures we can meet exponentially growing demand while keeping our models at the cutting edge.”

A Multi-Cloud, Multi-Chip Strategy

While expanding its use of Google Cloud’s TPUs, Anthropic emphasized that it remains committed to a diversified compute strategy spanning multiple chip architectures and cloud partners.

The company continues to leverage:

  • Google Cloud TPUs for high-efficiency training and inference workloads.

  • Amazon’s Trainium chips, as part of its partnership with AWS.

  • NVIDIA GPUs for flexibility across research and production environments.

This multi-platform approach, Anthropic says, helps ensure continued scalability, reliability, and resilience across its global infrastructure.

Anthropic also reaffirmed its ongoing collaboration with Amazon on Project Rainier, a massive AI compute cluster featuring hundreds of thousands of chips across multiple U.S. data centers.

Pushing the Boundaries of Responsible AI

Anthropic noted that the new compute expansion will not only enable larger models but also support alignment research and responsible deployment at scale — areas the company views as essential to advancing safe and beneficial AI systems.

The additional TPU capacity will allow Anthropic to run larger experiments and stress-test systems more thoroughly, accelerating innovation while upholding its commitment to safety, interpretability, and transparency in AI development.

“This investment ensures we can advance Claude’s capabilities while maintaining our commitment to safety and reliability at every scale,” the company said in a statement.

About Anthropic

Founded in 2021, Anthropic is an AI research company dedicated to developing reliable, interpretable, and steerable AI systems. Its flagship product, Claude, powers AI experiences for hundreds of thousands of businesses worldwide and serves as a foundational model across industries including technology, finance, legal, and customer service.

Source: Anthropic Official Blog