CoreWeave, an American AI and cloud computing company, has expanded its agreement with Meta Platforms, Inc. to provide additional AI cloud capacity through December 2032, in a deal valued at approximately US$21 billion. CoreWeave described the agreement as part of broader demand for the large-scale compute infrastructure used in AI development and deployment.
According to a press release, CoreWeave will supply dedicated infrastructure to support Meta’s AI training and inference workloads under the agreement. The capacity will be deployed across multiple locations and will include early deployments of the NVIDIA Vera Rubin platform. The distributed setup is intended to meet Meta’s performance requirements as it scales its AI systems.
Michael Intrator, co-founder, CEO, and chairman of CoreWeave, said, “This is another example that leading companies are choosing CoreWeave’s AI cloud to run their most demanding workloads.”
The arrangement extends an existing relationship between the two companies and increases the amount of compute capacity available to Meta for developing and running AI models at scale.
Unlike typical hyperscaler cloud agreements, the deal is structured as a long-term reservation of AI compute capacity tied to infrastructure delivery, rather than broad, elastic cloud consumption. The commitment functions more like a GPU supply arrangement embedded within a cloud services contract, with usage dependent on CoreWeave’s delivery of physical compute infrastructure over time.