Meta in Discussions Over $200 Billion AI Data Center Project – Who Will Benefit?

Wallstreet Insight | Wednesday, Feb 26, 2025 12:39 am ET
2 min read

Meta Platforms is in discussions to build a new AI data center project worth $200 billion, even as Microsoft leases out its data centers and concerns grow about overcapacity in AI computing power.

Meta's executives have informed data center developers that the company is considering building the campus in states such as Louisiana, Wyoming, or Texas, with senior executives having visited potential sites this month.

Meta has arguably been one of the biggest beneficiaries of the AI wave. As a social media giant, it has deepened user engagement through AI-powered personalized recommendations and improved ad targeting. In the fourth quarter, its revenue grew 21% year-on-year to $48.4 billion, while net profit surged 49% to $20.8 billion. Due to the emergence of DeepSeek, investors are increasingly confident that Meta can optimize its own large models, lowering training costs while maximizing efficiency.

Having tasted the benefits of AI, Meta has announced that its capital expenditures will reach $60-65 billion this year, focused on generative AI and its core business. Zuckerberg further emphasized that Meta will invest hundreds of billions of dollars in AI infrastructure in the coming years. "I continue to think that investing heavily in CapEx and infrastructure will be a strategic advantage over time," he said.

Compared to SoftBank, Oracle, and OpenAI's $500 billion Stargate partnership, Meta appears to be a more reliable player, especially considering the scale, and it is expected to go it alone on this plan. In terms of beneficiaries, while Meta has been purchasing a significant amount of Nvidia's general-purpose GPU computing power, it will also supplement this with AMD's chips. Additionally, Meta will invest heavily in ASICs through its collaborations with Broadcom and Arm.

Zuckerberg has previously stated that Meta held the equivalent of 600,000 Nvidia H100 GPUs at the end of last year, and the number is expected to reach 1.3 million GPUs this year. Considering the high cost of Nvidia's current computing chips, it is clear that Meta won't put all its eggs in one basket, nor does it need to rely solely on the more efficient H200 or B200 models. As a complement, Meta is also purchasing AMD's MI300X GPUs for AI inference.

In addition to general-purpose chips, Meta's earnings report indicated that it will accelerate the adoption of MTIA chips developed in collaboration with Broadcom. These chips, which are designed to integrate with Meta's computing clusters for training and inference, are specifically optimized for ad ranking and recommendations and will likely be much cheaper than general-purpose GPUs. Companies like Google, Apple, OpenAI, and Amazon are already adopting self-developed ASIC chips in collaboration with Broadcom. These chips are more tailored to each company's specific algorithm needs and also serve as leverage in negotiations with Nvidia.

Furthermore, there's the matter of CPUs. While GPUs currently dominate the training of large models, CPUs still play a significant role. In February, it was reported that Arm, a subsidiary of SoftBank, is accelerating the development of CPU platforms for large data center servers based on customizable designs. Meta has become Arm's first customer for these platforms, and Arm may offer similar customized solutions to other large-scale data center clients in the future.

Overall, although the AI boom has somewhat subsided, demand for computing power from major tech companies continues to surge, because as large models are refined, more computing power can be put toward real-world applications. While Nvidia remains the leader in this field, its profit margins of 60-70% will push major companies toward a more flexible approach to chip sourcing, benefiting the industry as a whole.