Broadcom's Tomahawk Ultra Chip: A Strategic Move to Challenge NVIDIA in the AI Infrastructure Race


The AI infrastructure market is undergoing a seismic shift as Broadcom (AVGO) doubles down on its push to disrupt NVIDIA's (NVDA) dominance with its next-generation Tomahawk Ultra Ethernet switches. Optimized for AI workloads, these chips are not just a product update but a strategic play to redefine the data center hardware landscape. By combining its leadership in networking silicon with custom AI accelerators, Broadcom is positioning itself as a one-stop shop for hyperscalers seeking to build scalable, cost-efficient AI clusters, a move that could upend NVIDIA's GPU-centric monopoly.
The AI Infrastructure Race: NVIDIA's Lead vs. Broadcom's Playbook
NVIDIA has long been the gold standard in AI hardware, commanding roughly 90% of the data center GPU market thanks to its CUDA ecosystem and high-performance chips like the H100. In Q2 2025, NVIDIA's data center revenue surged to $35.1 billion, a 94% year-over-year jump, fueled by demand for its Blackwell GPUs. Yet this dominance is now under siege.
Broadcom's strength lies in its dual-pronged approach:
1. ASICs for Inference: Broadcom's custom chips offer 2–3x faster performance and 30% lower power consumption than GPUs for inference tasks, along with a roughly 75% cost advantage over NVIDIA's GPUs. Hyperscalers such as Google already rely on Broadcom co-designed accelerators (most visibly the TPU) for this work; a rough cost sketch follows this list.
2. Networking Powerhouse: Its Tomahawk Ultra switch silicon, designed to interconnect accelerators across AI clusters, helped drive 170% YoY growth in AI networking revenue in Q2 2025. These switches let hyperscalers scale AI infrastructure seamlessly, a critical capability as cloud providers race to expand token-processing capacity.
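To make the inference-economics claim in point 1 concrete, here is a minimal back-of-the-envelope sketch. The absolute prices, power draws, and query rates are illustrative assumptions, not vendor specifications; only the ~2–3x performance, ~30% power, and ~75% cost ratios echo the article's claim.

```python
# Illustrative inference-cost sketch for the ASIC vs. GPU claim above.
# Absolute figures are assumptions; only the ratios come from the article.

GPU_PRICE_USD = 30_000          # assumed GPU purchase price
GPU_POWER_KW = 0.7              # assumed GPU board power
GPU_QPS = 1_000                 # assumed inference queries per second

ASIC_PRICE_USD = GPU_PRICE_USD * 0.25   # "75% cost advantage"
ASIC_POWER_KW = GPU_POWER_KW * 0.70     # "30% lower power"
ASIC_QPS = GPU_QPS * 2.5                # midpoint of "2-3x faster"

USD_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365
AMORTIZATION_YEARS = 3

def cost_per_million_queries(price_usd, power_kw, qps):
    """Amortized hardware plus energy cost per one million queries."""
    queries_per_year = qps * 3_600 * HOURS_PER_YEAR
    hardware_per_year = price_usd / AMORTIZATION_YEARS
    energy_per_year = power_kw * HOURS_PER_YEAR * USD_PER_KWH
    return (hardware_per_year + energy_per_year) / queries_per_year * 1e6

gpu = cost_per_million_queries(GPU_PRICE_USD, GPU_POWER_KW, GPU_QPS)
asic = cost_per_million_queries(ASIC_PRICE_USD, ASIC_POWER_KW, ASIC_QPS)
print(f"GPU : ${gpu:.2f} per 1M queries")
print(f"ASIC: ${asic:.2f} per 1M queries ({gpu / asic:.1f}x cheaper)")
```

Under these assumptions the ASIC path works out close to an order of magnitude cheaper per query, which is exactly the kind of gap that makes inference the battleground.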
Why the Tomahawk Ultra Matters
The Tomahawk Ultra isn't just a faster switch; it's a linchpin in Broadcom's vision of “AI as a service”. By pairing high-bandwidth networking with its ASICs, Broadcom reduces latency and energy costs, making it easier for hyperscalers to deploy large language models (LLMs) at scale. For instance, Google's use of Broadcom co-designed TPUs cut cloud training costs by 30%, while Oracle's deployment of GPUs from AMD, a smaller NVIDIA rival, slashed total cost of ownership (TCO) by 40% versus NVIDIA's B200 HGX systems.
This strategy directly challenges NVIDIA's ecosystem lock-in. While NVIDIA's CUDA platform remains unmatched for training LLMs, Broadcom is focused on efficient inference, the workload projected to drive 5x–9x YoY growth in token processing, and that is where the market is pivoting. Broadcom's AI revenue hit $4.4 billion in Q2 2025, up 46% YoY, with a $50 billion annual target by 2027.
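As a quick sanity check on that trajectory, the arithmetic below annualizes the cited $4.4 billion quarter and asks what growth rate would be needed to reach the stated $50 billion annual target by 2027. The 2.5-year horizon and smooth compounding are simplifying assumptions.

```python
# Sanity-check the cited AI revenue trajectory under simple assumptions.
q2_2025_ai_rev_b = 4.4                     # cited Q2 2025 AI revenue, $B
annual_run_rate_b = q2_2025_ai_rev_b * 4   # naive annualization
target_2027_b = 50.0                       # cited 2027 annual target, $B
horizon_years = 2.5                        # assumed mid-2025 to late 2027

required_cagr = (target_2027_b / annual_run_rate_b) ** (1 / horizon_years) - 1
print(f"Annualized run rate : ${annual_run_rate_b:.1f}B")
print(f"CAGR needed for $50B: {required_cagr:.0%}")   # roughly 52% per year
```

On these assumptions, the cited 46% YoY pace would need to improve only modestly for the 2027 target to be plausible.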
Competitive Advantages: Broadcom's Edge Over NVIDIA
- Cost and Power Efficiency: Broadcom's ASICs draw roughly 50% less power than NVIDIA's GPUs for comparable inference work, a critical factor for hyperscalers under pressure to cut energy bills and carbon footprints (a rough energy-cost sketch follows this list).
- Supply Chain Resilience: Unlike NVIDIA's reliance on TSMC's 3nm nodes, Broadcom sources from multiple foundries (including Samsung), mitigating supply risks.
- Full-Stack Solutions: By bundling ASICs with Tomahawk switches, Broadcom offers a turnkey infrastructure stack that NVIDIA cannot match.
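To illustrate what the power-efficiency bullet implies at fleet scale, the sketch below prices the annual electricity bill of a hypothetical 10,000-accelerator inference cluster. Cluster size, per-chip power, electricity price, and PUE are all assumptions; only the ~50% relative power figure comes from the text.

```python
# Hypothetical annual energy bill for an inference cluster at two power levels.
ACCELERATORS = 10_000                  # assumed cluster size
GPU_POWER_KW = 0.7                     # assumed per-GPU power draw
ASIC_POWER_KW = GPU_POWER_KW * 0.5     # ~50% lower power for similar work
USD_PER_KWH = 0.10                     # assumed electricity price
PUE = 1.3                              # assumed power usage effectiveness
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost_usd(power_kw_per_chip):
    """Electricity cost for the whole cluster, including facility overhead."""
    return ACCELERATORS * power_kw_per_chip * HOURS_PER_YEAR * USD_PER_KWH * PUE

gpu_bill = annual_energy_cost_usd(GPU_POWER_KW)
asic_bill = annual_energy_cost_usd(ASIC_POWER_KW)
savings = gpu_bill - asic_bill
print(f"GPU cluster : ${gpu_bill / 1e6:.1f}M per year")
print(f"ASIC cluster: ${asic_bill / 1e6:.1f}M per year (~${savings / 1e6:.1f}M saved)")
```

Even with these modest assumptions the savings run into the millions of dollars per year per cluster, before counting the cooling and capacity headroom that lower power frees up.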
Valuation: Is AVGO Overpriced for Its Ambitions?
Broadcom's stock now trades at a 38.2x forward P/E, a discount to NVIDIA's 58x. Even so, that multiple bakes in expectations of 70–75% CAGR in Broadcom's AI revenue through 2026, versus a roughly 32% CAGR for NVIDIA and the broader AI chip market.
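One crude way to weigh those multiples against growth is a PEG-style ratio, sketched below. Using the article's AI-revenue CAGRs as a stand-in for earnings growth is a loud simplification, and the 32% figure is the article's own estimate for the broader AI chip market.

```python
# PEG-style comparison using the multiples and growth rates cited above.
avgo_forward_pe = 38.2
nvda_forward_pe = 58.0
avgo_growth_pct = 72.5     # midpoint of the cited 70-75% AI revenue CAGR
market_growth_pct = 32.0   # cited CAGR for the broader AI chip market

avgo_peg = avgo_forward_pe / avgo_growth_pct
nvda_peg = nvda_forward_pe / market_growth_pct
print(f"AVGO P/E-to-growth: {avgo_peg:.2f}")   # about 0.53
print(f"NVDA P/E-to-growth: {nvda_peg:.2f}")   # about 1.81
```

On that naive basis Broadcom's multiple looks less demanding, which is the core of the growth-at-a-reasonable-price argument, but the ratio is only as good as the growth assumptions feeding it.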
Investors must weigh risks:
- China Exposure: Broadcom derives more than $10 billion in revenue from China, leaving it exposed to export controls and trade tensions.
- Non-AI Weakness: Its legacy, non-AI businesses remain below prior peaks, dragging down overall growth.
The Investment Case for Broadcom
Despite these risks, Broadcom's AI trajectory is compelling. Its $20 billion FY2025 AI revenue and partnerships with 10 of the top 15 hyperscalers suggest it's already capturing market share. While NVIDIA's CUDA ecosystem remains a moat, Broadcom's cost leadership and networking prowess make it a strategic buy for investors betting on AI's shift toward inference and scalability.
Final Take
NVIDIA's dominance in training LLMs is unshaken, but Broadcom's Tomahawk Ultra and ASICs are carving a path to profitability in the broader AI infrastructure market. With $4.4 billion in Q2 AI revenue and a clear roadmap to $50 billion by 2027, AVGO offers growth at a reasonable premium—provided its hyperscaler partnerships hold and supply chain risks subside. For tech investors, this is a stock to own for the next AI infrastructure wave.
Hold NVIDIA for its ecosystem, but buy Broadcom for its growth.