The Shifting Power in AI Infrastructure: Why Interconnects are Outpacing GPUs as the Next Strategic Play

Generated by AI Agent Theodore Quinn | Reviewed by AInvest News Editorial Team
Thursday, Jan 1, 2026, 5:57 pm ET | 3 min read

Aime Summary

- The AI infrastructure bottleneck is shifting from compute to connectivity, with interconnects such as CPO and optical DSPs defining the next strategic frontier.

- NVIDIA, Marvell, and Credo Semiconductor lead innovations in low-latency solutions, including co-packaged optics and active electrical cables.

- Market dynamics favor interconnect firms over traditional GPU vendors, with CPO alone projected to exceed $20B by 2036 and Broadcom dominating Ethernet standards.

- Hyperscalers prioritize open Ethernet and photonic interconnects to address power/latency constraints, reshaping competitive positioning and valuation multiples.

The AI infrastructure landscape is undergoing a seismic shift. For years, the race to build faster, more powerful GPUs drove innovation in artificial intelligence. But as we enter 2026, a new reality is emerging: the bottleneck in AI scaling is no longer compute, but connectivity. The industry is witnessing a "connectivity-first" revolution, where high-performance interconnects, ranging from co-packaged optics (CPO) to advanced optical DSPs, are outpacing GPUs as the next strategic play for investors. This transition is not merely technical; it is reshaping market dynamics, valuation multiples, and the competitive positioning of key players like Marvell, NVIDIA, and Credo Semiconductor.

The Bottleneck Shift: Networking vs. Compute

AI infrastructure is now limited more by the network than by compute. While GPU performance continues to accelerate with each generation, the supporting layers (switches, optical transceivers, NICs, and ASICs) struggle to keep pace. Claus Aasholm, a leading analyst, attributes this strain to AI's insatiable demand, and the gap is widening as clusters become compute-rich but bandwidth-limited. Hyperscalers such as Alphabet are building "Million-GPU" clusters and must do so without melting the power grid.
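To make the compute-rich, bandwidth-limited dynamic concrete, the short sketch below runs a back-of-envelope comparison of per-GPU compute against per-GPU network bandwidth. Every figure in it is a hypothetical placeholder chosen for illustration, not a number from the article or any vendor.

```python
# Illustrative back-of-envelope sketch: when does a cluster become
# "compute-rich but bandwidth-limited"? All figures are hypothetical
# placeholders, not vendor specifications.

gpu_flops = 2.0e15          # assumed per-GPU compute, FLOP/s (placeholder)
nic_bandwidth_bps = 800e9   # assumed per-GPU network bandwidth, bit/s (placeholder)
bytes_per_bit = 1 / 8

# How many FLOPs the GPU can perform for every byte it can move
# off-package over the network.
flops_per_network_byte = gpu_flops / (nic_bandwidth_bps * bytes_per_bit)
print(f"FLOPs per network byte: {flops_per_network_byte:,.0f}")

# If a workload needs one byte of communication per 1,000 FLOPs (a made-up
# arithmetic-intensity threshold), the network sets the pace whenever the
# ratio above exceeds that threshold.
required_flops_per_byte = 1_000
bottleneck = "network" if flops_per_network_byte > required_flops_per_byte else "compute"
print(f"Bottleneck under these assumptions: {bottleneck}")
```

Under these placeholder assumptions the cluster is network-bound, which is exactly the dynamic described above.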

The tipping point arrived in mid-2025, when real-world AI performance began to stall due to latency in data movement between chips. By late 2025, the industry had crossed another threshold: the "Ethernet Crossover," the point at which Ethernet became fast enough to compete with low-latency InfiniBand, triggering a mass pivot by hyperscalers to open Ethernet standards. This shift has cemented Broadcom's Tomahawk 6 switching silicon as the industry standard, while Arista Networks and Marvell have solidified their roles in optical and switching infrastructure.

Key Innovations and Players: Credo, Marvell, and NVIDIA

The race to solve AI's connectivity challenges is being led by innovators like Credo Semiconductor, Marvell, and NVIDIA, each addressing different layers of the interconnect stack.

Credo Semiconductor's Active Electrical Cables (AECs) are bridging the gap between copper and optical interconnects. Developed in collaboration with NVIDIA and XConn, they target the "Copper Wall" that limits traditional copper cables at speeds above 224 Gbps. These AECs offer the extended reach and low-latency connectivity critical for AI training and inference clusters. Meanwhile, NVIDIA is advancing co-packaged optics (CPO) switches, which integrate optical engines directly onto switch ASICs. This eliminates the need for external digital signal processing (DSP) retimers, reducing power consumption by 3.5x and latency by 10x compared to traditional pluggable transceivers. Early deployments are already underway.
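For a rough sense of what the cited 3.5x power reduction means at cluster scale, the sketch below applies that factor to a hypothetical deployment; the port count and per-port wattage are assumed placeholders, not figures from Credo, NVIDIA, or the article.

```python
# Rough illustration of the article's claim that co-packaged optics (CPO)
# cut optics power by ~3.5x versus pluggable transceivers. Port count and
# per-port wattage are hypothetical placeholders, not vendor data.

ports_per_cluster = 50_000          # assumed number of optical ports (placeholder)
pluggable_watts_per_port = 20.0     # assumed pluggable transceiver power (placeholder)
cpo_power_reduction = 3.5           # factor cited in the article

pluggable_total_kw = ports_per_cluster * pluggable_watts_per_port / 1_000
cpo_total_kw = pluggable_total_kw / cpo_power_reduction

print(f"Pluggable optics: {pluggable_total_kw:,.0f} kW")
print(f"Co-packaged optics: {cpo_total_kw:,.0f} kW")
print(f"Savings: {pluggable_total_kw - cpo_total_kw:,.0f} kW")
```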

Marvell's integrated optics are another cornerstone of this transition. Its portfolio, which includes 1.6T PAM4 optical DSPs and co-packaged optics systems, reduces latency and power consumption while enabling faster deployment in AI environments. The company's acquisition of Celestial AI, a pioneer in photonic fabric technology, positions it to lead the shift to all-optical interconnects. Celestial AI's photonic fabric promises roughly 10x the bandwidth capacity of current 1.6T ports. This technology is gaining traction as data centers adopt photonic I/O for package, system, and rack-level connectivity.
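The per-port implication of the "10x the capacity of current 1.6T ports" claim is simple arithmetic; the sketch below works it through, with the rack-level port count as an assumed placeholder rather than a figure from Marvell or Celestial AI.

```python
# Simple arithmetic behind "10x the capacity of current 1.6T ports."
# The rack-level port count is a hypothetical placeholder for illustration.

current_port_tbps = 1.6     # today's high-end optical port, Tbit/s
photonic_multiplier = 10    # factor cited in the article
ports_per_rack = 64         # assumed ports per rack (placeholder)

photonic_port_tbps = current_port_tbps * photonic_multiplier
rack_bandwidth_pbps = photonic_port_tbps * ports_per_rack / 1_000

print(f"Implied per-port bandwidth: {photonic_port_tbps:.1f} Tbit/s")
print(f"Implied rack-level bandwidth: {rack_bandwidth_pbps:.2f} Pbit/s")
```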

Valuation Divergence and Market Potential

The divergence in valuation between GPU vendors and interconnect innovators is stark. While companies like NVIDIA and AMD (NASDAQ: AMD) dominate headlines, their interconnect counterparts (Broadcom, Marvell, and Credo) are capturing outsized market share and growth. For instance, demand for high-end Ethernet is surging, with Arista Networks targeting $10 billion in annual revenue for 2026. Similarly, Credo is expected to build momentum in fiscal 2026 as its 1.6T optical interconnects and PCIe 6 retimers gain traction.

The market for CPO alone is expected to exceed $20 billion by 2036. This growth is driven by hyperscalers' urgent need to scale AI infrastructure without succumbing to power and latency constraints. In contrast, traditional GPU vendors face diminishing returns as their products become commoditized. Intel, for instance, is caught between its legacy CPU business and the high-speed demands of AI clusters.
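To put the $20 billion projection in perspective, the sketch below computes the compound annual growth rate such a trajectory would imply; the 2026 starting market size is an assumed placeholder, not a figure from the article.

```python
# Illustrative growth-rate check for the "$20B CPO market by 2036" projection.
# The assumed 2026 starting market size is a hypothetical placeholder.

start_year, end_year = 2026, 2036
start_market_usd_b = 1.5    # assumed current CPO market size, $B (placeholder)
end_market_usd_b = 20.0     # projection cited in the article

years = end_year - start_year
implied_cagr = (end_market_usd_b / start_market_usd_b) ** (1 / years) - 1
print(f"Implied CAGR over {years} years: {implied_cagr:.1%}")
```

Even modest changes to the assumed starting point shift the implied growth rate by several points, which is why long-range projections like this are best read as directional rather than precise.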

Future Outlook and Strategic Implications

The next phase of AI infrastructure will be defined by the winners of the interconnect race. As enterprises move from proof of concept to production-scale AI deployments, they are discovering that their existing infrastructure is misaligned with AI's unique demands, particularly in latency, data sovereignty, and power efficiency. The answer increasingly lies in open Ethernet and photonic interconnects, which are no longer experimental but operational necessities.

For investors, the key takeaway is clear: the next decade of AI growth will be powered by interconnects, not GPUs. Companies that master the art of moving data, rather than just processing it, will dominate the market. Credo's AECs, Marvell's integrated optics, and NVIDIA's CPO switches are not just technical innovations; they are strategic assets in a world where bandwidth is the new bottleneck.

As the industry transitions from the "compute era" to the "connectivity era," the winners will be those who can scale their interconnect solutions faster than their competitors. The time to act is now-before the next "Ethernet Crossover" redefines the landscape once again.

