NVIDIA's Co-Packaged Optics: A Breakthrough in Data-Center Connectivity through Industry Collaboration

Tuesday, Aug 26, 2025 1:10 pm ET

NVIDIA is transforming data-center connectivity by integrating optical and electrical components in collaboration with industry partners. The company's co-packaged optics (CPO) platform brings together advanced technologies in photonic ICs, electronic ICs, fiber, packaging, connectors, and lasers. The Micro Ring Modulator silicon photonics engine, a key innovation, achieves high data throughput without sacrificing physical compactness. Collaboration with TSMC has helped solve manufacturing problems associated with the Micro Ring Modulator, enabling reliable and repeatable performance at production scale. The platform is designed to be energy-efficient and scalable, setting a new standard for optical interconnects in future AI factories.

NVIDIA has announced ambitious plans to revolutionize the interconnectivity of AI GPUs in data centers by leveraging light-based technology. By 2026, the company aims to integrate silicon photonics and co-packaged optics (CPO) into its next-generation AI platforms, promising higher transfer rates and lower power consumption. This innovative approach addresses the growing demands of large-scale AI deployments, positioning NVIDIA as a leader in the field.

The challenge of interconnecting thousands of GPUs to function as a single system has become increasingly apparent as AI clusters grow in scale and complexity. Traditional networking configurations, which rely on copper cables and pluggable optical modules, are reaching their limits in terms of speed, power efficiency, and scalability [1]. NVIDIA's solution involves relocating switches to the end of the row, creating a consistent, low-latency fabric across multiple racks. This architectural change necessitates optical connections for nearly all server-to-switch and switch-to-switch links, as copper becomes impractical at speeds like 800 Gb/s over extended distances [1].

At the heart of NVIDIA's innovation lies the adoption of co-packaged optics (CPO). This technology embeds the optical conversion engine alongside the switch ASIC, dramatically reducing electrical loss and power consumption [1]. NVIDIA reports that CPO offers significant advantages over traditional pluggable modules, including a 3.5x increase in power efficiency, a 64x improvement in signal integrity, and a 10x boost in resiliency due to fewer active devices [1]. CPO also enables faster deployment and simpler service and assembly, as illustrated below.
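To put the efficiency claim in perspective, the short sketch below estimates cluster-level interconnect power for pluggable versus co-packaged optics. Only the 3.5x ratio comes from NVIDIA's stated figures; the per-port wattage and link count are illustrative assumptions, not published specifications.

```python
# Back-of-the-envelope comparison of interconnect power: pluggable optics
# versus co-packaged optics, using the 3.5x efficiency figure cited above.
# The per-port wattage and link count are illustrative assumptions.

PLUGGABLE_WATTS_PER_PORT = 30.0   # assumed power of a pluggable 800 Gb/s module
CPO_EFFICIENCY_GAIN = 3.5         # power-efficiency improvement reported for CPO
NUM_OPTICAL_LINKS = 100_000       # assumed optical link count for a large AI cluster

pluggable_total_kw = PLUGGABLE_WATTS_PER_PORT * NUM_OPTICAL_LINKS / 1_000
cpo_total_kw = pluggable_total_kw / CPO_EFFICIENCY_GAIN

print(f"Pluggable optics:    {pluggable_total_kw:,.0f} kW")
print(f"Co-packaged optics:  {cpo_total_kw:,.0f} kW")
print(f"Estimated savings:   {pluggable_total_kw - cpo_total_kw:,.0f} kW")
```

Under these assumptions, a cluster with 100,000 optical links would shave roughly two megawatts of interconnect power, which is why the per-link efficiency gain matters at AI-factory scale.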

NVIDIA's roadmap includes two major platforms leveraging CPO technology: Quantum-X InfiniBand switches and Spectrum-X Photonics. The Quantum-X InfiniBand switches, scheduled for early 2026, will deliver 115 Tb/s of throughput, supporting 144 ports operating at 800 Gb/s each. The Spectrum-X Photonics platform, set to arrive in the second half of 2026, will utilize the Spectrum-6 ASIC, powering devices with up to 409.6 Tb/s of bandwidth and 512 ports at 800 Gb/s [1].
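The quoted port counts and aggregate throughputs are internally consistent, as the quick arithmetic check below shows (144 × 800 Gb/s = 115.2 Tb/s, rounded to 115 Tb/s in the announcement; 512 × 800 Gb/s = 409.6 Tb/s). The function name is just for illustration.

```python
# Sanity check on the aggregate switch throughput figures quoted above:
# ports x per-port rate should match the stated totals.

def aggregate_tbps(ports: int, gbps_per_port: int) -> float:
    """Return total switch throughput in Tb/s."""
    return ports * gbps_per_port / 1_000

quantum_x = aggregate_tbps(ports=144, gbps_per_port=800)   # Quantum-X InfiniBand
spectrum_x = aggregate_tbps(ports=512, gbps_per_port=800)  # Spectrum-X Photonics

print(f"Quantum-X InfiniBand: {quantum_x:.1f} Tb/s")   # 115.2 Tb/s
print(f"Spectrum-X Photonics: {spectrum_x:.1f} Tb/s")  # 409.6 Tb/s
```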

NVIDIA's development closely follows TSMC's Compact Universal Photonic Engine (COUPE) roadmap, which unfolds in three stages [1]. The first generation offers 1.6 Tb/s data transfer for OSFP connectors, the second generation enables 6.4 Tb/s at the motherboard level using CoWoS packaging, and the third generation aims for 12.8 Tb/s within processor packages.
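The roadmap quadruples per-engine bandwidth from the first to the second generation and doubles it again from the second to the third. The snippet below simply tabulates the stages described above; the dictionary labels are shorthand, not TSMC product names.

```python
# COUPE roadmap stages as described above, with the generation-over-generation
# bandwidth multiplier computed for each step.

coupe_roadmap_tbps = {
    "Gen 1 (OSFP connector)":        1.6,
    "Gen 2 (CoWoS, motherboard)":    6.4,
    "Gen 3 (in processor package)": 12.8,
}

previous = None
for stage, tbps in coupe_roadmap_tbps.items():
    scale = f" ({tbps / previous:.0f}x over previous)" if previous else ""
    print(f"{stage}: {tbps} Tb/s{scale}")
    previous = tbps
```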

NVIDIA emphasizes that co-packaged optics are not just an optional enhancement but a structural requirement for future AI data centers. The company envisions that its CPO-based switches will power new AI clusters for increasingly sophisticated generative AI applications, offering improvements in key metrics such as time-to-turn-on, time-to-first-token, and long-term reliability [1]. By eliminating thousands of discrete components, these new clusters promise faster installation, easier servicing, and reduced power consumption per connection.

NVIDIA's investment in light-based GPU interconnects represents a significant step forward in addressing the growing demands of large-scale AI deployments. The success of this technology could reshape the landscape of AI data centers in the coming years.

References:
[1] https://theoutpost.ai/news-story/nvidia-unveils-plans-for-light-based-gpu-interconnects-by-2026-revolutionizing-ai-data-centers-19465/
[2] https://www.tomshardware.com/networking/nvidia-outlines-plans-for-using-light-for-communication-between-ai-gpus-by-2026-silicon-photonics-and-co-packaged-optics-may-become-mandatory-for-next-gen-ai-data-centers
