Nvidia's Computex 2025 Keynote: AI Infrastructure Revolution, Growth Expected in the Second Half of 2025

Morgan Stanley, in a report released on May 19, stated that while Nvidia's keynote speech at Computex 2025 did not reveal any major surprises, the company's path to regaining growth in the second half of the year is clear. The report highlighted that several mid-term concerns that had been troubling the market were being addressed one by one, including customer digestion cycles, GB200 bottlenecks, and ecosystem collaboration issues. Nvidia is expected to return to strong growth by the second half of 2025.
During his two-hour keynote speech at Computex 2025, Nvidia's CEO, Jensen Huang, outlined a vision for an emerging era of AI factories. He described how traditional data centers, once used for conventional applications, have evolved into AI data centers that function as intelligent factories, converting electrical power into "tokens." Huang emphasized that Nvidia is no longer just a technology company but an AI infrastructure company, marking the third major infrastructure revolution after electricity and the internet—the era of intelligent infrastructure.
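Huang's "AI factory" framing treats a data center as a plant whose input is electricity and whose output is tokens. One way to make that concrete is a tokens-per-energy metric; the sketch below is purely illustrative, and every number in it is a hypothetical placeholder rather than a figure from the keynote or the report.

```python
def tokens_per_megawatt_hour(tokens_per_second: float, power_mw: float) -> float:
    """Tokens produced per megawatt-hour consumed: one way to quantify
    an 'AI factory' that converts electrical power into tokens."""
    seconds_per_hour = 3600
    # (tokens/s * s/h) / MW = tokens per MWh
    return tokens_per_second * seconds_per_hour / power_mw

# Hypothetical facility: 1e9 tokens/s of aggregate throughput at 50 MW.
print(f"{tokens_per_megawatt_hour(1e9, 50):.2e} tokens per MWh")
```

By this metric, hardware or software improvements that raise token throughput without raising power draw directly increase the "factory's" output per unit of energy.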
Huang introduced several key products, including the Grace Blackwell GB200 superchip, which pairs a Grace CPU with Blackwell GPUs in a dual-chip package. In its NVL72 rack configuration, the NVLink Spine interconnect links 72 GPUs into what is effectively a "virtual giant chip," with performance Huang compared to the 2018 Sierra supercomputer. Huang also unveiled the NVLink Fusion program, which allows other manufacturers' CPUs, ASICs, and custom accelerators to integrate seamlessly with Nvidia's GPUs. The program provides NVLink chiplets and interface IP, enabling customizable infrastructure combinations.
The NVLink Fusion architecture is designed to address communication-speed bottlenecks between GPUs and CPUs in AI servers, a significant barrier to scalability. It offers higher bandwidth and lower latency than standard PCIe interfaces, with a bandwidth advantage of up to 14 times. The technology is already being adopted by companies such as Fujitsu and Qualcomm, which are integrating NVLink functionality into their own CPU designs. Nvidia has also attracted custom-silicon designers including MediaTek, Marvell, and Alchip, whose custom AI accelerators will work in tandem with Nvidia's Grace CPU.
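The "up to 14 times" figure is consistent with publicly quoted link speeds. The sketch below checks the arithmetic; the two bandwidth constants are assumptions based on commonly cited figures (roughly 1.8 TB/s of bidirectional NVLink bandwidth per Blackwell GPU versus roughly 128 GB/s bidirectional for a PCIe Gen5 x16 link), not numbers taken from the article.

```python
# Assumed figures, not from the article:
NVLINK5_GBPS = 1800    # ~1.8 TB/s bidirectional per GPU (fifth-gen NVLink)
PCIE5_X16_GBPS = 128   # ~128 GB/s bidirectional (PCIe Gen5 x16)

ratio = NVLINK5_GBPS / PCIE5_X16_GBPS
print(f"NVLink advantage over PCIe Gen5 x16: ~{ratio:.0f}x")
```

With these assumed figures the ratio works out to roughly 14, matching the claim in the text.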
Huang also announced the DGX Spark and DGX Station, personal AI supercomputers aimed at AI researchers who want to own their own supercomputing power. These devices are designed to be plug-and-play, running off standard household outlets. Additionally, Huang introduced the RTX Pro enterprise AI server, which supports traditional x86, hypervisor, and Windows IT workloads and can run graphical AI agents; Huang quipped that it can even run Crysis.
Nvidia's new AI storage architecture, which includes AI-Q, NeMo, and a GPU storage front end, is designed to handle the semantics of unstructured data, using GPUs for search, sorting, embedding, and indexing. The company is collaborating with Dell, Hitachi, IBM, NetApp, and VAST to deploy enterprise-level platforms.
Huang also highlighted the potential of robotics as the next trillion-dollar industry, driven by the Isaac GR00T platform and the Jetson Thor processor. Nvidia's Isaac operating system manages all neural-network processing, sensor processing, and data pipelines, leveraging pre-trained models developed by a team of professional robotics experts. Huang added that Nvidia is applying its AI models to autonomous vehicles, partnering with Mercedes-Benz to launch a fleet of cars equipped with Nvidia's end-to-end autonomous-driving technology.
Finally, Huang announced the Newton physics engine, developed in collaboration with DeepMind and Disney Research. The engine, which supports GPU acceleration and offers high-fidelity, faster-than-real-time simulation, will be open-sourced in July. Nvidia plans to integrate Newton into its Isaac simulator, enabling more realistic simulations for robotics applications.
Despite these advancements, Nvidia faces several short-term challenges. Shipments of certain products are subject to U.S. Department of Commerce export-license review and could be blocked outright, putting over 500 million dollars of potential revenue at risk. Additionally, the slow rollout of the GB200 poses a risk to the company's stock price. However, Morgan Stanley's report indicates that many of Nvidia's mid-term concerns are being resolved, including stabilizing customer digestion cycles, improving collaboration between cloud providers and LLM providers, and alleviating GB200 bottlenecks.
Nvidia has also announced a collaboration with Foxconn and the Taiwan authorities to build a new supercomputer equipped with 10,000 Blackwell GPUs, with TSMC among its primary users for research and development. Nvidia also plans to establish a new office in Taiwan to further strengthen its ecosystem collaborations.
