AMD's “Advancing AI 2025” Unveils Aggressive Push Into AI Hardware and Systems Market

Jay's Insight · Thursday, Jun 12, 2025 3:31 pm ET · 2 min read

Advanced Micro Devices (AMD) used its “Advancing AI 2025” event in San Jose to signal just how serious it is about chasing Nvidia’s lead in AI accelerators and full-stack data center solutions. The company unveiled a suite of next-gen products, announced new partnerships with AI and hyperscaler heavyweights, and highlighted its growing momentum in silicon, systems, and software. While shares slipped from $121 to $118, much of that was likely technical—a rejection at the 200-day moving average ($124) has traders eyeing a pullback zone rather than a negative fundamental shift.


At the heart of AMD’s announcement was the launch of the MI350X and MI355X accelerators, its latest AI chips boasting up to 288GB of HBM3E memory and 8TB/s memory bandwidth. These chips, powered by the new CDNA 4 architecture and built on TSMC’s advanced N3P node, promise up to 4.2x the performance of the MI300X in AI workloads. AMD says inference performance has improved by 35x over the previous generation, and early benchmarks show competitive parity—or even slight outperformance—against Nvidia’s B200 and GB200 in both training and inference tasks, particularly at FP4 and FP6 precision levels.
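A quick back-of-the-envelope derivation from the quoted specs (288GB of HBM3E, 8TB/s of bandwidth) shows why that memory configuration matters for inference: streaming the full memory once takes a fixed minimum time, which bounds each decoding step of a memory-bound model whose weights fill HBM. The derived figure below is not an AMD-published number, just arithmetic on the specs above.

```python
# Derived from the MI350-series specs quoted above:
# 288 GB HBM3E capacity, 8 TB/s memory bandwidth.
hbm_capacity_gb = 288
bandwidth_tb_s = 8

# Minimum time to read the entire HBM contents once, in milliseconds.
# For memory-bandwidth-bound inference, this is a floor on the time
# per decoding step when the weights occupy all of HBM.
sweep_ms = hbm_capacity_gb / (bandwidth_tb_s * 1000) * 1000
print(f"Full-HBM sweep: {sweep_ms:.0f} ms")  # 288 / 8000 s = 36 ms
```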


CEO Lisa Su introduced several complementary hardware systems, including the MI400 series expected in 2026 and the “Helios AI Rack,” a liquid-cooled setup that connects up to 72 GPUs and delivers 2.9 exaflops of FP4 compute. In parallel, AMD introduced “Vulcano,” a purpose-built networking chip aimed at optimizing data flow in high-bandwidth AI systems, and unveiled new ROCm 7 software, an attempt to close the ecosystem gap with Nvidia’s dominant CUDA platform.
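The Helios figures quoted above (72 GPUs, 2.9 exaflops of FP4 compute) imply a per-GPU throughput that AMD did not state directly; the sketch below derives it from the rack-level numbers, so treat the per-GPU value as arithmetic, not a published spec.

```python
# Rack-level figures from the Helios AI Rack announcement cited above.
rack_exaflops_fp4 = 2.9
gpus_per_rack = 72

# Implied per-GPU FP4 throughput (derived, not an AMD-published spec):
# 2.9 EFLOPS = 2900 PFLOPS, spread across 72 GPUs.
petaflops_per_gpu = rack_exaflops_fp4 * 1000 / gpus_per_rack
print(f"Implied FP4 throughput per GPU: ~{petaflops_per_gpu:.0f} PFLOPS")
```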

AMD is not going it alone. Major players including Meta, Oracle, OpenAI, Supermicro, and even Elon Musk’s xAI are partnering with AMD to deploy these chips. Oracle will run clusters of up to 131,000 MI355X chips in its cloud, while Meta is using the MI350 for training recommendation models. OpenAI CEO Sam Altman appeared on stage, praising AMD’s hardware and noting the explosion in demand for long-context models, saying “we need tons of computers, tons of memory, and tons of CPUs as well.” Altman’s presence underscored AMD’s progress in gaining credibility with tier-one AI developers.

The event also highlighted AMD’s quiet but significant acquisition strategy: the company has invested in or acquired 25 AI-related startups in the last year. This includes server-builder ZT Systems, chip team Untether AI, and talent from Lamini. These moves aim to build out AMD’s AI stack from silicon to software to systems—mirroring Nvidia’s approach. The company also launched AMD Developer Cloud, offering developers instant access to AMD’s chips for testing and scaling applications.

Still, the road ahead is challenging. ROCm software continues to lag Nvidia’s CUDA in adoption and developer traction, and AMD remains years behind in terms of total ecosystem maturity. Cantor Fitzgerald acknowledged the bullish roadmap by raising its price target on AMD shares from $120 to $140 but cautioned that the true revenue impact of these chips may not materialize until 2026. It sees potential for $10+ EPS by 2028 if AMD captures just 5–10% of a projected $500B AI chip market, but also expects near-term volatility as the Instinct ramp progresses.
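Cantor Fitzgerald's market-share scenario reduces to simple arithmetic, sketched below. Only the revenue range follows directly from the figures cited above; getting from that revenue to the $10+ EPS figure would require margin and share-count assumptions the article does not give, so the sketch stops at revenue.

```python
# Cantor Fitzgerald scenario cited above: 5-10% of a projected
# $500B AI chip market. Revenue range only; the EPS translation
# depends on margins and share count not stated in the article.
market_size_b = 500  # projected AI chip market size, in $B

for share in (0.05, 0.10):
    revenue_b = market_size_b * share
    print(f"{share:.0%} share -> ${revenue_b:.0f}B in AI chip revenue")
```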

Performance metrics shared by AMD showed a measured confidence: in certain Llama model inference tasks, an 8-GPU MI355X node performed 1.2–1.3x better than Nvidia’s comparable 8-GPU GB200 HGX setup. Training performance was more evenly matched, but AMD claimed up to a 1.13x edge on select workloads. Perhaps most critically, AMD emphasized improvements in total cost of ownership (TCO), claiming its denser racks with advanced cooling can deliver more compute per dollar—a message clearly aimed at hyperscalers watching both power and capex closely.

With shares retreating modestly despite the deluge of product announcements and marquee partnerships, the market appears to be in a “show me” phase. The 200-day moving average remains a key technical barrier, but the story is building: AMD is no longer just trying to sell chips; it’s pushing toward being a full-stack AI systems provider. That ambition carries execution risk—but also the potential for substantial reward if the company can convert design wins into durable revenue streams.

In sum, AMD’s Advancing AI 2025 event showcased a company hitting the gas on every front—chips, systems, software, and partnerships. While it’s not yet Nvidia’s equal, AMD is no longer a distant second. And with performance metrics tightening and hyperscaler support growing, investors would do well to keep this pullback on their radar.
