Nvidia's market cap surged to $3.4 trillion on Monday as investors cheered the sky-high prices of its new Blackwell architecture GPUs, which tech giants are still eagerly snapping up despite the cost. The pricing has left Wall Street stunned.
Nvidia's DGX B200 Blackwell AI server has been listed online by server solution provider Broadberry with a rather astonishing price tag of $515,410.
Full specs include:
Built with eight Nvidia Blackwell GPUs
1,440GB total GPU memory, with 64TB/s HBM3e bandwidth
72 petaFLOPS FP8 training and 144 petaFLOPS FP4 inference
Nvidia networking
Foundation of Nvidia DGX BasePOD and DGX SuperPOD
Includes Nvidia AI Enterprise and Nvidia Base Command™ software
40% more expensive than the previous generation
Even with high expectations, Wall Street was struck by the pricing of the DGX B200. "We believe the 40%+ higher pricing of the DGX B200 versus DGX H100 systems could be considered higher than expected," analyst Aaron Rakers wrote in a note to clients. "This level of pricing could ease concerns over Nvidia's [gross margin percentage] dynamics."
Rumors of design issues with the Blackwell architecture GPU, which could delay mass production and increase costs, have raised concerns. However, Nvidia stated in its earnings call that the AI chip will be mass-produced and shipped in the fourth quarter, contributing billions of dollars in revenue. CEO Jensen Huang emphasized that demand for Blackwell AI processors is "insane," with customers eager to acquire them quickly and requesting larger quantities.
Morgan Stanley analysts believe that despite significant but easily fixable design issues causing low yields, Nvidia will produce around 450,000 Blackwell GPUs this year. If accurate, and if the company successfully sells these products, it could mean over $10 billion in revenue. With the DGX B200 pricing revealed, each B200 GPU averages around $64,000—far exceeding Morgan Stanley's estimate of $22,000. This is partly because Nvidia prefers to sell entire systems rather than individual GPUs.
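The arithmetic behind those figures can be checked directly. A minimal sketch, using only the numbers reported above (the Broadberry list price, the eight-GPU system configuration, and Morgan Stanley's unit and price estimates):

```python
# Back-of-the-envelope check of the figures cited in the article.
# All inputs come from the article itself; nothing here is an official Nvidia price.
DGX_B200_LIST_PRICE = 515_410  # Broadberry listing, USD
GPUS_PER_SYSTEM = 8            # DGX B200 ships with eight Blackwell GPUs

# Implied average price per GPU when sold as part of a full system
per_gpu = DGX_B200_LIST_PRICE / GPUS_PER_SYSTEM
print(f"Implied price per B200 GPU: ${per_gpu:,.0f}")  # ~ $64,426

# Morgan Stanley's scenario: ~450,000 Blackwell GPUs at ~$22,000 each
MS_GPU_ESTIMATE = 450_000
MS_PRICE_ESTIMATE = 22_000  # USD per GPU
revenue = MS_GPU_ESTIMATE * MS_PRICE_ESTIMATE
print(f"Revenue at Morgan Stanley's estimates: ${revenue / 1e9:.1f} billion")
```

The system-based figure ($64,426 per GPU) is roughly three times Morgan Stanley's $22,000 estimate, consistent with the article's point that Nvidia captures more value by selling complete systems rather than bare GPUs.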
Global Tech Giants' AI Race: Competing to Acquire Nvidia Chips
Bank of America stated on Monday that Nvidia's Blackwell architecture GPU release comes amid an AI arms race among cloud hyperscalers like Microsoft, Amazon, and Alphabet.
"Capex estimates continue to rise: since March, for every $1 of upward revision in 2024 sales estimates for hyperscalers, there has been a $19 upward revision in capex estimates," Bank of America said in a note.
Last Friday, Goldman Sachs raised Nvidia's target stock price from $135 to $150, indicating a potential 9% increase from current levels. The firm noted that Nvidia's competitive advantage relies on its vast installed base, which fuels a virtuous cycle that attracts more developers. The company's innovation in data centers, chip-level technologies, and its robust software offerings are also key factors.
Despite concerns that ASICs from tech giants, like Apple's use of Google's TPU to train Apple Intelligence, could challenge Nvidia's dominance, Citigroup maintains confidence in the company.
"Nvidia is still the king," Citigroup wrote in a report. Due to limitations in hardware use cases, building custom silicon may only be suited for large hyperscalers. However, this does not diminish the need for GPUs in both training and inference, as GPUs remain at the forefront of performance benchmarks and offer scalability for AI infrastructure, the firm said.
Citigroup also pointed out that Nvidia's current forward P/E ratio is about 35, below its three-year and five-year averages, indicating that the stock is not severely overvalued.