Elon Musk officially launches the Colossus AI training supercluster, plans to add another 100,000 Nvidia (NVDA.US) GPUs
xAI, the artificial intelligence startup founded by Tesla (TSLA.US) CEO Elon Musk, launched its massive artificial intelligence training system Colossus over the weekend.
Musk said in a post on X that the AI training cluster is powered by 100,000 Nvidia (NVDA.US) H100 GPUs and took 122 days to build.
"Colossus is the most powerful AI training system in the world," Musk said. "Moreover, the scale of this system will double in a few months to 200,000 GPUs (of which 50,000 are H200). Our team, Nvidia, and our many partners/suppliers did an outstanding job."
The launch of the system underscores the escalating race for AI supremacy, which is driving up demand for Nvidia's coveted processors.
Earlier this year, Meta Platforms (META.US) CEO Mark Zuckerberg said in a post on Instagram that his company planned to have 350,000 H100 GPUs in its AI infrastructure by the end of 2024. Meta released Llama 3.1 405B this summer.
xAI launched its Grok chatbot last year to compete with OpenAI's Microsoft (MSFT.US)-backed ChatGPT, Google's (GOOGL.US) Gemini, and Anthropic's Claude. The company released test versions of Grok-2 and Grok-2 mini last month.
"We've positioned ourselves at the forefront of AI development with the launch of Grok-2," xAI said in a blog post. "Our focus is on elevating core reasoning capabilities through our new compute cluster. We'll be sharing more of our development in the coming months."
xAI was valued at $24 billion in its Series B financing round this spring. Musk is also pushing for Tesla to invest $5 billion in xAI.