AInvest Newsletter
Daily stocks & crypto headlines, free to your inbox
A recent Morgan Stanley report argues that markets significantly underestimate the gains AI could deliver in the coming years, pointing to a dramatic leap in AI capability driven by exponential increases in computing power by 2026. The report, whose authors include analyst Stephen C. Byrd, notes that major U.S. developers of large language models (LLMs) plan to expand their computing power tenfold by the end of 2025, with substantial results expected in the first half of 2026.
The report frames this surge in compute investment as an underrecognized catalyst for the AI sector, and cites the view of Tesla CEO Elon Musk that such an investment could roughly double the "intelligence" of AI models. If current scaling laws hold, the impact could be seismic, rippling through asset valuations across AI infrastructure and global supply chains.
This optimistic outlook is tempered by significant uncertainties, however, chief among them the risk of a "scaling wall": a scenario in which further increases in computational power yield diminishing returns in AI capability, undercutting the anticipated advances.
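The tension between scaling optimism and the "scaling wall" can be made concrete with a toy power-law curve. The sketch below is purely illustrative, not from the Morgan Stanley report: the constants `a` and `b` are hypothetical, chosen only to show why each successive 10x increase in compute buys a smaller absolute improvement even when scaling laws continue to hold.

```python
# Illustrative sketch of power-law scaling and diminishing returns.
# The constants a and b are hypothetical, for illustration only.

def loss(compute: float, a: float = 10.0, b: float = 0.2) -> float:
    """Hypothetical power-law loss curve: L(C) = a * C**(-b).

    Lower loss stands in for 'more capable'; compute is in arbitrary units.
    """
    return a * compute ** -b

# Each 10x step in compute improves loss by a constant *ratio* (10**-b),
# so the absolute gain shrinks at every step as the curve flattens.
gains = []
prev = loss(1.0)
for exponent in range(1, 5):
    cur = loss(10.0 ** exponent)
    gains.append(prev - cur)
    prev = cur

print(gains)  # each successive gain is smaller than the last
```

A true "scaling wall" would be harsher still: the curve would flatten entirely past some threshold, so further compute would buy essentially nothing.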
The report encourages investors to prepare for a substantial potential leap in AI capability by 2026. It details the computational scale ahead: data centers powered by Nvidia Blackwell GPUs with capacity exceeding 5,000 exaFLOPs, far surpassing the U.S. government's "Frontier" supercomputer.
Despite a consensus among many LLM developers that increased computational power could enhance capabilities, skepticism persists. Critics argue that the intelligence, creativity, and problem-solving abilities of advanced models might have intrinsic limits.
The "scaling wall" hypothesis holds that beyond a certain computational threshold, models stop improving meaningfully in intelligence or creativity, producing disappointing outcomes. The report treats this as one of the AI sector's greatest uncertainties.
Nevertheless, there are promising signs. A study by teams from Meta, Virginia Tech, and Cerebras Systems found no observable performance degradation, or "model collapse," when synthetic data was used for large-scale training. This suggests models can keep improving even with substantial increases in compute, lowering the likelihood of hitting a "scaling wall."
The report also identifies other critical risks, including financing challenges for AI infrastructure, regulatory pressures in regions such as the EU, potential power constraints at data centers, and the risk of LLM misuse or weaponization.
If AI capabilities do make a nonlinear jump, how might that reshape asset values? The report advises investors to begin assessing the multifaceted impacts on asset valuation, outlining four core areas that warrant attention.
