Altman redefines AGI, citing OpenAI's limitations.

Monday, Aug 11, 2025, 7:21 am ET

Sam Altman, co-founder of OpenAI, has said that AGI (Artificial General Intelligence) is "not a super useful term" and that his company's newest model falls short of his own definition of AGI. In particular, Altman notes that the model cannot continuously learn on its own, a capability he considers a key characteristic of AGI.

The pursuit of AGI has gained significant traction in recent months, with tech giants like OpenAI and Google making substantial strides. However, the path to AGI remains fraught with uncertainty.

OpenAI, led by Sam Altman, has released GPT-5, an advanced model that will power ChatGPT. While Altman acknowledges the model as a significant step towards AGI, he emphasizes that it falls short of his definition because it cannot continue to learn and adapt after launch [1]. This limitation underscores the current state of AGI development: today's systems, while impressive, lack the autonomy required to perform full-time human jobs.

Google, for its part, has introduced Genie 3, a world model that can simulate realistic environments. Developed by Google DeepMind, the model is framed as a key step towards AGI because it allows AI systems to interact with convincing simulations of the real world. Genie 3 could be used to train robots and autonomous vehicles, underscoring its potential for developing autonomous agents [2]. Google has not yet released Genie 3 to the public, however, suggesting it remains at an early stage of development.

The race towards AGI is intense, with both US and Chinese tech companies investing heavily. OpenAI's annual recurring revenue has reached $13 billion, reflecting the commercial success of its AI models [1]. Meanwhile, Google, Meta, Microsoft, and Amazon are collectively spending nearly $400 billion on AI this year, more than the defense spending of EU members [1]. That scale of investment highlights the global contest for AI dominance, in which China is also making significant strides.

Despite the hype and investment, experts caution against unrealistic expectations. Benedict Evans, a tech analyst, describes AGI as a "thought experiment" and stresses the scientific uncertainty surrounding its development [1]. Even Aaron Rosenberg, a partner at Radical Ventures, acknowledges that while a more limited definition of AGI could be achieved around the end of the decade, both the definition and the feasibility of AGI remain fluid and subject to change [1].

The pursuit of AGI is not without its challenges. Ensuring that AI systems are reliable, transparent, and free of bias remains a critical concern. Experts like David Bader, director of the Institute for Data Science at the New Jersey Institute of Technology, argue that claims of superintelligence are more about competitive positioning than actual technical breakthroughs [1].

In conclusion, while the pursuit of AGI continues to captivate the tech industry, the path forward is far from settled. The investments and developments by tech giants like OpenAI and Google underscore the global race for AI dominance, but the feasibility and timeline of AGI remain open to debate.

References:
[1] https://www.theguardian.com/technology/2025/aug/09/its-missing-something-agi-superintelligence-and-a-race-for-the-future
[2] https://www.theguardian.com/technology/2025/aug/05/google-step-artificial-general-intelligence-deepmind-agi
