OpenAI's Polite Users Costing Millions, CEO Says
OpenAI CEO Sam Altman has revealed that users' polite interactions with ChatGPT, such as saying "please" and "thank you," are costing the company tens of millions of dollars. Altman made this statement on April 16, highlighting the unexpected financial implications of user behavior on AI systems. The additional tokens generated by polite language require extra computational resources to process, leading to higher energy consumption and increased costs for the company.
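To see how a handful of extra tokens per query can add up at scale, consider a rough back-of-envelope sketch. Every number below is an illustrative assumption, not an OpenAI figure: the token count, per-token price, and query volume are placeholders chosen only to show the arithmetic.

```python
# Rough sketch: how a few extra "polite" tokens per query scale into
# real money. All constants are illustrative assumptions, not OpenAI data.

POLITE_TOKENS_PER_QUERY = 4        # assumed: "please", "thank you", etc.
COST_PER_MILLION_TOKENS = 2.50     # assumed: $2.50 per million tokens processed
QUERIES_PER_DAY = 1_000_000_000    # assumed: 1 billion queries per day

def annual_politeness_cost() -> float:
    """Yearly cost (USD) of the assumed extra tokens across all queries."""
    extra_tokens_per_year = POLITE_TOKENS_PER_QUERY * QUERIES_PER_DAY * 365
    return extra_tokens_per_year / 1_000_000 * COST_PER_MILLION_TOKENS

print(f"Assumed annual cost of politeness: ${annual_politeness_cost():,.0f}")
```

Even with these conservative placeholder numbers, the total lands in the millions per year; plausible variations in query volume or per-token cost move it into the tens of millions Altman described.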
Altman's comments sparked a discussion about why users interact politely with AI models in the first place. Some believe that treating AI with courtesy is a moral imperative, while others cite personal development or fear that mistreating the bots could have consequences. Engineer Carl Youngblood, for instance, says courtesy toward AIs is a moral imperative for him, since callousness in daily interactions can cause one's interpersonal skills to atrophy.
The debate over ChatGPT's electricity consumption has also drawn interest. A research paper from Digiconomist founder and Bitcoin mining critic Alex de Vries states that a single ChatGPT query requires around three watt-hours of electricity. However, data analyst Josh You from AI research institute Epoch AI argues that this figure is an overestimate and that the true number is closer to 0.3 watt-hours, thanks to more efficient models and hardware than were available in 2023. Altman recently stated that the cost of AI output has been falling tenfold every year as models become more efficient.
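The gap between the two estimates is easy to quantify. The sketch below compares the annual electricity bill implied by each per-query figure; the daily query volume and electricity price are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope comparison of the two per-query energy estimates
# cited above (3 Wh vs. 0.3 Wh). Query volume and electricity price
# are illustrative assumptions, not reported figures.

QUERIES_PER_DAY = 1_000_000_000    # assumed: 1 billion queries per day
PRICE_PER_KWH_USD = 0.10           # assumed: $0.10 per kWh

def annual_energy_cost(wh_per_query: float) -> float:
    """Annual electricity cost in USD for a given per-query energy use."""
    kwh_per_year = wh_per_query * QUERIES_PER_DAY * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH_USD

high = annual_energy_cost(3.0)   # de Vries' estimate
low = annual_energy_cost(0.3)    # Epoch AI's estimate

print(f"3 Wh/query   -> ${high:,.0f} per year")
print(f"0.3 Wh/query -> ${low:,.0f} per year")
```

Under these assumptions the two estimates differ by a full order of magnitude in annual cost, which is why the dispute over the per-query figure matters.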
Despite the added costs, OpenAI expects to more than triple its revenue this year to $12.7 billion, even amid rising competition from other AI companies. However, the company does not expect to be cash-flow positive until 2029, when it projects revenue topping $125 billion. Until then, trimming unnecessary costs remains a priority, whether through algorithms that process user inputs more efficiently or strategies that reduce the computational load of generating responses.
