OpenAI Faces Governance Concerns Amid Profit Shift

Coin World · Friday, Jun 20, 2025 12:12 pm ET

The “OpenAI Files” report has brought to light significant concerns about governance, leadership, and safety culture within the influential AI lab, OpenAI. Compiled by two nonprofit watchdogs, the Midas Project and the Tech Oversight Project, the report draws on a variety of sources, including legal documents, media coverage, and insider accounts, to question the company’s commitment to safe AI development. As OpenAI transitions toward a more profit-driven model, the report calls for reforms to ensure ethical leadership and public accountability.

The report, described as the most comprehensive collection to date of documented concerns about governance practices, leadership integrity, and organizational culture at OpenAI, aims to raise awareness and propose a path forward for the company. It highlights issues with the leadership team, particularly CEO Sam Altman, who has become a polarizing figure within the industry. Altman was famously removed as CEO of OpenAI in November 2023 by the company’s non-profit board over concerns about his leadership and his communication with the board, particularly on matters of AI safety. He was reinstated after a chaotic week that included a mass employee revolt and a brief stint at Microsoft.

Several former OpenAI executives, including Mira Murati and Ilya Sutskever, have raised questions about Altman’s suitability for the role. According to press reports, former Chief Technology Officer Murati expressed discomfort with Altman leading the company toward Artificial General Intelligence (AGI), while Sutskever said he did not believe Altman should have the authority to control AGI. Dario and Daniela Amodei, formerly OpenAI’s VP of Research and VP of Safety and Policy, respectively, also criticized the company and Altman after leaving in 2020, describing Altman’s tactics as “gaslighting” and “psychological abuse.” Dario Amodei went on to co-found rival AI lab Anthropic and serve as its CEO.

Other prominent figures have publicly critiqued the company, including AI researcher Jan Leike, former co-lead of OpenAI’s super-alignment team. When Leike departed for Anthropic in 2024, he accused the company of letting safety culture and processes “take a backseat to shiny products.”

The report comes at a critical juncture for OpenAI, as the company attempts to move beyond its original capped-profit structure in pursuit of its for-profit aims. Currently, OpenAI is fully controlled by its non-profit board, which is bound by the company’s founding mission of ensuring that AI benefits all of humanity. This has created conflicting interests between the for-profit arm and the non-profit board as OpenAI seeks to commercialize its products. The original plan to spin OpenAI out as an independent, for-profit company was scrapped in May and replaced with a new approach: OpenAI’s for-profit group will become a public benefit corporation controlled by the non-profit.

The “OpenAI Files” aim to shed light on the internal workings of one of the most powerful tech companies and to propose a path forward for OpenAI centered on responsible governance and ethical leadership as the company pursues AGI. The report emphasizes that the governance structures and leadership integrity guiding a project of this importance must match the magnitude and gravity of the mission. It suggests that OpenAI could one day meet those standards, but only if serious changes are made.