The OpenAI Files: When the Race for Profit Threatens AI Safety



An explosive report titled "The OpenAI Files", published on June 19, 2025, highlights serious accusations made by former OpenAI employees. They claim that the company, once a pioneer in developing ethical and secure artificial intelligence, is now sacrificing its founding principles for the sake of profit-driven competition. As OpenAI, known for ChatGPT, prepares to become a for-profit entity, these revelations raise crucial questions about AI safety and the governance of technology giants.


An initial mission betrayed?

Founded in 2015 by figures like Sam Altman and Elon Musk, OpenAI had committed to developing artificial general intelligence (AGI) for the benefit of humanity. Its initial structure, a non-profit organization, limited investor profits to prioritize safety and ethics. However, according to the report, this promise has been progressively abandoned. Former employees, including Carroll Wainwright, denounce a “profound betrayal”: “The non-profit mission was a guarantee to do what is right when the stakes became high.”

The shift toward a for-profit model, initiated as early as 2018 and reinforced by massive fundraising ($40 billion in March 2025), marks a turning point. Critics point to a removal of profit caps, originally designed to prevent profits from taking precedence over safety. This change, combined with opaque practices, fuels concerns that the company prioritizes shareholder interests over public good.


Damning accusations

The report reveals several troubling practices:

  • Reduction in safety efforts: Teams dedicated to AI safety research have been dismantled, and OpenAI allegedly abandoned its commitment to dedicate 20% of its computing resources to such work. Jason Green-Lowe, of the Center for AI Policy, warns: “If OpenAI acts this way under nominal non-profit oversight, imagine its behavior once freed to maximize profits.”
  • Suppression of whistleblowers: Employees who express concerns are silenced, fearing for their jobs or savings.
  • Conflicts of interest: Sam Altman and board members allegedly hold stakes in startups linked to OpenAI, creating risks of biased decisions.
  • Misleading titles in SEC filings: Altman allegedly falsely claimed the title of Y Combinator president in official documents, without the organization’s approval.


Implications for AI safety

OpenAI’s transition toward a for-profit model raises concerns about the safety of advanced AI systems. Former employees, backed by experts like Geoffrey Hinton (Nobel laureate in Physics for his work on neural networks), fear that the race to commercialization will lead to reduced safety testing. Anish Tondwalkar, a former engineer, warns: “If OpenAI becomes a for-profit company, essential safeguards, such as the clause to assist rivals in case of a breakthrough toward AGI, could disappear overnight.”

The report also highlights the absence of independent oversight. Signatories are calling for robust governance, with mechanisms to protect whistleblowers and maintain OpenAI’s initial commitments. Without these safeguards, the development of superintelligent AI could pose existential risks.


A talent war that complicates the situation

In parallel, OpenAI is at the center of a fierce battle to attract the best AI talent. Sam Altman recently revealed that Meta was offering $100 million bonuses to poach its engineers, a practice he deems counterproductive to company culture. Yet OpenAI itself is not exempt from criticism: its equity structure, based on profit participation units (PPUs), limits employees’ ability to donate their shares to charities, which fuels internal tensions.

This talent war, combined with investor pressure and competition with players like Anthropic and xAI, pushes OpenAI to accelerate its developments, sometimes at the expense of caution.


What can be done to restore trust?

Faced with these challenges, several measures are proposed:

  1. Strengthen governance: Establish independent oversight to ensure that safety remains a priority, even within a for-profit framework.
  2. Protect whistleblowers: Create an environment where employees can report concerns without fear of retaliation.
  3. Maintain initial commitments: Restore profit caps and safety clauses, such as the obligation to assist safety-aligned projects.
  4. Increased transparency: Publish regular reports on safety efforts and potential conflicts of interest.
  5. State regulation: Former employees have urged the attorneys general of California and Delaware to block OpenAI’s restructuring, an effort supported by figures like Elon Musk.

A critical turning point for AI

The OpenAI Files affair highlights the tensions inherent in the commercialization of AI. As Sam Altman proclaims that “the age of superintelligence has begun,” dissenting voices remind us that without safeguards, this revolution could come at the expense of humanity. The OpenAI case illustrates a broader challenge: how can innovation, profit, and responsibility be reconciled in a sector where the stakes are colossal?

To follow developments in this controversy and discover the latest advances in AI, subscribe to the AI-Explorer.io newsletter.



