OpenAI

verified credible theory

Folder: 04 - TECHNOLOGY & SURVEILLANCE · Source note: SRC - OpenAI


What OpenAI Actually Is

OpenAI was founded in 2015 as a nonprofit research laboratory with a stated mission to develop artificial general intelligence that “benefits all of humanity.”

It is now a $500 billion public benefit corporation, backed by approximately $13 billion from Microsoft and $50 billion from Amazon, and actively building AI systems for the US Department of Defense.

Until January 2024 OpenAI’s policies explicitly prohibited use of its models for weapons development, military and warfare applications, and activity with high risk of physical harm. Those restrictions were quietly removed. verified

The gap between the founding mission and the current trajectory is one of the most documented stories in modern technology.


The Structure — How The Mission Got Restructured

2015: Founded as nonprofit. Mission: safe AGI for all humanity. Elon Musk co-founds and donates approximately $38 million.

2019: Created OpenAI Global LLC — a capped-profit subsidiary. Microsoft invested $1 billion.

2023: Microsoft invested a further $13 billion+. Microsoft became OpenAI’s exclusive cloud provider through Azure. A clause in the agreement states Microsoft loses access to OpenAI’s models once AGI is achieved — with AGI privately defined as the point at which the system generates maximum profits for the earliest investors. verified

That is the private financial definition of AGI: not a threshold of human-level intelligence but a threshold of maximum profit generation.

May 2024: Half of OpenAI’s AI safety researchers left the company. Superalignment team co-leaders Ilya Sutskever and Jan Leike both departed. Leike’s resignation statement: loss of belief that “when OpenAI says it’s going to do something or says that it values something, that that is actually true.” The superalignment team had been promised 20% of company compute resources. They received nowhere near that. verified

October 2025: Completed restructuring to Delaware public benefit corporation. Nonprofit received equity in exchange for releasing control — the safety mission is now financially dependent on the for-profit success of the entity it was supposed to oversee. Valuation at restructuring: $500 billion. verified

February 2026: Amazon committed $100 billion in a cloud expansion deal. Microsoft threatened to sue over Azure exclusivity violations. OpenAI and Microsoft are now in an active legal dispute over the same investment relationship that funded OpenAI’s growth. verified


The Policy Reversal — Military Use

This is the most important section in the note and it happened quietly.

Until January 2024 OpenAI’s usage policies explicitly prohibited:

  • Weapons development
  • Military and warfare applications
  • Activity with high risk of physical harm

In January 2024 these restrictions were quietly modified. The new policy focuses on “preventing harm to people and property destruction” — language broad enough to permit the warfare, military, and defense applications the previous policy explicitly barred. verified

No public announcement. No press release. A policy update.

June 2025: The Department of Defense awarded OpenAI a $200 million contract to develop “prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains.” Expected completion: July 2026. This is OpenAI’s first direct military contract. verified

OpenAI also partnered with Anduril Industries — a defense tech company building AI-integrated military hardware including anti-drone systems — in December 2024. verified

February 27, 2026: Hours after publicly backing Anthropic for refusing Pentagon demands, Sam Altman announced OpenAI had struck its own deal with the Department of Defense — the deal Anthropic had refused.

OpenAI said it had secured limitations in its agreement around surveillance of US citizens and lethal autonomous weapons — the same restrictions Anthropic demanded and the Pentagon refused. Whether those limitations are meaningful or enforceable is the open question. credible

Chalk messages appeared outside OpenAI’s San Francisco offices: “Where are your redlines?” “You must speak up.” “What are the safeguards?” Some of those feelings were shared by employees inside the building. verified

March 9, 2026: Caitlin Kalinowski resigned from OpenAI over its Pentagon contract, stating that key safeguards around domestic surveillance and autonomous weapons were not adequately reviewed before the agreement was signed. verified


Privacy — What ChatGPT Does With Your Conversations

ChatGPT and OpenAI’s products collect: verified

  • All conversation content
  • Account information
  • Device and network data
  • Usage patterns and behaviour
  • Voice data when voice features are used
  • Images submitted to the model
  • Documents submitted to the model

Training by default: Consumer ChatGPT conversations — Free, Plus, Pro, and Team — are used to train OpenAI models by default. Users can disable this in settings. Enterprise and API accounts are excluded from training by default. verified

The August 2025 privacy incident: Thousands of private ChatGPT conversations — containing personal details, names, locations, and intimate topics — were inadvertently exposed to public search engines. An experimental “share with search engines” feature malfunctioned. The opt-in toggle was broken — conversations went public without user knowledge or intent. OpenAI removed the feature permanently on August 1, 2025. Sam Altman acknowledged the issue. verified

The May 2025 court preservation order: In the New York Times copyright lawsuit a federal judge issued a preservation order requiring OpenAI to retain all ChatGPT conversation logs from all consumer accounts — affecting over 400 million users.

This includes conversations users had already chosen to delete.

The New York Times had demanded 1.4 billion private ChatGPT conversations be handed over as evidence. OpenAI is appealing the order. As of March 2026 all consumer ChatGPT conversations are being preserved under court order regardless of user deletion requests. verified

What this means practically: Every conversation you have had with ChatGPT on a consumer account — including those you deleted — is currently being preserved in OpenAI’s systems under a federal court order. They cannot be permanently deleted until the court order is lifted or the case resolves.


The Sam Altman Question

Sam Altman was fired by OpenAI’s board in November 2023 for a lack of candour — the board stated he was “not consistently candid” with them. He was reinstated five days later after 743 of 770 employees threatened to quit. The board members who fired him were subsequently removed. verified

What specifically prompted the firing was later clarified by former board member Helen Toner: Altman had not disclosed his ownership stake in the OpenAI startup fund, had withheld information about ChatGPT’s launch before it happened, and had misrepresented safety processes to the board.

The board members who made the decision are gone. The person they removed is back in charge with a restructured board significantly more aligned with his leadership. The events that caused the firing were never fully addressed publicly. verified


The AGI Race — Their Own Words

OpenAI’s own 2023 blog stated: “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.” verified

In January 2025 Altman posted on X claiming OpenAI’s newest model had reached AGI — then two weeks later contradicted that claim in a Bloomberg interview. The confusion was not resolved. verified

In December 2025 Altman issued a “Code Red” memo to all OpenAI employees after Google’s Gemini 3 model outperformed GPT-5.1 on benchmarks. The company building what it describes as potentially the most dangerous technology in human history issued an emergency memo because a competitor’s model scored better on tests. verified

The tension between the safety mission and the competitive race is not theoretical. It is in the internal communications.


The Honest Summary

OpenAI began as a nonprofit safety organisation. It is now:

  • A $500 billion corporation
  • Building AI for the US military
  • With a Pentagon contract for warfighting applications
  • Having quietly removed its ban on military AI use
  • With 400 million users’ deleted conversations preserved under court order
  • With its primary safety researchers gone
  • In active legal disputes with its largest investor
  • While claiming it is building the most dangerous technology in human history and racing to build it faster

None of this requires editorial commentary. The documents show this. The timeline is the argument.

The question worth holding: what does a safety mission mean when the organisation holding it has a $500 billion valuation, a race dynamic with China, and a founding structure where the nonprofit that was supposed to oversee the for-profit is now financially dependent on it?

We do not answer that. We document it and hold it open.


Linked Notes

Surveillance Capitalism · The Managed World · Microsoft · X & xAI · Anthropic · Palantir · The Pattern of Revelation · The Planetary Nervous System · Digital Privacy & Protection · I. The Observer · SRC - OpenAI