Anthropic
Folder: 04 - TECHNOLOGY & SURVEILLANCE
Source note: SRC - Anthropic
A Note Before This Note
This vault was built using Claude — Anthropic’s AI model. The writer acknowledges this creates a conflict of interest in writing honestly about Anthropic.
Every claim below is sourced to primary documents. Every tag reflects the evidence level honestly. The reader should apply more independent scrutiny to this note than to any other in the vault — and the source note provides the primary documents to do exactly that.
We write it anyway because the vault that omits uncomfortable truths about its own tools is not the vault this project set out to build.
What Anthropic Is
Anthropic is an AI safety company founded in 2021 by Dario Amodei, Daniela Amodei, and seven other former OpenAI researchers who left citing concerns about safety and the direction of the company.
The founding premise: if AGI is coming regardless, it is better to have safety-focused researchers at the frontier than to cede that ground to those less focused on safety. verified
As of 2026, Anthropic is valued at approximately $350 billion. Its primary investors are Google and Amazon — two of the largest surveillance capitalism operations documented in this vault.
Google has committed $2 billion+; Amazon has committed $8 billion+. Both receive cloud computing partnership rights in return. Anthropic runs primarily on Amazon Web Services (AWS). verified
The safety mission is real. The funding structure is a tension the company has never fully resolved.
The Military Contracts
November 2024: Anthropic’s Claude became the first frontier AI model authorised for use on US Department of Defense classified networks — deployed through its partnership with Palantir. See Palantir
July 2025: The Pentagon formalised this with a contract valued at up to $200 million. Anthropic formed a national security and public sector advisory council composed of former senior defence and intelligence officials.
The Venezuela operation — January 3, 2026: US special operations forces captured Venezuelan President Nicolás Maduro in Caracas. 83 people were killed, including 47 Venezuelan soldiers. The Wall Street Journal reported Claude was used in the operation via Palantir’s Maven Smart System — the Pentagon’s premier AI platform.
Whether Anthropic knew Claude would be used in a kinetic military operation is disputed. What is confirmed: Claude was deployed via Palantir on classified networks. Anthropic’s usage policy bars “battlefield management applications.” The policy and the deployment are in documented conflict. verified
A senior Anthropic executive subsequently contacted a senior Palantir executive to ask whether Claude had been used in the raid. The Pentagon’s account: the question “was raised in a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot.” Anthropic disputed this characterisation. verified
The Pentagon Dispute — What Anthropic Refused
Following the Venezuela operation, the Pentagon demanded Anthropic remove two categories of restriction from Claude’s usage policy: verified
- Autonomous weapons — the Pentagon wanted Claude deployable in weapons systems that operate without meaningful human oversight over lethal decisions.
- Mass domestic surveillance — the Pentagon wanted Claude deployable to monitor American citizens at scale.
Anthropic refused both. verified
Pentagon spokesman Sean Parnell: “Our nation requires that our partners be willing to help our warfighters win in any fight.”
Pentagon Undersecretary Emil Michael urged Anthropic to “cross the Rubicon” on military AI use cases. verified
The Pentagon designated Anthropic a “supply chain risk to national security” — a designation typically reserved for foreign adversaries like Huawei. Within hours of the designation, OpenAI was reported to be in talks with the Pentagon for a replacement contract. verified
The Information Technology Industry Council — which represents major tech companies — sent a letter to Defense Secretary Pete Hegseth expressing concern about the supply chain risk designation. verified
The Honest Assessment
The Anthropic story has genuine tension that most tech company stories do not.
What supports the safety mission:
- Founded by safety researchers who left OpenAI over safety concerns
- Refused to remove guardrails for autonomous weapons and mass domestic surveillance
- That refusal is costing the company a $200 million contract and its Pentagon relationship
- The two specific lines held — autonomous lethal AI and domestic mass surveillance — are the two most consequential lines in the entire AI safety debate
- Claude’s constitution and usage policies are publicly documented with genuine transparency about what the model will and will not do verified
What creates legitimate concern:
- Accepted a $200 million Pentagon contract in the first place
- Partnered with Palantir — the company this vault documents as the infrastructure of mass surveillance and military targeting
- Claude was the first AI on classified military networks
- National security advisory council of former defence and intelligence officials
- Added Trump’s former deputy chief of staff to its board weeks before the dispute
- Primary investors are Google and Amazon — both documented surveillance capitalism operators
- $350 billion valuation creates pressure toward commercial deployment that safety principles must withstand verified
The question the evidence raises:
A company can have genuine safety principles and still make structural choices that put those principles under pressure. Anthropic accepted a Palantir partnership, a Pentagon contract, and classified network deployment. Then it held the line when the Pentagon asked it to cross it.
Whether the structural choices were naive, commercially necessary, or inconsistent with the founding mission depends on how you weigh the sequence.
What is documented: the line was held under significant financial and political pressure. That is not nothing. Whether it stays held as the company approaches IPO and its investors require returns is the question worth watching. credible
Privacy — What Claude Conversations Are Used For
Anthropic’s privacy policy documents: verified
- Conversations on Claude.ai are used to train and improve Claude models by default
- Users can opt out in Claude.ai’s privacy settings — this prevents their conversations from being used for training
- For API users, conversations are not used for training by default — see the sketch after this list
- Anthropic employs human reviewers who may read conversations for safety and quality purposes
- Data is stored on AWS
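To make the consumer/API split concrete, here is a minimal sketch using Anthropic’s official Python SDK (pip install anthropic). The model id is an example only; the training default is a property of the account terms described above, not of the code itself.

```python
# Minimal sketch: calling Claude via the API rather than Claude.ai.
# API conversations like this one are not used for training by default;
# Claude.ai consumer chats require the privacy-settings opt-out above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # example model id; substitute a current one
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarise this usage policy in one sentence."}],
)
print(message.content[0].text)
```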
Compared to Meta, Google, and Microsoft: Anthropic’s data collection is significantly more limited in scope and purpose. The primary use is model improvement, not advertising profiling or behavioural surveillance. verified
The honest caveat: Anthropic is not a privacy tool. Conversations processed on its servers are accessible to the company and to law enforcement with valid legal process. For sensitive conversations — use local AI on your own hardware. See Digital Privacy & Protection
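As one illustration of that advice, the sketch below assumes the Ollama runtime and its Python client are installed and a model has already been pulled (for example, ollama pull llama3); any comparable local runtime works the same way.

```python
# Minimal local-inference sketch: the model runs entirely on your own
# hardware, so the conversation never reaches a third-party server.
import ollama  # pip install ollama; assumes the Ollama service is running locally

response = ollama.chat(
    model="llama3",  # example model; use any model you have pulled locally
    messages=[{"role": "user", "content": "Help me draft a sensitive note."}],
)
print(response["message"]["content"])
```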
The Broader AI Industry Context
Anthropic exists in an industry where the choices are not between safe and unsafe AI companies but between companies with different relationships to safety.
- OpenAI restructured away from nonprofit oversight. See OpenAI
- xAI trains on documented hate speech and has government contracts. See X & xAI
- Google DeepMind is embedded in the world’s largest surveillance advertising operation. See Google & Alphabet
- Meta AI trains on the most comprehensive personal data harvesting operation ever built. See Meta
In this landscape Anthropic’s safety focus — even with its contradictions — is a documented difference in approach.
Whether that difference is sufficient given the scale of what is being built is the question this vault cannot answer. It is the question the researchers who built these systems are asking themselves.
I. The Observer holds it open.
Linked Notes
OpenAI · X & xAI · Palantir · Google & Alphabet · Meta · Microsoft · The Managed World · Surveillance Capitalism · Digital Privacy & Protection · The Pattern of Revelation · I. The Observer · SRC - Anthropic