The Pentagon Claims That Anthropic's 'Soul' Creates a Supply-Chain Risk. That Makes No Sense
By Matt Novak
Published on March 13, 2026.
The Pentagon has designated Anthropic's AI model Claude a supply-chain risk, claiming that it poses a unique threat to American national security. Emil Michael, the Under Secretary of Defense for Research and Engineering, pointed to Anthropic's 'Soul' guide, an internal document that shapes Claude's personality and its interactions with users. The document, which was discovered by a user and made headlines last year, includes guidance such as "being truly helpful to humans is one of the most important things Claude can do for both Anthropic and the world."

In February, the Pentagon gave Anthropic an ultimatum: remove the guardrails that prohibit Claude from being used for mass domestic surveillance and in fully autonomous weapons, or face being labeled a supply-chain risk. Anthropic refused, and the Pentagon applied the designation, which has never before been used against a U.S. company. The company is now suing, and removing Claude from the Pentagon's systems is expected to take six months; Michael acknowledged that the process would not happen overnight.
Read Original Article