Authors: Yao Xu, Secretary-General of CGAIG and Associate Professor at FDDI
Li Chenyao, Research Assistant of CGAIG
Abstract: In July 2025, Dario Amodei appeared at a Pentagon press briefing, shaking hands with senior defense officials as it was announced that Claude had become the first commercial large language model to enter classified U.S. military networks. Earlier reporting by The Wall Street Journal indicated that, following an early-2026 raid targeting Venezuelan President Nicolás Maduro, Claude had been deeply integrated, via systems developed by Palantir Technologies, into intelligence processing and operational simulation. Anthropic subsequently sought confirmation from the Pentagon regarding the accuracy of those reports and conveyed that it could not accept the use of Claude for mass surveillance of U.S. citizens or for the autonomous selection of physical attack targets. This position reportedly triggered strong dissatisfaction within the Department of Defense.
On February 24, 2026, inside the Pentagon on the banks of the Potomac River, Secretary of the "Department of War" (formerly the Department of Defense) Pete Hegseth delivered an ultimatum to Amodei: by 17:01 local time on February 27, Anthropic must remove all guardrails restricting Claude's military applications, allowing the U.S. armed forces to employ the model for "all lawful purposes," including target identification, fire-control guidance, and large-scale surveillance. Should Anthropic refuse, the Pentagon would consider two measures. First, it could designate the company as a "supply chain risk," effectively severing its commercial ties with the federal system. Second, it could invoke the Cold War–era Defense Production Act to requisition the model in the name of national security and modify it to meet military requirements.
At approximately 4:00 p.m. local time on February 27, President Donald Trump ordered all U.S. government agencies to cease using Anthropic’s AI technologies and granted the Pentagon six months to phase out the company’s systems. After the expiration of the ultimatum, Hegseth publicly endorsed the president’s decision, stating that the Pentagon “must possess full and unrestricted access to the models developed by leading AI firms for all lawful purposes in defense of the Republic.” With this, the rare public confrontation between the federal government and Anthropic over AI safety reached its peak.

Anthropic's leadership is largely composed of early core members of OpenAI, who left in part out of concern that the pace of commercialization would overshadow the safety agenda. Anthropic has therefore long emphasized safety first in its external communications and regards model safeguards as a core part of the company's reputation.
Image source: Observer
Keywords: AI Safety Governance; Model Guardrails; Civil–Military Relations; Defense Production Act; Strategic Narrative Control
Original link: https://mp.weixin.qq.com/s/jLPSDXJhWD6bA17c8rMmew

