The US technology industry’s elite have rallied around Anthropic in its federal court battle against the Pentagon’s supply-chain risk designation, with Microsoft leading the charge by filing an amicus brief in San Francisco federal court. Microsoft’s filing called for a temporary restraining order and argued that the Pentagon’s action posed a serious threat to the technology networks that support both commercial and military operations. It was accompanied by a joint brief from Amazon, Google, Apple, and OpenAI, making this one of the most unified industry responses to a government action in recent memory.
Anthropic’s confrontation with the Pentagon began when the company refused to enter a $200 million contract without guarantees that its Claude AI would not be used for mass surveillance of Americans or to power weapons capable of acting without human control. Defense Secretary Pete Hegseth labeled the company a supply-chain risk after talks broke down, and the Pentagon’s technology chief later publicly ruled out any prospect of renegotiating the deal. The designation has already resulted in the cancellation of Anthropic’s government contracts.
Microsoft’s involvement in the case is rooted in its direct use of Anthropic’s AI in systems it provides to the federal government and its status as a key partner in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract. The company also has separate multibillion-dollar agreements with various government agencies. Microsoft urged cooperation between government and industry to ensure that AI advances national security goals without crossing ethical lines related to surveillance or autonomous warfare.
Anthropic has challenged the designation through two parallel lawsuits, one in California and one in Washington, DC, arguing that the supply-chain risk label was applied as unconstitutional retaliation for its public advocacy of responsible AI development. The company’s court filings revealed that it does not trust Claude to operate safely in lethal autonomous contexts, which it said was the technical and ethical basis for its contract demands. Anthropic also noted that no US firm had ever previously received such a designation.
Congress is also demanding answers about the role of AI in US military operations in Iran, where a strike reportedly killed more than 175 civilians at an elementary school. Lawmakers have sent formal letters to the Pentagon asking what role AI played in targeting and how much human oversight was exercised. The parallel legal and legislative actions are converging into a pivotal moment for the regulation of artificial intelligence in American national security policy.
