The legal confrontation between AI startup Anthropic and the US Department of Defense has drawn in one of the world’s most powerful technology companies, with Microsoft submitting a supporting legal brief in a San Francisco federal court. Microsoft’s filing argued that without a temporary restraining order, the damage to technology suppliers and government systems dependent on Anthropic’s AI would be severe and potentially irreversible. The move reflects how critical Anthropic’s technology has become to a wide range of commercial and government operations.
Anthropic’s legal challenge stems from a Pentagon decision to classify the company as a supply-chain risk following the breakdown of a $200 million contract negotiation. The company had insisted that its AI technology not be deployed for mass surveillance of US citizens or to autonomously direct lethal weapons, conditions the Pentagon rejected. Defense Secretary Pete Hegseth announced the designation, which had previously been applied only to firms with connections to hostile foreign governments.
Microsoft integrates Anthropic’s AI tools into systems it provides to the US military, making it a directly affected party in this dispute. The company is a key partner in the Pentagon’s $9 billion Joint Warfighting Cloud Capability contract and holds additional agreements worth several billion dollars more. Microsoft’s public statement called for a unified approach in which government, industry, and the American public work together to ensure the responsible use of AI in national security contexts.
Anthropic filed lawsuits in both a California federal court and the D.C. Circuit Court of Appeals on the same day, arguing that the Pentagon’s designation violated its First Amendment rights. The company stated in its legal filings that the supply-chain risk label was being used as ideological punishment for its public AI safety positions. Anthropic also disclosed in court documents that it does not currently believe Claude can function reliably or safely in lethal autonomous warfare situations.
The timing of this legal battle is particularly significant, as questions swirl around the use of AI in recent military strikes in Iran. House Democrats have formally asked the Pentagon whether artificial intelligence played a role in a strike that reportedly killed more than 175 civilians at an Iranian elementary school. The convergence of these issues has brought unprecedented public attention to the question of how, and under what ethical constraints, AI should be allowed to operate within military systems.
Microsoft Files Court Brief as Anthropic Takes Pentagon to Court Over Unprecedented AI Designation