A United States judge has temporarily halted the Pentagon’s blacklisting of Anthropic, marking a significant development in the company’s ongoing dispute with the military regarding the safety of artificial intelligence (AI) on the battlefield.
Anthropic, in its lawsuit filed in a California federal court, contends that U.S. Secretary of War Pete Hegseth exceeded his authority by labeling Anthropic a national security supply-chain risk without due process. The designation is a tool the government uses to flag companies whose products could expose military systems to infiltration or sabotage by adversaries.
The company alleges that the government’s actions violated its First Amendment right to free speech by retaliating against its stance on AI safety. Anthropic claims it was not afforded the opportunity to challenge the designation, thereby infringing upon its Fifth Amendment right to due process.
District Judge Rita Lin, who was appointed during former President Joe Biden’s tenure, sided with Anthropic in a 43-page ruling. The ruling will not take effect immediately, however: Lin gave the administration a seven-day grace period in which to appeal the decision.
Hegseth issued the designation after Anthropic objected to the military using its AI chatbot, Claude, for U.S. surveillance or in autonomous weaponry; the move has barred Anthropic from certain military contracts. The company’s executives say the restriction threatens substantial financial losses and damage to its reputation.
Anthropic asserts that AI models are not yet reliable enough for use in autonomous weapons, and it opposes domestic surveillance as an infringement of rights. The Pentagon, for its part, argues that private companies should not be able to impede military operations, while saying it has no interest in using AI technology for unauthorized purposes.
Judge Lin’s ruling found that the government’s actions appeared punitive toward Anthropic rather than serving the national security interests it claimed. Penalizing Anthropic for criticizing the government’s contracting position in the media, the judge wrote, constitutes unlawful First Amendment retaliation.
In response to the ruling, Anthropic spokesperson Danielle Cohen said the company was pleased with the decision and remains committed to working constructively with the government to deliver safe and dependable AI for the benefit of all Americans.
The designation of Anthropic as a supply-chain risk under a government procurement statute is unprecedented for a U.S. company. Anthropic’s legal challenge argues that the decision lacks a legal basis and factual support, and that it contradicts the military’s earlier praise of Claude.
The Justice Department countered that Anthropic’s reluctance to comply with contractual terms could create uncertainty within the Pentagon over how Claude may be used, potentially jeopardizing military operations. The government maintains that the designation was a consequence of Anthropic’s position on those contract terms, not of its AI safety concerns.
In a separate legal action in Washington, Anthropic is contesting another Pentagon supply-chain risk designation that could lead to its exclusion from civilian government contracts.
