Sunday, March 29, 2026

“Judge Halts Pentagon Blacklist Against Anthropic AI”

A U.S. judge on Thursday issued a temporary restraining order against the Pentagon’s blacklisting of Anthropic, the latest development in the company’s contentious battle with the military over the safety of artificial intelligence (AI) on the battlefield. In its lawsuit, filed in a California federal court, Anthropic alleges that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company a national security supply-chain risk without due process. That designation is reserved for companies the government believes may expose military systems to infiltration or sabotage by adversaries. Anthropic claims the action violated its First Amendment right to free speech and its Fifth Amendment right to due process, because it was given no opportunity to challenge the designation.

U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling but stayed her decision for seven days to give the administration time to appeal. Hegseth acted after Anthropic objected to its AI chatbot Claude being used for U.S. surveillance or autonomous weaponry, and the designation excluded the company from certain military contracts. Anthropic executives have warned that the exclusion could cause significant financial losses and reputational damage.

Anthropic argues that AI models are not yet reliable enough to be safely used in autonomous weapons, and it opposes domestic surveillance as a violation of rights. The Pentagon counters that private companies should not dictate military actions, though it has also said it is not interested in the contested uses and would deploy the technology only within legal boundaries.

In her ruling, Judge Lin suggested that the government’s actions appeared more punitive toward Anthropic than aligned with its stated national security interests. “The record indicates that Anthropic is facing repercussions for criticizing the government’s contracting stance in the media,” she wrote, finding that penalizing Anthropic for publicly criticizing the government’s contracting practices constitutes unlawful First Amendment retaliation.

Spokesperson Danielle Cohen said Anthropic was pleased with the court’s decision and emphasized the company’s commitment to working productively with the government to ensure AI is used safely and beneficially for all Americans. Anthropic’s designation as a supply-chain risk, made under a government-procurement statute intended to safeguard military systems from foreign sabotage, marks the first time a U.S. company has been publicly designated in this way.

Anthropic’s lawsuit, filed on March 9, challenges the legality and factual basis of the decision, arguing that it lacks evidentiary support and contradicts the military’s previous praise for Claude. The Justice Department countered that Anthropic’s refusal to lift restrictions could create operational uncertainty for the Pentagon and potentially disrupt military systems. The government contends that the designation resulted from Anthropic’s refusal to accept contractual terms, not from its stance on AI safety.

Anthropic also has a separate lawsuit pending in Washington over another Pentagon supply-chain risk designation, one that could bar the company from civilian government contracts as well.
