US court blocks Trump administration's ban on Anthropic AI in Pentagon saga
Kyiv • UNN
A judge suspended the Pentagon's decision to sever ties with Claude's creator, calling the ban illegal retaliation for Anthropic's refusal to drop its restrictions on military uses of its AI.

Anthropic PBC has secured a US court ruling blocking the Trump administration's ban on the government's use of the company's artificial intelligence technology, after the creator of the Claude chatbot said the move could cost it billions of dollars in lost revenue, UNN reports with reference to Bloomberg.
Details
On Thursday, US District Court Judge Rita F. Lin issued a preliminary injunction, suspending the administration's plan to sever all ties with Anthropic while the case proceeds in federal court in San Francisco. She stayed the ruling for seven days to give the government an opportunity to appeal.
Earlier this month, the company filed a lawsuit challenging the Pentagon's designation of Anthropic as a threat to the US supply chain, escalating a bitter dispute over safeguards on military use of artificial intelligence technologies.
Related: Claude overtakes ChatGPT after Pentagon deal collapse (07.03.26)
The startup wanted assurances that its AI would not be used for mass surveillance of Americans or the deployment of autonomous weapons, while the government, citing national security, argued it could not accept any restrictions.
Lin questioned the validity of the ban, stating that it did not appear to be aimed at national security interests.
"If it's about the integrity of the operational chain of command, the War Department could simply stop using Claude," the judge wrote. "Instead, these measures appear to be aimed at punishing Anthropic." Such a move, she said, "is a classic illegal act of retaliation, violating the First Amendment."
A Pentagon spokesperson did not immediately respond to a request for comment sent after business hours. Emil Michael, Deputy Under Secretary of Defense for Research and Engineering, called the decision a "disgrace" in a post on X late Thursday and said there were "dozens of factual errors" in the ruling.
In a statement, Anthropic welcomed the judge's decision. "While this case was necessary to protect Anthropic, our customers, and our partners, we remain focused on working productively with the government to ensure all Americans benefit from safe and reliable AI," the company said.
Anthropic claims it is being excluded from government contracts for disagreeing with the administration, and argues that the legal principles at stake affect every federal contractor whose views the government dislikes. The Trump administration has vowed to wage a legal battle to exclude Anthropic from all US government agencies.
During hearings before Lin this week, a government lawyer stated that trust is a key component of the military's relationship with the companies that serve it, and that Anthropic destroyed that trust during contract negotiations by attempting to dictate to the Pentagon its policies on the use of artificial intelligence technologies.
The lawyer argued that the government was concerned about the risk of future sabotage by Anthropic, including changes to the AI software that the government procures from the company.
However, in her ruling, Lin stated that the US Department of Justice had no "legitimate basis" to believe that Anthropic's firm stance on restrictions on AI technology use could lead the company to "become a saboteur."
At the hearing, Anthropic's lawyer noted that the Pentagon can test any AI model before its deployment, and that Anthropic has no ability to stop the model's operation, change its behavior, disable it, or see how it is being used by the military.
As part of the litigation over the ban, Anthropic also filed a separate challenge in the Washington, D.C. Court of Appeals, focused on the law governing supply-chain risk mitigation procedures in federal procurement. In that filing, the company argued that the US War Department exceeded its authority by taking actions that were "arbitrary, unreasonable, and constituted an abuse of discretion."
Related: Pentagon may have used AI tool in operation against Maduro - WSJ (14.02.26)