Anthropic Sues to Stop Pentagon “Blacklist” as AI Guardrails Turn Into a National Security Fight

Anthropic just escalated its standoff with the U.S. military into open legal warfare.

On Monday, the maker of Claude filed a lawsuit seeking to block the Pentagon from branding it a national security “supply-chain risk”—a label that can effectively shut a company out of defense work and pressure contractors to cut ties. Anthropic argues the move is unlawful, ideologically motivated, and a direct punishment for the company’s refusal to remove safety limits on how its AI can be used.

This is no longer a quiet procurement dispute. It’s becoming a defining case about who controls the boundaries of powerful AI: the government that wants maximum flexibility, or the companies that build the models and insist some uses are off-limits.


What Anthropic is asking the courts to do

Anthropic’s main case—filed in federal court in California—asks a judge to undo the Pentagon’s designation and block agencies from enforcing it.

Anthropic also filed a second case in the U.S. Court of Appeals for the D.C. Circuit, challenging a separate legal authority the government invoked—one that could broaden the restrictions beyond the Pentagon and into the wider civilian federal government after an interagency review.

In short: Anthropic is trying to stop the label from hardening into a government-wide quarantine.


Why the Pentagon moved against Anthropic

At the heart of the dispute are two “non-negotiable” lines Anthropic says it will not cross:

  1. No use of its models in fully autonomous lethal weapons without meaningful human oversight
  2. No use for mass domestic surveillance of Americans

The Pentagon rejects both limits: it insists the military must retain full flexibility to use AI for “any lawful purpose,” and that national security can’t be constrained by a private company’s terms of service.

That clash—“lawful flexibility” vs. “safety red lines”—is the spark that lit the fire.


The timeline: from negotiations to an all-out ban

Anthropic frames the conflict as building over months:

  • The dispute traces back to negotiations around the Pentagon’s GenAI platform, where Anthropic says the government demanded Claude be available for “all lawful uses.”
  • Anthropic says it offered broad cooperation, but refused to remove the two red lines above.
  • On Feb. 24, Defense Secretary Pete Hegseth reportedly met with CEO Dario Amodei and presented an ultimatum: comply within days or face severe consequences, including being pushed out of the defense supply chain.
  • On Feb. 27, President Trump posted a directive ordering federal agencies to stop using Anthropic’s technology.
  • Shortly afterward, Hegseth publicly declared Anthropic a “supply-chain risk” and signaled contractors should not conduct commercial business with it.
  • Multiple agencies moved quickly to terminate or cut ties with Anthropic.

Anthropic says this sequence shows retaliation, not a neutral security determination.


Anthropic’s legal claims in plain language

Anthropic is making several core arguments:

  • First Amendment retaliation: it says the government punished the company for expressing views about AI safety and the limits of its own technology.
  • Due process violations: it argues it was effectively blacklisted without proper notice or a meaningful chance to respond.
  • Administrative Procedure Act violations: it says the designation was arbitrary, lacked evidence, and didn’t follow required procedures.
  • Presidential overreach: it claims Trump’s directive exceeded legal authority.

The company is essentially telling the court: You can’t use the machinery of the federal government to punish a firm because it won’t say yes to everything.


The unusual support: AI workers step in

A notable twist: a group of researchers and engineers from OpenAI and Google submitted a supporting legal brief in their personal capacities. Their argument isn’t “Anthropic is perfect.” It’s that punishing one lab for publicly debating AI risk could chill the entire industry’s willingness to speak openly about safety, exactly when that debate is most needed.


What this fight is really about

Strip away the personalities and the press releases, and this case is about a question that’s going to define the next decade:

When AI becomes strategic infrastructure, who sets the rules—vendors or the state?

If the government wins decisively, it signals a future where “AI safety” becomes secondary to national security discretion, and vendors may have to accept broader military use to keep contracts.

If Anthropic wins, it strengthens the precedent that AI companies can set enforceable boundaries—even when the customer is the U.S. government.