Anthropic Says Chinese AI Firms Used Claude to Boost Their Own Models — and It’s Turning Into a Bigger Fight Over AI Security

Anthropic has accused three Chinese AI companies of using its Claude chatbot to extract capabilities and improve their own models, a charge that pushes the AI race into a new phase: no longer just competition over chips and talent, but competition over model outputs themselves.

According to Reuters, Anthropic said DeepSeek, Moonshot, and MiniMax generated more than 16 million interactions with Claude using roughly 24,000 fake accounts, allegedly violating Anthropic’s terms of service and regional access rules. Anthropic said the firms used a method known as distillation—training a weaker model on the outputs of a stronger one.

What this means in plain English

Distillation isn’t inherently illegitimate. AI companies use it all the time to build smaller, faster, cheaper versions of their own models. The problem Anthropic is pointing to is who is doing the distilling and whose outputs are being harvested. If a rival is systematically pulling answers from your model at scale to train theirs, the fight stops looking like normal benchmarking and starts looking like capability extraction.
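To make the mechanism concrete, here is a minimal sketch of what "training a weaker model on the outputs of a stronger one" looks like in practice. Everything in it is hypothetical: the two prompt/response pairs stand in for outputs harvested from a commercial API, and the tiny character-level student stands in for a real model. A production pipeline would collect millions of responses and fine-tune a full language model on them, but the objective is the same, teach the student to imitate the teacher's answers.

    # Minimal, illustrative sketch of sequence-level distillation: a small
    # "student" language model is fine-tuned on text a stronger "teacher"
    # produced. The teacher outputs below are hard-coded stand-ins; in the
    # scenario described in the article they would be responses harvested
    # from a commercial API at scale.
    import torch
    import torch.nn as nn

    # Toy "dataset" of (prompt, teacher_response) pairs. Invented content.
    pairs = [
        ("What is distillation?", "Training a smaller model on a larger model's outputs."),
        ("Name one risk.", "Capability extraction through ordinary API usage."),
    ]

    # Character-level vocabulary built from the data, to keep the example tiny.
    text = "".join(p + r for p, r in pairs)
    vocab = sorted(set(text))
    stoi = {ch: i for i, ch in enumerate(vocab)}

    def encode(s):
        return torch.tensor([stoi[c] for c in s], dtype=torch.long)

    class StudentLM(nn.Module):
        """A deliberately small next-character model standing in for the student."""
        def __init__(self, vocab_size, dim=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, idx):
            h, _ = self.rnn(self.embed(idx))
            return self.head(h)

    student = StudentLM(len(vocab))
    opt = torch.optim.AdamW(student.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Imitation objective: predict each next character of the teacher's answer,
    # conditioned on the prompt plus the answer so far.
    for epoch in range(100):
        total = 0.0
        for prompt, response in pairs:
            seq = encode(prompt + response)
            inputs, targets = seq[:-1].unsqueeze(0), seq[1:]
            logits = student(inputs).squeeze(0)
            loss = loss_fn(logits, targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
            total += loss.item()
        if epoch % 20 == 0:
            print(f"epoch {epoch}: loss {total / len(pairs):.3f}")

The point of the sketch is that nothing in this loop requires access to the teacher's weights. Ordinary API responses are enough, which is exactly the exposure described next.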

That’s why this story matters beyond one company complaint. It highlights a core vulnerability in frontier AI: even if you protect weights, data centers, and chips, your model can still leak value through everyday usage if access controls are weak or abuse detection lags.
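The article does not say how Anthropic actually spotted the campaigns, so the following is purely an illustrative sketch of the kind of coarse usage signal a provider might start from: per-account request volume that dwarfs a typical user's. Real abuse detection would weigh far more than volume (prompt patterns, account clustering, payment and network fingerprints), and every name and number here is invented.

    # Hypothetical illustration only: the article does not describe how
    # Anthropic detects abuse. This shows one naive signal a provider might
    # check, namely accounts whose request volume dwarfs a typical user's.
    from collections import Counter
    from statistics import median

    # Invented usage log of (account_id, prompt) events.
    events = [(f"user_{j}", "hello") for j in range(200) for _ in range(3)]
    events += [("bulk_account", f"question {i}") for i in range(5000)]

    requests_per_account = Counter(account for account, _ in events)
    typical = median(requests_per_account.values())  # ordinary users' volume

    # Flag any account two orders of magnitude above the median.
    flagged = {a: n for a, n in requests_per_account.items() if n > 100 * typical}
    print(flagged)  # {'bulk_account': 5000}

Even this toy heuristic hints at the trade-off raised in the questions further down: set the threshold too low and legitimate heavy users get flagged; set it too high and a patient, distributed campaign slips through.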

Anthropic’s bigger argument: this supports tighter chip export controls

Anthropic didn’t just make a technical accusation—it used the moment to argue for stronger export controls on chips, saying such restrictions can reduce both direct training capacity and the scale of improper distillation. Reuters also notes the company framed illicitly distilled models as a potential national security risk, especially if they spread without safeguards.

That’s a notable escalation in framing. The debate is no longer only “can rivals copy capabilities?” It’s also “what happens if copied capabilities spread faster than safety controls?”

The most revealing part: what Anthropic says each firm was targeting

Reuters reports Anthropic described the campaigns in unusually specific terms:

  • DeepSeek allegedly targeted reasoning capabilities across tasks and the creation of censorship-safe alternatives for policy-sensitive queries.
  • Moonshot allegedly focused on agentic reasoning, tool use, coding, and data analysis.
  • MiniMax allegedly targeted agentic coding, tool use, and orchestration, with Anthropic saying it detected the effort while it was still active.

Anthropic also claimed MiniMax pivoted its traffic to a newly released model within 24 hours, suggesting a highly adaptive extraction strategy rather than passive testing.

Why this is a bigger industry story, not just an Anthropic story

This lands shortly after OpenAI warned U.S. lawmakers that Chinese AI firm DeepSeek was targeting top U.S. AI companies to replicate models and use them for training, according to Reuters. Put together, the message from major U.S. AI labs is getting sharper: they see model distillation by competitors as a real strategic threat, not a hypothetical edge case.

That raises difficult questions for the whole AI sector:

  • How do labs distinguish normal heavy usage from extraction campaigns?
  • How much rate limiting and identity verification is enough without killing developer adoption?
  • Will cloud/API access become more restricted by geography, ownership, or sector?
  • And if outputs become the new attack surface, what does “AI security” even mean now?

The business context matters too

Reuters notes Anthropic said it recently raised $30 billion and is now valued at $380 billion. At that scale, claims like this are not just technical disclosures—they are also policy signals, investor signals, and competitive signals.

In other words, this is partly about misuse detection, but it’s also about defining the rules of the AI market while those rules are still being written.

Bottom line

Anthropic’s accusation marks another step in the AI race’s shift from a normal software rivalry toward a high-stakes security contest. If frontier labs are right that large-scale distillation campaigns are growing more sophisticated, then the next AI battle won’t be only about who builds the best model.

It will be about who can protect their model’s capabilities in the wild—without closing off the very ecosystems that made these tools valuable in the first place.
