OpenAI is tightening the fine print on its new Defense Department agreement after signing up to deploy its AI models on classified government cloud networks.
CEO Sam Altman says OpenAI is working with the Pentagon to add language that makes the company’s “principles very clear.” A key new limitation: the Defense Department has affirmed that OpenAI’s services will not be used by Defense intelligence agencies (such as the NSA) unless OpenAI and the government negotiate a follow-on contract modification.
In plain terms: classified use is one thing; intelligence-agency use is another — and it will require a new set of terms.
## What’s being added to the agreement
Altman framed the update as a clarification designed to prevent “scope creep”:
- OpenAI’s tools can be used under the existing Pentagon deal.
- But if intelligence agencies want access, it can’t just happen automatically through the current arrangement.
- It would require a separate modification to the contract.
That’s a meaningful boundary because “Defense use” can cover a wide range of activities — from logistics and planning to analysis and communications. Intelligence use raises sharper concerns around surveillance, sensitive targeting workflows, and constitutional limits.
## Why OpenAI is doing this now
OpenAI’s Pentagon pact landed in the middle of a heated debate about how far military customers should be allowed to push commercial AI tools — and whether private AI labs can enforce guardrails at all.
By adding explicit language about intelligence agencies, OpenAI is trying to do three things at once:
- Limit ambiguity: reduce the chance that different parts of government interpret the agreement differently.
- Prevent mission creep: avoid sliding from “classified support tools” into “intelligence operations” by default.
- Protect its safety stance: keep space to reassess higher-risk uses (and impose extra controls) before expanding the scope.
## What this means for government AI going forward
This is a signal to every AI company selling into national security: contracts are becoming the battleground — not public statements, not “principles” pages, but the actual agreement text.
Expect future defense AI deals to look more like tightly structured frameworks, with explicit boundaries around:
- which agencies can use the tools
- what categories of work are off-limits
- what requires separate approvals
- and what triggers termination or renegotiation
## Bottom line
OpenAI isn’t walking away from defense work — it’s drawing a clearer map of where its technology can go under the current deal, and where it can’t go without a new negotiation.
